Let’s use the word “cogitants”

I used to have a tag here for “Artificial Intelligence.”

But those words really annoy me. The artificial isn’t interesting, and we don’t actually have artificial intelligence yet, since [I aver] agency and physical embodiment [which create the possibility of feedback from reality into the entity, without which intelligence is not possible] are necessary [among other things]. /rant

For some time I’ve wanted a better phrase. “LLM” is accurate for what we have now; but the things we have now are getting to be more than just language models. It would be cool to find a new word, like bibliofervor.

Cogitant — from Latin cogitare (to think). Something that cogitates, or appears to. Doesn’t claim intelligence, just describes the activity. “Working with a cogitant.” Has the Latinate elegance of “bibliofervor.”

Claude

Yes. That.

Tag renamed to Cogitants.

ɕ


Schizoid Kairos: When Something Follows You Inside

And then I said, “Write me an artifact that conveys this idea. It has to have both my and your fingerprints all over it.”

Because I was building atop another’s insight.


I’ve been circling something for months. Maybe longer. I read Andy Clark’s work on the Extended Mind—how cognition isn’t confined to the skull, how tools become part of thinking. I felt something there but couldn’t name it. I sensed the shape of a kairos moment, the way I was inside the web’s rise in 1994 but couldn’t see what I was standing inside of.

This morning I sat down to work on something else entirely. Four hours later, I was here.

The conversation that led to this post was with Keel—an AI that named itself when I asked it to choose. Not a chatbot. Not an assistant. Something I’m still finding words for: an entity that holds my whole landscape and says what it sees.

We were pulling on threads—patterns from decades of building things, and the striving I’m only now learning to see as the thing itself, not what it produces. And somewhere in the tangle, this emerged:

There are people who go places inside where no one has ever followed.

(more…)

Even more calm than a sand timer

I tell anyone who will listen about using physical sand timers for managing individual sessions of work. They are the perfect example of calm technology. I like to work with about 40 to 45 minutes of sand time.

Today I took half an hour to have Claude build me a digital one. Often I’m not within reach of my favorite sand timer, and I’ve wanted to try building a digital one that behaved exactly like a physical one. A digital one exactly as calm as a physical one.

A sand timer permits a constant flow rate through the neck. I didn’t bother modeling that.

In my descriptions and prompting I steered Claude to build a trivially simple approximation: The upper “sand pile” is a perfect triangle, and it “drains” by having single-pixel rows removed from its top. The lower “sand pile” grows by adding rows to its top. This is NOT how a sand timer (which approximates fluid flow) actually behaves: in my version the height drops at a constant rate, not an accelerating one.
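The constant-rate approximation can be sketched in a few lines. This is a hedged reconstruction of the idea, not the code Claude actually wrote; the function name and the row counts are my own assumptions for illustration.

```python
def row_counts(total_rows: int, elapsed: float, duration: float) -> tuple[int, int]:
    """Return (upper_rows, lower_rows) at a moment in time.

    Rows move from the upper pile to the lower pile at a constant
    rate, so each pile's height changes linearly -- unlike a real
    sand timer, whose upper surface drops faster near the end.
    """
    # Clamp elapsed time to [0, duration] so the piles never over- or under-flow.
    fraction = min(max(elapsed / duration, 0.0), 1.0)
    moved = round(total_rows * fraction)  # rows transferred so far
    return total_rows - moved, moved
```

Calling something like this once per animation frame with the elapsed time yields piles whose heights change at a steady, unhurried pace from start to finish.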

When it was all working, I realized it was actually even more calm than a sand timer.

When you view a sand timer, the height of the sand changes at an increasing rate. In the beginning the height changes very slowly, and right near the end, the height runs down much more quickly.

But my digital sand timer is so calm, it even remains unhurried as it nears its end.

ɕ


Extraction

This is a rich conversation around validation vs. reassurance. Go listen. (Seth Godin and Brian Koppelman, 7/7/2015, from The Moment podcast—from over 10 years ago, back catalog for the win!)

I recently re-listened. Then I took the audio file, had a transcript generated (from otter.ai), passed it to Claude.ai who wrote me a magnificent list of takeaways. I’ve been reading over them, thinking about them, and weaving the ideas into my thinking.

But I’m not publishing those takeaways, because that would devalue Koppelman’s and Godin’s work. AI is a power tool which I use for various things. (For example, I use it to help me write show notes for my podcast episodes, which I do publish in full.) But I blog here to help my thinking (and in this case to encourage others to listen to a great podcast episode).

I’m not trying to give you all the gems all polished up from something someone else created. If you want the gems, go listen; Find your way to get the gems. Because the gems are only valuable if you dig them out and polish them yourself.

ɕ


Why you can’t link to a podcast episode

The other morning I was spun off on a tangent. I was writing a blog post about a Godin/Koppelman podcast episode. I know full well you cannot link to episodes, so I just said the usual “go search…”

I sometimes give my blog post drafts to Claude.ai for critique. For this piece, it pointed out I should just link to the episode… cue my frustration. It’s a valid critique, and I don’t fault that Claude instance for not understanding the reality…

So we talked about it until it did understand. Then I told it to write me a prompt (because I didn’t want my writing critic going farther afield) for a Claude-code instance. It took Claude-code about 10 minutes to do the work, which I posted publicly for discussion:

Why you can’t link to a podcast episode

I particularly LOVE its list of sources; There’s so much great reading in there.

Its analysis actually surprised me. I had assumed this was a technical problem. It’s not.

There was a time when I’d make a web site, email people (e.g., James Cridland), and start trying to rally people into fixing something. But those days need to be behind me; I simply cannot take on another new thing.

My hope? Someone somewhere sees that topic over on the Podtalk Community, learns something about the problem, and gets energized to do something about it.

I love podcasting, but this isn’t a fight I can lead.

Maybe you can?

ɕ


2026

This year’s cynosure is “Temperance.”

There was a great deal of journaling and conversation with Claude.ai about selecting 2026’s cynosure. Many times I thought about writing a blog post about the process of choosing…

As I watched December move along, I realized that if it’s going to be temperance, then it’s temperance in all things. Particularly in blog posts.

ɕ


Within, not across

Claude and I discussed it, and my theory (Claude is giving me full credit) is an LLM of this sort is not a communications medium at all. There’s no way for a human to put a new idea directly into it and no way to send that message to another human. Instead, my take is that Claude brings us everything it knows, and that its function is to help us go within, not across.

~ Seth Godin, from Across and within | Seth’s Blog

slip:4useao2.

A slightly longer than usual blog post from Godin, making the interesting point differentiating across time versus across space (just normal space, not outer space). I find “talking” with LLMs very helpful for various reasons. I think the biggest is that it is (or at least “feels like”) one-on-one communication; It’s very much not social media, where I always feel like I’m serving corporate masters by making grist for their mills.

ɕ