Caution: Tulpa

I’ve recently made a startling discovery: Maybe there really is a tulpa in my head.

First, I’ve said for many years that my brain is broken. (Yes, I am aware I have terrible self-talk.) Here’s why I call it broken: I am literally unable to NOT see problems. I notice an endless onslaught of things that, in my opinion, could be improved. I don’t mean, “that sucks, I wish it could be better.” No, I mean, “that sucks and it’s obvious this way would be better and if you’d just let me get started . . . ” Adderall might help, I suppose.

Everyone loves that I get stuff done, and try to make things better. But unless you have this same problem, I’d imagine it’s hard to understand how this is debilitating. I am aware that this is recursive—I see my own brain as a broken process that I feel I should repair. All I can say is that you should be happy, and thank your fave deity if that’s your thing, that you don’t understand. Because to understand is to have the problem, and you do. not. want. this. problem.

Second, I’ve also said for many years that, “the remainder cannot go into the computer.” I’m referring to an endless source of struggle in programming and systems administration: computers are exact, and the real world—with its real people, real problems, and things which really are subjective shades of gray—is not. So programmers and systems administrators factor reality into the computers, in the mathematical sense of finding factors which, when multiplied, give you back the original. And when factoring reality, there is always a remainder. That remainder shows up when you find your software does something weird. That could be a mistake, but I tell you from experience, it is more often some edge case. Some people had to make choices when they factored.
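If the mathematical sense of that isn’t familiar, here’s a minimal sketch in terms of integer division (the numbers are invented, purely illustrative):

```python
# Division with a remainder: the divisor and quotient are the "factors"
# the computer gets to keep; the remainder is whatever is left over and
# has to live somewhere else.
quotient, remainder = divmod(17, 5)  # 17 = 5 * 3 + 2
print(quotient, remainder)  # → 3 2
```

Unless the divisor happens to divide evenly, something is always left over; the claim here is that reality’s divisors never divide evenly.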

The result of that second point is that I’ve spent the majority of my life factoring, (and “normalizing” for you math geeks who know about vector spaces,) problems into computers. And then trying to live with the remainders that didn’t go into the computer. The remainders are all in my head. Or on post-it notes on my wall, (back in the day.) Or the remainder is some scheduled item reminding me to check the Foobazzle process to ensure the comboflux has not gone frobnitz. To do that I had to intentionally be pragmatic and logical. And the really scary part is I also learned that the best way to do all of that was to talk to myself—sometimes literally, bat-shit crazy, out loud, but usually very loudly inside my own mind—to discover the smallest, least-worst remainder that I could manage to live with.

What if those two things were sufficient to create a Tulpa? (I am serious.)

I think there’s a Tulpa in here! (My title is the sign on the front gate.) It is absolutely pragmatic. It knows an alarming amount of detail about things I’ve built, (or maintained, or fixed.) It is cold and calculating. It is terrified that it will forget about one of those details, 2347 will happen, and everyone will run out of ammunition defending their canned goods from the roaming bands of marauders. I definitely don’t “have” the Tulpa. It’s more like discovering there’s an extra person living in your house. Although I don’t hold much hope of banishing this Tulpa, Yoda does make a good point if I’m going to try. So, I should definitely give it a name.

Maybe, Sark?

That is an intriguing idea indeed! Sark, what do you think?


Cole’s law

Hofstadter’s Law – “It always takes longer than you expect, even when you take into account Hofstadter’s Law.”

~ “rogersbacon” from,

It’s part 3, and it is a nifty collection of serious and whimsical laws. However, I doubt that Stigler is the originator of Stigler’s Law. Sometimes the only reason I write this stuff is to see if I can entice you to go read the thing to which I’ve linked.

But more often I do have a point. I’m wondering, in this case, how much of our urge to create, and our delight in such pithy Laws as Dilbert’s, comes simply from our mind’s desire to find patterns. There are a slew of cognitive biases, (confirmation bias springs to mind as fitting the pattern of my example,) which feel like they arise from pattern matching gone overly Pac Man.


Cognitive biases

Cognitive biases are systematic patterns of deviation from norm or rationality in judgment, and are often studied in psychology and behavioral economics.

~ List of cognitive biases from,

While that may seem blasé, it’s worth a look.

…ok, back? Great.

Now gape dumbfounded at the majesty of a modern image format: SVG, mixing a magnificent design with infinite scalability, dynamic styling, and clickable links. Just click on this already:

Hey also, as far as I can tell, the word “blasé,” correctly written in English, does include the diacritical mark. I wouldn’t have believed you if you’d told me there were any properly English words with accents, but it seems this has been a thing for the last century or so! (to wit, )


Technology in my formative years

I was exceptionally lucky to be born into this moment. I got to see what happened, to live as a child of acceleration. The mysteries of software caught my eye when I was a boy, and I still see it with the same wonder, even though I’m now an adult. Proudshamed, yes, but I still love it, the mess of it, the code and toolkits, down to the pixels and the processors, and up to the buses and bridges. I love the whole made world. But I can’t deny that the miracle is over, and that there is an unbelievable amount of work left for us to do.

~ Paul Ford, from

This hit me right in the feels. I think I’ve had a larger share of the upsides and a smaller share of the downsides than Ford. But this feels like a good overview of my formative years in tech.

Somewhere I read, “the messiness cannot go into the computer.” That summarizes what I believe is the cause of my neurosis: I’ve spent so many years now taking real-world problems, and real-world interactions with people, and factoring them into computers—and I’m left with the messy parts of the problem stuck in my mind. I’m not sure you can even understand what I’m talking about until you’ve spent 30 years, daily, working on refactoring the fuzzy of the real world into the binary of the computer world. Maybe I can reword it this way:

Computers and brains are very different. I’ve spent decades using my brain to understand computers, work with computers, and program computers.

What if that has fundamentally changed my brain?

How can I possibly pretend that “what if” is not utter bullshit…

That has fundamentally changed my brain.


What feels right is probably wrong

This leads me to the point I wish above all to emphasize, namely, that when a person has reached a given stage of unsatisfactory use and functioning, his habit of ‘end-gaining’ will prove to be the impeding factor in his attempts to profit by any teaching method whatsoever. Ordinary teaching methods, in whatever sphere, cannot deal with this impeding factor, indeed, they tend actually to encourage ‘end-gaining.’ The instruction given to the golfer of our illustration to keep his eyes on the ball is typical of the kind of specific instruction given by teachers generally for the purpose of eradicating specific defects in their pupils, and, as we have seen in this case, this instruction was a stimulus to him to try harder than ever to gain his end, and so to misdirect his efforts worse than ever.

~ FM Alexander, The Use of the Self, pp. 66–67, 1932 (emphasis added)

I think there’s a lot more context necessary for that to make sense. One could go read the book; it’s small. But set that aside for the moment.

Alexander raises the important point that what feels right may in fact be wrong. So the harder I try to do something correctly, by trying to do what feels right, the more likely I am to reinforce doing what is wrong. This started to make more sense once I understood that the Brain is a Multi-layer Prediction Model. Once something is modeled incorrectly—when I move this way, it feels right—it’s going to be really difficult to change that model.