A package deal

I often find myself drawn into looking at what other creatives are looking at; I find interest in that second degree of separation. I may be interested in a particular creative person, but only if I’m interested in their specific work. And with nearly every creative person I encounter, I’m always asking (literally, or in my internal dialog): Where did they get that idea? What were the inspirations that led to that composition? I suppose that’s right next to being interested in the creative process itself—but that’s not quite it. I don’t really want to know how they do what they do. I want to know who they are, and why they do what they do.

The key thing is that unique minds have to be accepted as a full package, because the things they do well and that we admire cannot be separated from the things we wouldn’t want for ourselves or look down upon.

~ Morgan Housel from, https://collabfund.com/blog/wild-minds/

I think it was Homer (Simpson, I mean) who said, just because you are unique, doesn’t mean you are useful. That’s too harsh by half. It’s not necessary that one be useful (but it’s nice if you want to be able to, say, buy food or put a roof over your head). I want to push back against ol’ Homer there and amend that to be: Just because you are unique, doesn’t mean people will understand you.



I’m intrigued by the word hunger. It can convey so much more than the simple hunger for food. Its power begins to show when deployed as hunger for nourishment— Hunger for freedom— Hunger for power. For as long as I can remember I’ve struggled with body image. I feel like that’s a better way to convey the feeling instead of a more surface-level, “struggled with weight.” Only a precious few times (in my 50+ laps around ol’ Sol) have things around me lined up, juust right, and I’ve found myself in a shape to my liking; found myself in a shape that enabled me to do what I wanted.

Hunger isn’t in your stomach or your blood-sugar levels. It’s in your mind—and that’s where we need to shape up.

~ Michael Graziano from, https://aeon.co/essays/hunger-is-psychological-and-dieting-only-makes-it-worse

The word I’ve been meditating on recently is ease. To avoid hunger (not just hunger for food, all the hungers) I must be in ease. Easy to say, impossible to do, but just maybe it’s be-able.


Acoustic ecology

I love a scenic overlook, but give me a few minutes and I’ll be sitting with my eyes closed listening to the scenic overlook. I once dove in the ocean at the edge of the continental shelf—it’s a long story—but the sense of lack of place when you gaze into the abyss is unsettling. Sitting and listening to a vast landscape is the closest I’ve ever come to that. (And without feeling like complete panic is right behind the veneer of my thoughts.)

The World Soundscape Project worked from the basis that any given soundscape (or sonic environment) is a representation of how that environment is perceived by listeners within it. Soundscapes are themselves influenced by human behaviours. As a combination of all sound within a particular location, soundscapes may therefore comprise natural sounds as well as those from social and technological sources. As these sounds change, so does the ecology of the soundscape.

~ Neil Clarke from, https://earth.fm/details/acoustic-ecology-and-the-world-soundscape-project/

Soundscapes are amazing. I’ve always been fascinated by sound, and how our aural sense is a very old sense; it is connected to a much older part of our brain. Sound is very important to our sense of being. We hear in the womb, and at twilight our hearing recedes last to gracefully ring down the final curtain.


Never say never

Is there a term for applying the Socratic method on oneself? Maybe, autosocraticism? Not simply self-examination or self-inquiry, but rather when you find yourself speaking with someone and realize you’ve just deployed the Socratic Method on yourself? Because this happens to me. I’m explaining something I’m thinking about, and I realize I actually don’t understand what I’m thinking about. (This is very close to “rubber duck debugging” where you can sometimes find the source of a problem by explaining it to a rubber duck. Yes, really.)

Also, a pull-quote is a self-quotation; a selection from the thing itself, presented earlier to suggest reading on is worthwhile.

And of course, I also need the past tense verb-form of that noun, just so I can write the sentence I really want to start with:

The other day I autosocraticized myself into realizing I had no freakin’ clue what the difference is between a pull-quote and a blockquote.

All of which confirms the (usually unspoken) truism about humans – we’re often wrong but never in doubt. We’re as sure of the future of our relationships as we are that 2+2=4.

~ Bob Seawright from, https://rpseawright.wordpress.com/2018/06/23/proof-negative-2/

Never say never. I’m often wrong and frequently in doubt.

Also, a pull-quote is a self-quotation; a selection from the thing itself, presented earlier to suggest reading on is worthwhile. Versus a blockquote: something quoted from another source, but which is too large to be dropped inline wrapped in quotation marks.



When we consider consciousness, a number of questions naturally arise. Why did consciousness develop? What is consciousness good for? If consciousness developed to help us plan and act for the future, why is consciousness so difficult to control? Why is mindfulness so hard? And for that matter, if our actions are under our conscious control, why is dieting (and resisting other urges) so difficult for most of us?

Why does it appear that we are observers, peering out through our eyes at the world while sitting in the proverbial Cartesian theater? Why do we speak, in William James’s words, of a “stream of consciousness”? Can we perform complicated activities (such as driving) without being consciously aware of it?

Are animals conscious (and if so, which ones)? Are there developmental, neurologic, or psychiatric disorders that are actually disorders of consciousness?

There have, of course, been many answers to these questions over the last 2500 years. We hope to provide new answers to these and a number of related questions in this paper.

~ Andrew Budson, et al from, https://journals.lww.com/cogbehavneurol/Fulltext/2022/12000/Consciousness_as_a_Memory_System.5.aspx

A longer pull-quote than usual for me. But it’s from a 30,000 word article. o_O

That list of questions reads like the Table of Contents from the Owner’s Manual that my body didn’t come with. It’s a big deal that there might be an answer to just one of them, let alone the claim in the last sentence, “We hope to provide new answers to these and a number of related questions.”

Having now read some of those plausible answers—including rebuttals and improvements to some others’ answers to those questions—my takeaway is: Oddly, I am now less interested in those questions.


The gaming problem

As the power of AI grows, we need to have evidence of its sentience. That is why we must return to the minds of animals.

~ Kristin Andrews and Jonathan Birch from, https://aeon.co/essays/to-understand-ai-sentience-first-understand-it-in-animals

This article ate my face. I was scrolling through a long list of things I’d marked for later reading, I glanced at the first paragraph of this article… and a half-hour later I realized it must be included here. I couldn’t even figure out what to pull-quote because that requires choosing the most-important theme. The article goes deeply into multiple intriguing topics, including sentience, evolution, pain, and artificial intelligence. I punted and just quoted the sub-title of the article.

The biggest new-to-me thing I encountered is a sublime concept called the gaming problem in assessing sentience. It’s about gaming, in the sense of “gaming the system of assessment.” If you’re clicking through to the article, just ignore me and go read…

…okay, still here? Here’s my explanation of the gaming problem:

Imagine you want to wonder if an octopus is sentient. You might then go off and perform polite experiments on octopods. You might then set about wondering what your experiments tell you. You might wonder if the octopods are intelligent enough to try to deceive you. (For example, if they are intelligent enough, they might realize you’re a scientist studying them, and that convincing you they are sentient and kind would be in their best interest.) But you definitely do not need to wonder if the octopods have studied all of human history to figure out how to deceive you—they definitely have not because, living in water, they have no access to our stored knowledge. Therefore, when studying octopods, you do not have to worry about them using knowledge of humans to game your system of study.

Now, imagine you want to wonder if an AI is sentient. You might wonder whether the AI will try to deceive you into thinking it’s sentient when it actually isn’t. We know that we humans deceive each other often; we write about it a lot, and our deception is seen in every other form of media too. Any AI created by humans will have access to a lot (most? all??) of human knowledge and would therefore certainly have access to plenty of information about how to deceive a person, what works, and what doesn’t. So why would an AI not game your system of study to convince you it is sentient?

That’s the gaming problem in assessing sentience.


Oh crap now I think I have insomnia

My dad used to suffer from insomnia, holding imaginary meetings in his head late into the night. I’m the same way.

~ “AllAmericanBreakfast” from, https://www.lesswrong.com/posts/JK7KF9AWBpjbZqTDn/mental-nonsense-my-anti-insomnia-trick

I read that short article and now I think I have insomnia. Sometimes, anyway. Clearly I’m a hypochondriac though. But in all seriousness, the author suggests something that makes me—dare I say it?—almost hope I have trouble getting to sleep tonight so I can try it.


The why

People think of that complexity as an expression of our capacity for abstract thought. We believe our brains are so complex because of the wonders we can build in our minds. Make no mistake, we can build wonders in our minds but what we have neglected is that those wonders are boot strapped on top of motor control. The first purpose of the brain is to guide movement.

~ Rafe Kelley from, https://www.evolvemoveplay.com/the-why-of-movement-practice/

And the second purpose is to solve problems in the physical world. (How do I go over there? How do I avoid that danger? How do I get food?) To solve problems you need to be able to define what the problem is. You define any problem by imagining some desired state (I am over there. I have avoided that danger by running. I have eaten that food.) and then looking for options that can get you from the current state to the desired state. So it turns out that the better your imagination is, the better you can be at solving problems. Faced with endless options, your mind turns out to be really good at heuristics—making estimations in advance with limited knowledge (prejudice can be a good thing; assuming snakes are not friendly is an excellent heuristic). All of which makes possible the beauty and diversity of our lives. Fortunately, we have a capacity for reason atop all of that, which enables us to make choices so the possibility of beauty and diversity can be our reality. I digress.

Back to Kelley’s point, if the entire edifice of our minds is built upon that first purpose, what happens if we starve the mind of the physical engagement?


I’m not being hyperbolic

Why is it so difficult to make choices that we know will be best for us in the long run?

~ Peter Attia from, https://peterattiamd.com/hyperbolic-discounting-friend-and-foe-of-goal-achievement/

Sorry for the titular word play. This should be read foremost to understand exponential versus hyperbolic decay, and then to understand how to get your future self to do what your current self wishes. Attia explains it in the context of imagining future rewards. Using one to assess the value of future rewards makes actual sense; the other is how our brains actually work (because: survival drove evolution).

Snoring? No really, go read it. Because if you understand the two methods you can hack yourself by setting up your goals to play into your mind’s predilection to make the wrong value calculation. In effect, rather than set things up the way that makes sense (which frequently leads to failure, thanks to our brains), we set things up in a more complicated way to fake ourselves into getting where we want to go.
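The two curves are easy to play with yourself. Here’s a minimal sketch—my own toy numbers, not Attia’s—showing the key difference: a hyperbolic discounter’s preference between a smaller-sooner and a larger-later reward can flip as the rewards get closer, while an exponential discounter’s never does.

```python
# Exponential vs. hyperbolic discounting, as a toy comparison.
# All values and the rate k are illustrative, not from the article.

import math

def exponential(value, delay, k=0.05):
    # "Rational" discounting: a constant rate per unit of time.
    return value * math.exp(-k * delay)

def hyperbolic(value, delay, k=0.05):
    # How brains tend to behave: steep near-term drop, long flat tail.
    return value / (1 + k * delay)

# Choice: $50 at time t, versus $100 thirty days after that.
for t in (0, 30, 60, 90):
    small = hyperbolic(50, t)
    large = hyperbolic(100, t + 30)
    print(t, "prefer larger-later" if large > small else "prefer smaller-sooner")
```

Run it and you see the flip: when both rewards are far away, the larger-later wins; once the smaller reward is imminent (t = 0), it suddenly looks better. Swap in `exponential` and the preference never reverses—which is exactly the gap the goal-setting hack exploits.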


Geometry of thought

It’s really structure that I keep circling back to (note that word: circle). How do we structure our moving, changing thoughts and how do we structure the world we design and move and act in?

~ Barbara Tversky from, https://www.edge.org/conversation/barbara_tversky-the-geometry-of-thought

This article is a delightful deep dive into how movement and thought are interrelated. This is a topic near and dear to my heart. I once had the sublime experience of having a podcast guest say that he used to think to figure out how to move, but now he moves in order to think.


What is intelligence

What do we mean when we say the word “intelligence”? The immune system is the fascinating, distributed, mobile, circulating system that learns and teaches at the level of the cell. It has memory, some of which lasts our entire life, some of which has to be refreshed every twenty years, every twelve years, a booster shot every six years. This is a very fascinating component of our body’s intelligence that, as far as we know, is not conscious, but even that has to be questioned and studied.

~ Caroline Jones from, https://www.edge.org/conversation/caroline_a_jones-questioning-the-cranial-paradigm

I often think of my conscious self as the only part of me that really matters. But when I read articles like this one I’m reminded of the rider-and-elephant metaphor. A case can even be made that our entire body, consciousness, and societies at large are just very clever ways for all the non-human things that live in our guts to get moved around… a sort of human-body-as-spaceship, for microbes.


Caution: Tulpa

I’ve recently made a startling discovery: Maybe there really is a tulpa in my head.

First, I’ve said for many years that my brain is broken. (Yes, I am aware I have terrible self-talk.) Here’s why I call it broken: I am literally unable to NOT see problems. I notice an endless onslaught of things that, in my opinion, could be improved. I don’t mean, “that sucks, I wish it could be better.” No, I mean, “that sucks and it’s obvious this way would be better and if you’d just let me get started . . . ” Adderall might help, I suppose.

Everyone loves that I get stuff done, and try to make things better. But unless you have this same problem, I’d imagine it’s hard to understand how this is debilitating. I am aware that this is recursive—I see my own brain as a broken process that I feel I should repair. All I can say is that you should be happy, and thank your fave deity if that’s your thing, that you don’t understand. Because to understand is to have the problem, and you do. not. want. this. problem.

Second, I’ve also said for many years that, “the remainder cannot go into the computer.” I’m referring to an endless source of struggle in programming and systems administration: computers are exact, and the real world—with its real people, real problems, and things which really are subjective shades of gray—is not. So programmers and systems administrators factor reality into the computers, in the mathematical sense of finding factors which, when multiplied, give you back the original. And when factoring reality, there is always a remainder. That remainder shows up when you find your software does something weird. That could be a mistake, but I tell you from experience, it is more often some edge case. Some people had to make choices when they factored.
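The metaphor really is just long division. A toy illustration (the numbers are mine, obviously):

```python
# Factoring reality into a computer, as plain arithmetic: divide the
# messy quantity by what the software can represent, and you are left
# holding whatever doesn't fit.
reality = 17   # the messy real-world problem (illustrative number)
model = 5      # what the software can cleanly represent

factored, remainder = divmod(reality, model)
print(factored)   # 3 -- the part that went into the computer
print(remainder)  # 2 -- the part left over, living in someone's head
```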

The result of that second point is that I’ve spent the majority of my life factoring, (and “normalizing,” for you math geeks who know about vector spaces,) problems into computers. And then trying to live with the remainders that didn’t go into the computer. The remainders are all in my head. Or on post-it notes on my wall, (back in the day.) Or the remainder is some scheduled item reminding me to check the Foobazzle process to ensure the comboflux has not gone frobnitz. To do that I had to intentionally be pragmatic and logical. And the really scary part is I also learned that the best way to do all of that was to talk to myself—sometimes literally, bat-shit crazy, out loud, but usually very loudly inside my own mind—to discover the smallest, least-worst remainder that I could manage to live with.

What if those two things were sufficient to create a Tulpa? (I am serious.)

I think there’s a Tulpa in here! (My title is the sign on the front gate.) It is absolutely pragmatic. It knows an alarming amount of detail about things I’ve built, (or maintained, or fixed.) It is cold and calculating. It is terrified that it will forget about one of those details, 2347 will happen, and everyone will run out of ammunition defending their canned goods from the roaming bands of marauders. I definitely don’t “have” the Tulpa. It’s more like discovering there’s an extra person living in your house. Although I don’t hold out hope of banishing this Tulpa, Yoda does make a good point if I’m going to try. So, I should definitely give it a name.

Maybe, Sark?

That is an intriguing idea indeed! Sark, what do you think?


Cole’s law

Hofstadter’s Law – “It always takes longer than you expect, even when you take into account Hofstadter’s Law.”

~ “rogersbacon” from, https://www.lesswrong.com/posts/Mt3vtAQGnkA3hY5Ga/eponymous-laws-part-3-miscellaneous

It’s part 3, and it is a nifty collection of serious and whimsical laws. However, I doubt that Stigler is the originator of Stigler’s Law. Sometimes the only reason I write this stuff is to see if I can entice you to go read the thing to which I’ve linked.

But more often I do have a point. I’m wondering, in this case, how much of our urge to create, and our delight in such pithy Laws as Dilbert’s, comes simply from our mind’s desire to find patterns. There are a slew of cognitive biases, (confirmation bias springs to mind as fitting the pattern of my example,) which feel like they arise from pattern matching gone overly Pac Man.


Cognitive biases

Cognitive biases are systematic patterns of deviation from norm or rationality in judgment, and are often studied in psychology and behavioral economics.

~ List of cognitive biases from, https://en.wikipedia.org/wiki/List_of_cognitive_biases

While that may seem blasé, it’s worth a look.

…ok, back? Great.

Now gape dumbfounded at the majesty of a modern image format: SVG, mixing a magnificent design with infinite scalability, dynamic styling, and clickable links. Just click on this already:


Hey also, as far as I can tell, the word “blasé,” when correctly written in English, does include the diacritical mark. I wouldn’t have believed you if you’d told me there were any properly English words with accents, but it seems this has become a thing in the last century or so! (to wit, https://www.quora.com/What-are-the-English-words-that-take-an-acute-grave-accent-or-any-other-diacritics-if-you-prefer )


Technology in my formative years

I was exceptionally lucky to be born into this moment. I got to see what happened, to live as a child of acceleration. The mysteries of software caught my eye when I was a boy, and I still see it with the same wonder, even though I’m now an adult. Proudshamed, yes, but I still love it, the mess of it, the code and toolkits, down to the pixels and the processors, and up to the buses and bridges. I love the whole made world. But I can’t deny that the miracle is over, and that there is an unbelievable amount of work left for us to do.

~ Paul Ford, from https://www.wired.com/story/why-we-love-tech-defense-difficult-industry/

This hit me right in the feels. I think I’ve had a larger share of the upsides and a smaller share of the downsides than Ford. But this feels like a good overview of my formative years in tech.

Somewhere I read, “the messiness cannot go into the computer.” That summarizes what I believe is the cause of my neurosis; I’ve spent so many years now taking real-world problems, and real-world interactions with people, and factoring them into computers—and I’m left with the messy parts of the problem stuck in my mind. I’m not sure one can even understand what I’m talking about until you’ve spent 30 years, daily, working on refactoring the fuzzy of the real world into the binary of the computer world. Maybe I can reword it this way:

Computers and brains are very different. I’ve spent decades using my brain to understand computers, work with computers, and program computers.

What if that has fundamentally changed my brain?

How can I possibly pretend that, “what if,” is not utter bullshit…

That has fundamentally changed my brain.


What feels right is probably wrong

This leads me to the point I wish above all to emphasize, namely, that when a person has reached a given stage of unsatisfactory use and functioning, his habit of ‘end-gaining’ will prove to be the impeding factor in his attempts to profit by any teaching method whatsoever. Ordinary teaching methods, in whatever sphere, cannot deal with this impeding factor, indeed, they tend actually to encourage ‘end-gaining.’ The instruction given to the golfer of our illustration to keep his eyes on the ball is typical of the kind of specific instruction given by teachers generally for the purpose of eradicating specific defects in their pupils, and, as we have seen in this case, this instruction was a stimulus to him to try harder than ever to gain his end, and so to misdirect his efforts worse than ever.

~ FM Alexander, The Use of the Self, pp66-67, 1932 (emphasis added)

I think there’s a lot more context necessary for that to make sense. One could go read the book; it’s small. But let’s set that aside for the moment.

Alexander raises the important point that what feels right may in fact be wrong. So the harder I try to do something correctly, by trying to do what feels right, the more likely I am to reinforce doing what is wrong. This started to make more sense once I understood that the Brain is a Multi-layer Prediction Model. Once something is modeled incorrectly—when I move this way, it feels right—it’s going to be really difficult to change that model.


How the brain really works, butterflies

The key insight: the brain is a multi-layer prediction machine. All neural processing consists of two streams: a bottom-up stream of sense data, and a top-down stream of predictions. These streams interface at each level of processing, comparing themselves to each other and adjusting themselves as necessary.

~ Scott Alexander from, http://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/

This seems to be rocking my world. There’s an actual theory of how the brain works?
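For the flavor of that two-stream idea, here’s a cartoon of it: each layer predicts the activity of the layer below, compares the prediction to what actually arrives, and passes only the surprise upward. (This is my toy sketch of the general concept, not the model from the book or the review; the layer count and learning rate are made up.)

```python
# A cartoon "multi-layer prediction machine": bottom-up sense data meets
# top-down predictions at every level, and only unexplained signal ascends.

LAYERS = 3
LEARNING_RATE = 0.1

# Each layer's current belief about what it expects to see from below.
predictions = [0.0] * LAYERS

def process(sense_datum):
    """Pass one bottom-up sense datum through the layer stack."""
    signal = sense_datum
    for level in range(LAYERS):
        error = signal - predictions[level]          # bottom-up surprise
        predictions[level] += LEARNING_RATE * error  # top-down adjustment
        signal = error                               # only the unexplained part ascends
    return signal                                    # residual surprise at the top

# Feed a steady stimulus: the layers learn it, and the surprise dies away.
for _ in range(200):
    residual = process(1.0)
print(abs(residual) < 0.01)  # the stack now "expects" the input
```

The punchline is in that last line: once the stack has learned the input, an unchanged stimulus produces almost no upward signal at all—prediction has explained it away.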

So we are likely getting more lead, more omega-6 (and relatively less omega-3), and less lithium than people in 1850. If there has been an increase in crime and other undesirable/impulsive behaviors, I think these biological insults are at least as worthy of examination as political changes that have occurred during that time.

~ Scott Alexander from, http://slatestarcodex.com/2014/02/18/proposed-biological-explanations-for-historical-trends-in-crime/

…and my brain thought that this (aside from the actual data and science in the article) seems like a very compelling look at the big scale; tiny changes making subtle tidal shifts at the hundreds-of-millions-of-people scale.

Butterflies and radar: http://www.atlasobscura.com/articles/butterfly-blob-mystery-weather-radar

…and this popped up today, right before I saw a Cosmopolitan (aka Painted Lady) this afternoon.