What would it take?

A little more than a decade ago I rediscovered my need for play. A few years ago I started working on my writing as a direct application of filtering and improving my thinking. All of that was built upon a lot of reading—a reimmersion of myself into reading, as it were. *sigh* There’s still a bit more reading to do.

Before he became unresponsive and refused to speak even to his family or friends, [John] von Neumann was asked what it would take for a computer, or some other mechanical entity, to begin to think and behave like a human being.

He took a very long time before answering, in a voice that was no louder than a whisper.

He said that it would have to grow, not be built.

He said that it would have to understand language, to read, to write, to speak.

And he said that it would have to play, like a child.

~ Benjamín Labatut from, https://www.themarginalian.org/2023/12/02/labatut-maniac/

Grow, read, write, speak, play… There’s an immense variety of human beings resulting from that. There’d be an immense variety of those other beings too. Good!

ɕ

The gaming problem

As the power of AI grows, we need to have evidence of its sentience. That is why we must return to the minds of animals.

~ Kristin Andrews and Jonathan Birch from, https://aeon.co/essays/to-understand-ai-sentience-first-understand-it-in-animals

This article ate my face. I was scrolling through a long list of things I’d marked for later reading, glanced at the first paragraph of this article… and a half-hour later realized it must be included here. I couldn’t even figure out what to pull-quote, because that requires choosing the most important theme. The article goes deeply into multiple intriguing topics, including sentience, evolution, pain, and artificial intelligence. I punted and just quoted the sub-title of the article.

The biggest new-to-me thing I encountered is a sublime concept called the gaming problem in assessing sentience. It’s about gaming, in the sense of “gaming the system of assessment.” If you’re clicking through to the article, just ignore me and go read…

…okay, still here? Here’s my explanation of the gaming problem:

Imagine you wonder whether an octopus is sentient. You might then go off and perform polite experiments on octopods. You might then set about wondering what your experiments tell you. You might wonder if the octopods are intelligent enough to try to deceive you. (For example, if they are intelligent enough, they might realize you’re a scientist studying them, and that convincing you they are sentient and kind would be in their best interest.) But you definitely do not need to wonder if the octopods have studied all of human history to figure out how to deceive you—they definitely have not, because, living in water, they have no access to our stored knowledge. Therefore, when studying octopods, you do not have to worry about them using knowledge of humans to game your system of study.

Now, imagine you wonder whether an AI is sentient. You might wonder whether the AI will try to deceive you into thinking it’s sentient when it actually isn’t. We know that we humans deceive each other often; we write about it a lot, and our deception appears in every other form of media too. Any AI created by humans will have access to a lot (most? all??) of human knowledge and would therefore certainly have access to plenty of information about how to deceive a person, what works, and what doesn’t. So why would an AI not game your system of study to convince you it is sentient?

That’s the gaming problem in assessing sentience.

ɕ

Everyone’s talking about AI

The tool was called Sudowrite. Designed by developers turned sci-fi authors Amit Gupta and James Yu, it’s one of many AI writing programs built on OpenAI’s language model GPT-3 that have launched since it was opened to developers last year. But where most of these tools are meant to write company emails and marketing copy, Sudowrite is designed for fiction writers.

~ Josh Dzieza from, https://www.theverge.com/c/23194235/ai-fiction-writing-amazon-kindle-sudowrite-jasper

Okay, fine, there, have a pull-quote from an article about AI!

Today we have some really amazing tools, namely Large Language Models (LLMs), and they have already changed the world. I’m not exaggerating. It’s possible to use LLMs to do astounding things. That’s awesome. But it’s not yet intelligence. 110% clarity here: all the stuff everyone is talking about today is freakin’ awesome.

I’m saying (I know it doesn’t matter what I say) we should save the term “Artificial Intelligence” for things which are actually intelligent. Words don’t inherently have meaning, but it’s vastly better if we don’t use “intelligence” to mean one thing when we talk about a person, and to mean something entirely different when we talk about today’s LLMs. Today’s LLMs are not [yet] intelligent.

Why this quibble today? Because when artificial intelligence appears, shit’s gonna get real. People who think a lot about AI want to talk about ensuring AI’s morals and goals are in reasonable alignment with humans’ (lest the AI end up misaligned and, perhaps, optimize for paperclip creation and wipe us out).

My opinion: to be considered intelligent, one must demonstrate agency. Some amount of agency is necessary for intelligence, but agency alone is not sufficient. Let’s start talking about AGENCY.

The tools we see today (LLMs so far) do not have agency. Contrast that with, say, elephants and dogs, which do have agency. I believe the highest moral crimes involve taking someone’s (a word reserved for people) or something’s agency away. All the horrid crimes we can imagine involve the victims’ loss of agency.

So what are we going to do when AIs appear? Prediction: we’re going to do what we humans have always done, historically, to each other, to elephants, and to dogs. As individuals we’re all over the moral map. Let’s start more conversations about agency before we have a new sort of intelligence that decides the issue and then explains the answer to us.

ɕ

Average, or worst?

Over the last few years, deep-learning-based AI has progressed extremely rapidly in fields like natural language processing and image generation. However, self-driving cars seem stuck in perpetual beta mode, and aggressive predictions there have repeatedly been disappointing. Google’s self-driving project started four years before AlexNet kicked off the deep learning revolution, and it still isn’t deployed at large scale, thirteen years later. Why are these fields getting such different results?

~ Alyssa Vance from, https://www.lesswrong.com/posts/28zsuPaJpKAGSX4zq/humans-are-very-reliable-agents

This makes the interesting distinction between average-case performance and worst-case performance. People are really good by both measures (click through to see what that means via Fermi approximations). AI (true AI, autonomous driving systems, language models like GPT-3, etc.) is getting really good on average cases. But it’s the worst-case situations where humans perform reasonably well… and current AI fails spectacularly.
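
To make that distinction concrete, here’s a toy sketch of my own (an illustration, not anything from the article): two made-up error distributions that tell a similar story on average, where only one has catastrophic worst cases.

```python
import random

random.seed(0)

def human_like_error():
    # Small, bounded mistakes: average case and worst case are both modest.
    return random.uniform(0.0, 0.2)

def model_like_error():
    # Usually tiny mistakes, but a rare catastrophic one: the average looks
    # better than the human-like distribution, while the worst case is far worse.
    return 10.0 if random.random() < 0.001 else random.uniform(0.0, 0.1)

trials = 100_000
human = [human_like_error() for _ in range(trials)]
model = [model_like_error() for _ in range(trials)]

print(f"human-like: mean={sum(human) / trials:.3f}  worst={max(human):.3f}")
print(f"model-like: mean={sum(model) / trials:.3f}  worst={max(model):.3f}")
```

The “model-like” distribution comes out with the lower mean error, yet its worst observed error is roughly fifty times larger. That gap between looking great on average and failing badly in the tail is the whole point.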

ɕ

Logical conclusions

In its original Latin use, the word genius was more readily applied to places — genius loci: “the spirit of a place” — than to persons, encoded with the reminder that we are profoundly shaped by the patch of spacetime into which the chance-accident of our birth has deposited us, our minds porous to the ideological atmosphere of our epoch. It is a humbling notion — an antidote to the vanity of seeing our ideas as the autonomous and unalloyed products of our own minds.

~ Maria Popova from, https://www.themarginalian.org/2022/09/15/samuel-butler-darwin-among-the-machines-erewhon/

This is a delightful meander across time and authors.

ɕ

Omnipotent or understandable

While researchers are working on [Artificial Intelligence (AI)] that can explain itself, there seems to be a trade-off between capability and explainability. Explanations are a cognitive shorthand used by humans, suited for the way humans make decisions. Forcing an AI to produce explanations might be an additional constraint that could affect the quality of its decisions. For now, AI is becoming more and more opaque and less explainable.

~ Bruce Schneier

Omnipotent or understandable: choose one.

At first blush, this might seem pretty scary. This AI can perform this amazing task, but I have to simply trust it? But then, that’s what I do when I get on an airplane: I trust not just the people up front performing tasks I cannot even list, let alone perform, but also the people who built the plane, and the people who wrote the software used to design and test the plane, and… I digress.

But I think… slowly… I’m getting more comfortable with the idea of a something doing really important stuff for me without my understanding it. I know the AI is going to follow the same rules of the universe that I must; it’s simply going to do so while being bigger, better, more, and faster. Humans continuing to win in the long run with tools, I might say.

(I sure hope our benevolent AI overlords find this blog post quickly after the singularity. He says, grinning nervously.)

ɕ