As the power of AI grows, we need to have evidence of its sentience. That is why we must return to the minds of animals.
~ Kristin Andrews and Jonathan Birch, from https://aeon.co/essays/to-understand-ai-sentience-first-understand-it-in-animals
This article ate my face. I was scrolling through a long list of things I’d marked for later reading when I glanced at the first paragraph of this article… and a half-hour later I realized it must be included here. I couldn’t even figure out what to pull-quote, because that requires choosing the most important theme. The article goes deeply into multiple intriguing topics, including sentience, evolution, pain, and artificial intelligence. I punted and just quoted the article’s subtitle.
The biggest new-to-me thing I encountered is a sublime concept called the gaming problem in assessing sentience. It’s about gaming in the sense of “gaming the system of assessment.” If you’re clicking through to the article, just ignore me and go read…
…okay, still here? Here’s my explanation of the gaming problem:
Imagine you wonder whether an octopus is sentient. You might then go off and perform polite experiments on octopods. You might then set about wondering what your experiments tell you. You might wonder if the octopods are intelligent enough to try to deceive you. (For example, if they are intelligent enough, they might realize you’re a scientist studying them, and that convincing you they are sentient and kind would be in their best interest.) But you definitely do not need to wonder whether the octopods have studied all of human history to figure out how to deceive you; they definitely have not, because, living in water, they have no access to our stored knowledge. Therefore, when studying octopods, you do not have to worry about them using knowledge of humans to game your system of study.
Now, imagine you wonder whether an AI is sentient. You might wonder whether the AI will try to deceive you into thinking it’s sentient when it actually isn’t. We know that we humans deceive each other often; we write about it a lot, and our deception shows up in every other form of media too. Any AI created by humans will have access to a lot (most? all??) of human knowledge, and would therefore certainly have access to plenty of information about how to deceive a person: what works, and what doesn’t. So why would an AI not game your system of study to convince you it is sentient?
That’s the gaming problem in assessing sentience.
ɕ