The tool was called Sudowrite. Designed by developers turned sci-fi authors Amit Gupta and James Yu, it’s one of many AI writing programs built on OpenAI’s language model GPT-3 that have launched since it was opened to developers last year. But where most of these tools are meant to write company emails and marketing copy, Sudowrite is designed for fiction writers.
~ Josh Dzieza, from https://www.theverge.com/c/23194235/ai-fiction-writing-amazon-kindle-sudowrite-jasper
Okay, fine, there I have a pull-quote from an article about AI!
Today we have some really amazing tools: Large Language Models (LLMs). They have already changed the world. I’m not exaggerating. It’s possible to use LLMs to do astounding things. That’s awesome. But it’s not yet intelligence. 110% clarity here: All the stuff everyone is talking about today is freakin’ awesome.
I’m saying (I know it doesn’t matter what I say) we should save the term “Artificial Intelligence” for things which are actually intelligent. Words don’t inherently have meaning, but it’s vastly better if we don’t use “intelligence” to mean one thing when we talk about a person, and to mean something entirely different when we talk about today’s LLMs. Today’s LLMs are not [yet] intelligent.
Why this quibble today? Because when artificial intelligence appears, shit’s gonna get real. People who think a lot about AI want to talk about ensuring AI’s morals and goals are in reasonable alignment with humans’ (lest the AI end up misaligned and, perhaps, optimize for paperclip creation and wipe us out).
My opinion: To be considered intelligent, one must demonstrate agency. Agency is necessary for intelligence, but it is not sufficient. Let’s start talking about AGENCY.
The tools we see today (LLMs so far) do not have agency. Contrast that with, say, elephants and dogs, which do have agency. I believe the highest moral crimes involve taking away someone’s (a word reserved for people) or something’s agency. All the horrid crimes we can imagine involve the victims’ loss of agency.
So what are we going to do when AIs appear? Prediction: We’re going to do what we humans have always done, historically, to each other and to elephants and dogs. As individuals we’re all over the moral map. Let’s start more conversations about agency before we have a new sort of intelligence that decides the issue and then explains the answer to us.
ɕ