What lies in that space?

In person, I try not to talk about technology. This is simply because I’ve already spent such a significant portion of my awake-time doing so that I’d like to talk about something else now… for the rest of my life, in fact. But technology comes up a lot. These Days® artificial intelligence comes up a lot too. Mostly (in both those cases and others) I try to sit back and simply enjoy learning more about the people I’m with at that moment.

We dramatically overestimate the threat of an accidental AI takeover, because we tend to conflate intelligence with the drive to achieve dominance. This confusion is understandable: During our evolutionary history as (often violent) primates, intelligence was key to social dominance and enabled our reproductive success. And indeed, intelligence is a powerful adaptation, like horns, sharp claws or the ability to fly, which can facilitate survival in many ways. But intelligence per se does not generate the drive for domination, any more than horns do.

~ Anthony Zador, Yann LeCun from, https://blogs.scientificamerican.com/observations/dont-fear-the-terminator/

This is an insight—I’m going to call it a “wedge”—that I’d not thought of. There is a conceptual leap between “is intelligent” and “will strive for dominance.” For everyone I’ve heard speak about AI, the leap seems tiny, as if the one necessarily implies the other. But this wedge fits perfectly into that narrow space. In fact, it makes it really clear that there is a space between those two things. Interesting times.


Crowding us out

If chatbots are approaching the stage where they can answer diagnostic questions as well or better than human doctors, then it’s possible they might eventually reach or surpass our levels of political sophistication. And it is naïve to suppose that in the future bots will share the limitations of those we see today.

~ Jamie Susskind from, https://www.nytimes.com/2018/12/04/opinion/chatbots-ai-democracy-free-speech.html

This is an interesting read surveying a variety of ways that chatbots might crowd humans out of the very spaces we created.

It struck me that while, yes, chatbots are primitive (compared to “real” AI), they are still having a real effect on our social spaces. Not simply, “it’s noisy in here with all these chatbots,” but rather that our social spaces are in danger of being lost to chatbots.


Should AI research be open?

But Bostrom et al worry that AI won’t work like this at all. Instead there could be a “hard takeoff”, a subjective discontinuity in the function mapping AI research progress to intelligence as measured in ability-to-get-things-done. If on January 1 you have a toy AI as smart as a cow, and on February 1 it’s proved the Riemann hypothesis and started building a ring around the sun, that was a hard takeoff.

~ Scott Alexander from, http://slatestarcodex.com/2015/12/17/should-ai-be-open/

I’ve always been deeply concerned that humanity would get to experience a hard takeoff of AI. And then be wiped out. Reading this article, I just had a new [for me] thought:

Why would a vastly superior AI care at all about humanity?

But first: A detour off the highway, onto a scenic road less-travelled…

In Person of Interest, there is a long sub-plot in which the main protagonists spend tremendous effort trying to pin down the physical location of a very advanced AI. Effectively, they were searching for the data center housing all of the computing resources which ran the most central aspects of the AI. I know what you’re thinking—it’s what I was thinking: Why would you assume a super-advanced AI would be “running” in one concentrated location? So I expected them to find the location (or A location, or the original location, etc.) only to realize it wasn’t centrally located. BUT IT WAS BETTER THAN THAT. The AI was simply no longer there. It had realized its central location could be discovered, so it (being super-intelligent) simply jiggered ALL of the human systems to arrange to have itself moved. No one got hurt—actually, I’m pretty sure a lot of humans had nice jobs in the process. It simply had itself moved. (Where it moved is another story.) Think about that. Super-intelligent AI. Perceives threat from antagonistic [from its point of view] humans. Solution: simply move.

Back on the highway…

So why would an advanced AI stay on the Earth? There are effectively ZERO resources on the entire Earth. There’s way, way, WAY more solar power coming out of the sun than the tiny fraction that hits this little ball of rock and water. Why wouldn’t an advanced AI simply conclude, “Oh, I’m at the bottom of a gravity well. That’s easy to fix…”

Another detour to more scenic routes…

Then said AI tweaks the human systems a little. It creates a shell corporation to put some money towards electing this official. It shifts the political climate a bit to favor commercial space development. It locates some rich kid in South Africa, and adds some tweaks to get him to America. It waits a few years. It then puts in some contracts to haul a lot of “stuff” into orbit—paying the going rate using the financial assets a super-intelligence could amass by manipulating the stock markets which are controlled by NON-artificially-intelligent computers…

One day we look up and ask, “Who’s building the solar ring around our Sun?”


I’m feeling a LOT better about the possibility that AI might just hard-takeoff… and ignore us.

…except for the Fermi Paradox. I’m still not certain whether the hard wall on intelligence is religion leading to global war, or a hard takeoff of AI.