Nature’s dominant creature

A further unpleasant fact of life: biologists have discovered that the more complex a life form is, the quicker it goes extinct. That hapless cream-puff of the animal kingdom, the jellyfish, rather uncomplicated in form and function, has been around for 500 million years and counting. The average kick at the can, for a complex species, lasts four million years, which happens to be about how long we’ve been around.

~ David Cain from, http://www.raptitude.com/2009/12/natures-dominant-creature/

slip:4urana3.

This is such a wonderful kick in the complacency.

It’s taken me so much effort just to wrap my brain around where a single human life [my life!] fits in the scale of things. In that effort, one thing I was tempted to fall back on was the crutch that at least a human life is part of the Grand Arc of Human History. Meanwhile, we still appear to be alone in the universe, which day by day adds weight to the idea that intelligence faces some sort of hard wall during its evolution.

ɕ

Should AI research be open?

But Bostrom et al worry that AI won’t work like this at all. Instead there could be a “hard takeoff”, a subjective discontinuity in the function mapping AI research progress to intelligence as measured in ability-to-get-things-done. If on January 1 you have a toy AI as smart as a cow, and on February 1 it’s proved the Riemann hypothesis and started building a ring around the sun, that was a hard takeoff.

~ Scott Alexander from, http://slatestarcodex.com/2015/12/17/should-ai-be-open/

slip:4usaso2.

I’ve always been deeply concerned that humanity would get to experience a hard-takeoff of AI. And then be wiped out. Reading this article, I just had a new [for me] thought:

Why would a vastly superior AI care at all about humanity?

But first: A detour off the highway, onto a scenic road less travelled…

In Person of Interest, there is a long sub-plot about the main protagonists spending tremendous effort to pin down the physical location of a very advanced AI. Effectively, they were searching for the data center housing the computing resources that ran the most central aspects of the AI. I know what you’re thinking—it’s what I was thinking: Why would you assume a super-advanced AI would be “running” in one concentrated location? So I expected them to find the location (or A location, or the original location, etc.) only to realize it wasn’t centrally located. BUT IT WAS BETTER THAN THAT. The AI was simply no longer there. It had realized its central location could be discovered, so it (being super-intelligent) simply jiggered ALL of the human systems to arrange to have itself moved. No one got hurt—actually, I’m pretty sure a lot of humans had nice jobs in the process. It simply had itself moved. (Where it moved is another story.) Think about that. Super-intelligent AI. Perceives threat from antagonistic [from its point of view] humans. Solution: simply move.

Back on the highway…

So why would an advanced AI stay on the Earth? Compared to what’s out there, there are effectively ZERO resources in the entire Earth. There’s way way WAY more solar power coming out of the Sun than the tiny fraction that hits this little ball of rock and water. Why wouldn’t an advanced AI simply conclude, “oh, I’m at the bottom of a gravity well. That’s easy to fix…”
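To put a rough number on that (my own back-of-the-envelope arithmetic, not from the article): with the Earth’s radius $R_E \approx 6.4 \times 10^3$ km and the Earth–Sun distance $d \approx 1.5 \times 10^8$ km, the fraction of the Sun’s total output that Earth intercepts is

$$\frac{\pi R_E^2}{4\pi d^2} = \frac{1}{4}\left(\frac{R_E}{d}\right)^2 \approx \frac{1}{4}\left(\frac{6.4 \times 10^3}{1.5 \times 10^8}\right)^2 \approx 4.5 \times 10^{-10}$$

…call it half a billionth. Everything else streams right past us.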

Another detour to more scenic routes…

Then said AI tweaks the human systems a little. It creates a shell corporation to put some money towards electing this official. It shifts the political climate a bit to favor commercial space development. It locates some rich kid in South Africa, and adds some tweaks to get him to America. It waits a few years. It then puts in some contracts to haul a lot of “stuff” into orbit—paying the going rate using the financial assets a super-intelligence could amass by manipulating the stock markets, which are controlled by NON-artificially-intelligent computers…

One day we look up and ask, “Who’s building the solar ring around our Sun?”

Actually.

I’m feeling a LOT better about the possibility that AI might just hard-takeoff. And ignore us.

…except for the Fermi Paradox. I’m still not certain whether the hard wall on intelligence is religion leading to global war, or the hard-takeoff of AI.

ɕ