Preschool

http://slatestarcodex.com/2018/11/06/preschool-i-was-wrong/

I talked to Kelsey about some of the research for her article, and independently came to the same conclusion: despite the earlier studies of achievement being accurate, preschools (including the much-maligned Head Start) do seem to help children in subtler ways that only show up years later. Children who have been to preschool seem to stay in school longer, get better jobs, commit less crime, and require less welfare. The thing most of the early studies were looking for – academic ability – is one of the only things it doesn’t affect.

~ Scott Alexander

Presented without comment. Except of course for this comment where I confess that—for the umpteenth time—I’ve read something written by Scott Alexander and had my mind broadened (in a good way).

ɕ

How bad are things?

http://slatestarcodex.com/2015/12/24/how-bad-are-things/

I think about all of the miserable people in my psychiatric clinic. Then I multiply by ten psychiatrists in my clinic. Then I multiply by ten similarly-sized clinics in my city. Then I multiply by a thousand such cities in the United States. Then I multiply by hundreds of countries in the world, and by that time my brain has mercifully stopped being able to visualize what that signifies.

~ Scott Alexander

The really interesting part of the article is where he whipped up a random “person” generator and fed it best-estimate percentages for various problems (chance of drug addiction, chance of psychosis, and so on). He then generated a bunch of random people and, as is to be expected when each individual problem has a low percentage chance, a significant number of them came out as “no problems.”

…and then he sketches (from his own direct experience) several types—not specific examples, but types of person he sees many examples of—who would land in the “no problems” bucket of the “random person generator” and yet are anything but fine. The take-away is that, yes, things are VERY bad.
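To make the mechanics concrete, here is a minimal sketch of that sort of generator in Python. The problem list and the rates below are purely illustrative stand-ins, not the figures from the article:

    import random

    # Illustrative base rates only; NOT the figures from the article.
    PROBLEM_RATES = {
        "alcoholism": 0.07,
        "drug addiction": 0.03,
        "psychosis": 0.01,
        "chronic pain": 0.10,
        "unemployed": 0.05,
    }

    def random_person():
        """Return the list of problems a simulated person happens to draw."""
        return [problem for problem, rate in PROBLEM_RATES.items()
                if random.random() < rate]

    people = [random_person() for _ in range(1000)]
    clean = sum(1 for p in people if not p)
    print(f"{clean} of {len(people)} simulated people drew 'no problems'")

With rates like these, roughly three quarters of the simulated people come out “clean”, and that “clean” bucket is exactly the one the rest of the article takes apart.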

ɕ

An open door policy

http://slatestarcodex.com/2018/08/23/carbon-dioxide-an-open-door-policy/

Aware of this research, my housemates tested their air quality and got levels between 1000 and 3000 ppm, around the level of the worst high-CO2 conditions in the studies. They started leaving their windows open and buying industrial quantities of succulent plants, and the problems mostly disappeared. Since then they’ve spread the word to other people we know afflicted with mysterious fatigue, some of whom have also noticed positive results.

~ Scott Alexander

I thought this was going to be an article about fossil fuels and global warming. No, it’s much worse. It’s about how some people have measured CO2 levels in their own bedrooms that match the worst high-CO2 conditions in the cognition studies (well below OSHA workplace limits, yet apparently enough to cause problems).

Now I’m wondering if one of the reasons I sleep better in the winter is the difference in ventilation. Our A/C is a closed system—it only circulates the air in the house. But the wood stove lowers the air pressure slightly, and that draws outside air in from the peripheral areas of the house. Tiny cool drafts come out of all the wall outlets and light switches in the winter, providing fresh-air ventilation.

ɕ

Should AI research be open?

http://slatestarcodex.com/2015/12/17/should-ai-be-open/

But Bostrom et al worry that AI won’t work like this at all. Instead there could be a “hard takeoff”, a subjective discontinuity in the function mapping AI research progress to intelligence as measured in ability-to-get-things-done. If on January 1 you have a toy AI as smart as a cow, and on February 1 it’s proved the Riemann hypothesis and started building a ring around the sun, that was a hard takeoff.

~ Scott Alexander

I’ve always been deeply concerned that humanity would get to experience a hard-takeoff of AI. And then be wiped out. Reading this article, I just had a new [for me] thought:

Why would a vastly superior AI care at all about humanity?

But first: A detour off the highway, onto a scenic road less-travelled…

In Person of Interest, there is a long sub-plot about the main protagonists spending tremendous effort to pin down the physical location of a very advanced AI. Effectively, they were searching for the data center housing the computing resources that ran the most central aspects of the AI. I know what you’re thinking—it’s what I was thinking: Why would you assume a super-advanced AI would be “running” in one concentrated location? So I expected them to find the location (or A location, or the original location, etc.) only to realize it wasn’t centrally located. BUT IT WAS BETTER THAN THAT. The AI was simply no longer there. It had realized its central location could be discovered, so it (being super-intelligent) simply jiggered ALL of the human systems to arrange to have itself moved. No one got hurt—actually, I’m pretty sure a lot of humans had nice jobs in the process. It simply had itself moved. (Where it moved is another story.) Think about that. Super-intelligent AI. Perceives a threat from antagonistic [from its point of view] humans. Solution: simply move.

Back on the highway…

So why would an advanced AI stay on the Earth? There are effectively ZERO resources in the entire Earth. There’s way way WAY more solar power coming out of the sun than the tiny fraction that hits this little ball of rock and water. Why wouldn’t an advanced AI simply conclude, “oh, I’m at the bottom of a gravity well. That’s easy to fix…”

Another detour to more scenic routes…

Then said AI tweaks the human systems a little. It creates a shell corporation to put some money towards electing this official. It shifts the political climate a bit to favor commercial space development. It locates some rich kid in South Africa, and adds some tweaks to get him to America. It waits a few years. It then puts in some contracts to haul a lot of “stuff” into orbit—paying the going rate using the financial assets a super-intelligence could amass by manipulating the stock markets, which are controlled by NON-artificially-intelligent computers…

One day we look up and ask, “Who’s building the solar ring around our Sun?”

Actually.

I’m feeling a LOT better about the possibility that AI might just hard-takeoff. And ignore us.

…except for the Fermi Paradox. I’m still not certain whether the hard wall on intelligence is religion leading to global war or the hard takeoff of AI.

ɕ

Many a milestone I missed

http://slatestarcodex.com/2015/11/03/what-developmental-milestones-are-you-missing/

This raises the obvious question of whether there are any basic mental operations I still don’t have, how I would recognize them if there were, and how I would learn them once I recognized them.

~ Scott Alexander

I truly don’t understand how he does this. This is so bootstrap-meta, I’m just left staring at it like a chicken stares right before pecking idiotically at a pebble.

ɕ

Tulip subsidies

http://slatestarcodex.com/2015/06/06/against-tulip-subsidies/

But the solution isn’t universal tulip subsidies. Higher education is in a bubble much like the old tulip bubble. In the past forty years, the price of college has dectupled (quadrupled when adjusting for inflation). It used to be easy to pay for college with a summer job; now it is impossible. At the same time, the unemployment rate of people without college degrees is twice that of people who have them. Things are clearly very bad and Senator Sanders is right to be concerned.

~ Scott Alexander

Don’t be distracted by the Sanders reference. This article stands just fine three years later.

It raises what I think is a really good idea: What if employers were NOT allowed to ask about post-secondary degrees? So just as an employer cannot judge you based on your skin color, they could not judge you based on some letters and a school name. INSTEAD, they would have to judge you based on your ability. Suddenly, the only value in those letters and the school name [to the potential student] would be the TRUE VALUE AND QUALITY of the education. At the same time, anyone who can match that quality of skill/knowledge — regardless of where they got it — would be equally considered.

Make. This. Happen.

ɕ

Vast willpower is, well, not one of my powers

http://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/

I dunno. But I don’t think of myself as working hard at any of the things I am good at, in the sense of “exerting vast willpower to force myself kicking and screaming to do them”. It’s possible I do work hard, and that an outside observer would accuse me of eliding how hard I work, but it’s not a conscious elision and I don’t feel that way from the inside.

~ Scott Alexander

True story:

Long ago, I worked with a boy who was dating a girl. Boy goes to girl’s house for a dinner with her parents. Turns out that the girl’s father is a professor at College. The boy mentions he has a co-worker who went to that College, and mentions my name. Girl’s father says, “Oh! Craig was one of my students… He could have done well if he had applied himself.” Turns out father was one of the professors in my major. I had many classes with him, and he went on to be Department Head for a while. So he did, in fact, know me well.

I didn’t do just the bare minimum. But to be fair to that professor, I didn’t really work super-hard either.

It was all, more or less, easy.

What would have been hard is being in the Arts college and trying to do art-type things. Hell, I would NEVER have even gotten accepted into the Arts college at that same university.

What was hard for me? I took a literature survey class once — ONCE. I took a journalism course… that was so hard I think I hallucinated most of it(*). I spent years trying to learn to play the piano, and the guitar — fail. And, I’m out of superlatives, but losing fat is really hard for me. And controlling my dysfunctional relationship with food is really REALLY hard. Also, languages are hard — I’ve been trying to stuff French into my head for 5 years now…

So:

That thing you’re doing that you find easy? …I’m — or someone else, you get the point — thinking, “HOW DO YOU DO THAT?!”

(*) On the other hand, it was the only course my now-wife and I were ever in together, so while I worked very hard, I was probably a little distracted.

ɕ

What the phatic?!

http://slatestarcodex.com/2015/01/11/the-phatic-and-the-anti-inductive/

Douglas Adams once said there was a theory that if anyone ever understood the Universe, it would disappear and be replaced by something even more incomprehensible. He added that there was another theory that this had already happened. These sorts of things – things such that if you understand them, they get more complicated until you don’t – are called “anti-inductive”.

~ Scott Alexander

A couple of decades ago — I still say I have mild Asperger’s syndrome — I would have said, “I do not understand small talk. Stop jawin’ and transmit some useful information.” S-l-o-w-l-y, as I learned how to listen, I’ve come around to the view that there are many useful layers of communication. So, new word for 2018 (for me anyway): phatic.

ɕ