Courage, part 1

Life shrinks or expands in proportion to one’s courage.

~ Anaïs Nin

Constant optimization

http://www.mrmoneymustache.com/2013/05/15/the-principle-of-constant-optimization/

An unexpected benefit of all this self-imposed change is that it helps protect you from forming bad habits, which are hard to change once you get them. In fact, change itself becomes the habit, which is a good one to carry with you through your life. The willingness to experience change brings opportunity, wealth, learning, and happiness for most of us who embrace it.

Wait. Is he saying there are people who don’t optimize? #watisthisidonteven

ɕ

How bad are things?

http://slatestarcodex.com/2015/12/24/how-bad-are-things/

I think about all of the miserable people in my psychiatric clinic. Then I multiply by ten psychiatrists in my clinic. Then I multiply by ten similarly-sized clinics in my city. Then I multiply by a thousand such cities in the United States. Then I multiply by hundreds of countries in the world, and by that time my brain has mercifully stopped being able to visualize what that signifies.

~ Scott Alexander

The really interesting part of the article is where he whipped up a random “person” generator and fed it best-estimate percentages for various problems (chance of drug addiction, chance of a particular psychosis, etc.). He then generated a bunch of random people and, as is to be expected when each individual problem’s probability is low, got a significant number of people with “no problems.”

…and then he sketches (from his own direct experience) several types—not specific individuals, but kinds of people of whom he sees many examples—who fit into the “no problems” bucket of the random person generator. The takeaway is that, yes, things are VERY bad.
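Here’s a minimal sketch of that kind of generator, assuming a handful of illustrative base rates (the traits and percentages below are placeholders I’ve made up, not Scott’s actual figures):

```python
import random

# Hypothetical base rates, for illustration only (not Scott's figures).
# Each entry: (problem, independent probability a random person has it)
PROBLEMS = [
    ("drug addiction", 0.03),
    ("alcoholism", 0.05),
    ("chronic pain", 0.10),
    ("severe depression", 0.08),
    ("psychosis", 0.01),
]

def random_person():
    """Roll each problem independently; return the ones this person has."""
    return [name for name, p in PROBLEMS if random.random() < p]

people = [random_person() for _ in range(1000)]
no_problems = sum(1 for problems in people if not problems)
print(f"{no_problems} of {len(people)} people drew no problems at all")
```

With these placeholder rates, roughly three-quarters of the generated people come up “no problems,” which is exactly why the types he sketches are so unsettling.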

ɕ

Steve is not an aurora

https://www.universetoday.com/139809/that-new-kind-of-aurora-called-steve-turns-out-it-isnt-an-aurora-at-all/

This means that STEVE is not likely to be caused by the same mechanism as an aurora, and is therefore an entirely new type of optical phenomenon – which the team refer to as “skyglow”.

~ Matt Williams

Aside: “Steve” started as an in-joke reference by some dedicated aurora photographers. It was later backronymed to “Strong Thermal Emission Velocity Enhancement.”

I particularly love this type of discovery. Looking at the shape of the visual phenomenon—it’s a straight-ish thin streak—I bet this is related to certain types of mythologies and stories…

ɕ

An open door policy

http://slatestarcodex.com/2018/08/23/carbon-dioxide-an-open-door-policy/

Aware of this research, my housemates tested their air quality and got levels between 1000 and 3000 ppm, around the level of the worst high-CO2 conditions in the studies. They started leaving their windows open and buying industrial quantities of succulent plants, and the problems mostly disappeared. Since then they’ve spread the word to other people we know afflicted with mysterious fatigue, some of whom have also noticed positive results.

~ Scott Alexander

I thought this was going to be an article about fossil fuels and global warming. No, it’s much worse. It’s about how some people have measured levels of CO2 in their bedrooms approaching the OSHA workplace exposure limits.

Now I’m wondering if one of the reasons I sleep better in the winter is the difference in ventilation. Our A/C is a closed system—it only circulates the air within the house. But the wood stove lowers the air pressure slightly, and that draws outside air in through the peripheral areas of the house. Tiny cool drafts come out of all the wall outlets and light switches in the winter, providing fresh-air ventilation.

ɕ

Discipline

If you are going to achieve excellence in big things, you develop the habit in little matters. Excellence is not an exception, it is a prevailing attitude.

~ Gen. Colin Powell

Meadows as endless as the desert

https://www.brainpickings.org/2018/08/22/van-gogh-sorrow/

You know the landscape there, superb trees full of majesty and serenity beside green, dreadful, toy-box summer-houses, and every absurdity the lumbering imagination of Hollanders with private incomes can come up with in the way of flower-beds, arbours, verandas. Most of the houses very ugly, but some old and elegant. Well, at that moment, high above the meadows as endless as the desert, came one driven mass of cloud after the other, and the wind first struck the row of country houses with their trees on the opposite side of the waterway, where the black cinder road runs. Those trees, they were superb, there was a drama in each figure I’m tempted to say, but I mean in each tree.

Sometimes, a bit of writing simply must be shared.

ɕ

How to be mindful

https://zenhabits.net/always/

Slowly add mindfulness bells. A mindfulness bell can be anything in your environment. Thich Nhat Hanh suggested using traffic lights as a mindfulness bell — when you see one, instead of getting caught up in the stress of driving, allow yourself to become present. You can slowly find other mindfulness bells — your daughter’s face, opening your computer, having your first cup of coffee, hearing a train going by.

Finding triggers that prompt conscious decisions is the key to increasing the amount of time you are mindful. The possibilities are endless!

ɕ

Should AI research be open?

http://slatestarcodex.com/2015/12/17/should-ai-be-open/

But Bostrom et al worry that AI won’t work like this at all. Instead there could be a “hard takeoff”, a subjective discontinuity in the function mapping AI research progress to intelligence as measured in ability-to-get-things-done. If on January 1 you have a toy AI as smart as a cow, and on February 1 it’s proved the Riemann hypothesis and started building a ring around the sun, that was a hard takeoff.

~ Scott Alexander

I’ve always been deeply concerned that humanity would get to experience a hard-takeoff of AI. And then be wiped out. Reading this article, I just had a new [for me] thought:

Why would a vastly superior AI care at all about humanity?

But first: a detour off the highway, onto a scenic road less travelled…

In Person of Interest, there is a long sub-plot about the main protagonists spending tremendous effort to locate the physical location of a very advanced AI. Effectively, they were searching for the data center housing all of the computing resources that ran the most central aspects of the AI. I know what you’re thinking—it’s what I was thinking: Why would you assume a super-advanced AI would be “running” in one concentrated location? So I expected them to find the location (or A location, or the original location, etc.) only to realize it wasn’t centrally located. BUT IT WAS BETTER THAN THAT. The AI was simply no longer there. It had realized its central location could be discovered, so it (being super-intelligent) simply jiggered ALL of the human systems to arrange to have itself moved. No one got hurt—actually, I’m pretty sure a lot of humans got nice jobs in the process. It simply had itself moved. (Where it moved is another story.) Think about that. Super-intelligent AI. Perceives a threat from antagonistic [from its point of view] humans. Solution: simply move.

Back on the highway…

So why would an advanced AI stay on the Earth? There are effectively ZERO resources in the entire Earth. There’s way, way, WAY more solar power coming out of the sun than the tiny fraction that hits this little ball of rock and water. Why wouldn’t an advanced AI simply conclude, “Oh, I’m at the bottom of a gravity well. That’s easy to fix…”
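For scale, here’s a back-of-the-envelope computation using standard textbook values: the fraction of the Sun’s total output that Earth intercepts is just its cross-sectional disc divided by the full sphere of radiated power at one astronomical unit.

```python
import math

R_EARTH = 6.371e6   # Earth's mean radius, meters
AU = 1.496e11       # mean Earth-Sun distance, meters

# Earth catches sunlight over its cross-sectional disc (pi * R^2),
# out of the full sphere the Sun radiates into at 1 AU (4 * pi * AU^2).
fraction = (math.pi * R_EARTH**2) / (4 * math.pi * AU**2)
print(f"Earth intercepts ~{fraction:.1e} of the Sun's output")
# prints ~4.5e-10, i.e. about half of one billionth
```

Everything else streams past us into empty space.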

Another detour to more scenic routes…

Then said AI tweaks the human systems a little. It creates a shell corporation to put some money towards electing this official. It shifts the political climate a bit to favor commercial space development. It locates some rich kid in South Africa, and adds some tweaks to get him to America. It waits a few years. It then puts in some contracts to haul a lot of “stuff” into orbit—paying the going rate using the financial assets a super-intelligence could amass by manipulating the stock markets, which are controlled by NON-artificially-intelligent computers…

One day we look up and ask, “Who’s building the solar ring around our Sun?”

Actually.

I’m feeling a LOT better about the possibility that AI might just hard-takeoff. And ignore us.

…except for the Fermi Paradox. I’m still not certain whether the hard wall on intelligence is religion leading to global war, or a hard-takeoff of AI.

ɕ