What’s our five-paragraph essay?

The article I’m linking to below is about AI — wait, don’t run away. It made me actually think: What is the purpose of the five-paragraph essay? How does one write such a thing—what exactly am I doing when I go through the soul-sucking process of writing one?

So you don’t have to read the short, linked article… but I recommend it.

The five-paragraph essay is a mainstay of high school writing instruction, designed to teach students how to compose a simple thesis and defend it in a methodical, easily graded package. It’s literature analysis at its most basic, and most rigid, level.

~ Emma Camp, from Rethinking the 5-Paragraph Essay in the Age of AI

Back to the question in my title:

In podcasting, what is our five-paragraph essay?

The five-paragraph essay is a blunt tool, sure. But it is clearly one important, if small, piece of a large puzzle called “learning to write well.” You do it very early in “learning to write well” and then you leave it behind.

What is our five-paragraph essay?

Any one of the following could be our five-paragraph essay…

  • Write a single, terrific “hook sentence” from a podcast episode.
  • Write a paragraph of 3-5 sentences from a podcast episode.
  • Find at least one quotable portion from a podcast episode.
  • Write a list of takeaways in a specific style and with specific formatting.

Why do I say those things? Because once I understood how to do them by hand, turning to tools like AI is not cheating. The AI does a solid B+ job (see the article above), which I can then finish to my A level.

ɕ

The chasm between

Too often there is a chasm between our ideas and knowledge on the one hand and our actual experience on the other. We absorb trivia and information that take up mental space but get us nowhere. We read books that divert us but have little relevance to our daily lives. We have lofty ideas that we do not put into practice. We also have many rich experiences that we do not analyze enough, that do not inspire us with ideas, whose lessons we ignore. Strategy requires a constant contact between the two realms.

~ Robert Greene

slip:4a1009.

Not so easy

The answer depends on whether he recognizes that though he may have subdued his external obstacles and enemies, he must overcome psychological foes — depression, anomie, angst — which are no less formidable for their ethereality. He must embrace the fact that though this world may be thoroughly charted, explored, and technologized, there remains one last territory to conquer — himself.

~ Brett McKay from, https://www.artofmanliness.com/character/advice/sunday-firesides-mans-last-great-conquest-himself/

slip:4uaoca17.

I would argue that all the external conquering and subduing was the easy part. That existential dread? That’s not so easy. The first part of solving that problem is, of course, realizing it is a problem for oneself. Yeah, I’m working on that.

ɕ

Silent majority

The great biographer Robert Caro once said, “Power doesn’t always corrupt, but power always reveals.” Perhaps the same is true of the most powerful networks in human history.

Social media has not corrupted us, it’s merely revealed who we always were.

~ Mark Manson from, https://markmanson.net/social-media-isnt-the-problem

slip:4umaso1.

There’s a lot of good—writing, concepts, anecdotes, data—in this article. But the thing that leapt out at me was something I’d already known but seem to have forgotten… or, if not fully forgotten, had failed to connect to other things in my model of the world: the idea of the silent majority.

About 90% of the people on social networks are not actually participating. They’re simply observing. It turns out that the other 10% are the people with extreme views; not “blow stuff up” extreme, but simply further toward the opposing ends of whatever spectrum of views you care to consider.

Two things to consider: First, boy howdy, guilty as charged! I’m on Facebook, Instagram and LinkedIn—but the only content I post is related to my projects. I don’t engage with anything, reshare… or even, really, participate unless it’s related to a project. *face palm* Whoa! I’m literally a member of the silent majority. Perhaps you are too? If 10 of you are reading, then 9 of you are just like me.

Second, because math! Looking at the stream, we all like to say, “it’s endless!” Right. There must be thousands of posts, right? I’ll pause while you do the math… right. If there are only thousands of posts for me to see, I’m clearly not seeing all the activity from the millions of people. Sure, some of that is the platform filtering, but I have the feeling that the numbers hold true: if everyone posted a lot, we’d have thousands of times more stuff flying around.
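For the curious, here’s a minimal back-of-envelope sketch of that lurker math. Every number in it is an assumption I made up for illustration, not data from Manson’s article:

```python
# Back-of-envelope lurker math; all numbers are made-up assumptions for illustration.
network_size = 1_000_000                  # people whose posts could, in theory, reach my feed
posts_i_actually_see = 2_000              # rough size of one day's "endless" stream
posts_per_person_if_everyone_posted = 2   # assumed daily rate if nobody lurked

potential_posts = network_size * posts_per_person_if_everyone_posted
ratio = potential_posts / posts_i_actually_see

print(f"Potential posts per day: {potential_posts:,}")
print(f"That's ~{ratio:,.0f}x the ~{posts_i_actually_see:,} posts I actually scroll past")
# With these assumed numbers, the stream would be roughly a thousand times larger.
```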

ɕ

Pseudo-Depth

The bottom line is that if you’re intrigued by depth, give real depth a try, by which I mean giving yourself at least two or three hours with zero distractions. Let the hard task sink in and marinate. Push through the initial barrier of boredom and get to a point where your brain can do what it’s probably increasingly craving in our distracted world: to think deeply.

~ Cal Newport from, https://www.calnewport.com/blog/2015/12/12/deep-habits-the-danger-of-pseudo-depth/

slip:4ucabo35.

My mind loves to wander off. It often wanders off to familiar ideas. Ever have a small burr on a fingernail? You fiddle with it slightly, scuffing it with another nail. Some thoughts feel like that in my mind. Not a problem exactly—not bad enough that I’m going to get up for the nail file. But, nonetheless, there is this idea yet again. My fascination with rock climbing is one such idea. Why, exactly, does climbing fascinate me? I’ve spent many a CPU cycle recursively interrogating this question.

Upon reading Newport’s post, I find it has pointed me in a direction I’d not previously seen: Is it the deep focus found within the pursuit of rock climbing which draws me to it?

ɕ

Should AI research be open?

But Bostrom et al worry that AI won’t work like this at all. Instead there could be a “hard takeoff”, a subjective discontinuity in the function mapping AI research progress to intelligence as measured in ability-to-get-things-done. If on January 1 you have a toy AI as smart as a cow, and on February 1 it’s proved the Riemann hypothesis and started building a ring around the sun, that was a hard takeoff.

~ Scott Alexander from, http://slatestarcodex.com/2015/12/17/should-ai-be-open/

slip:4usaso2.

I’ve always been deeply concerned that humanity would get to experience a hard-takeoff of AI. And then be wiped out. Reading this article, I just had a new [for me] thought:

Why would a vastly superior AI care at all about humanity?

But first: A detour off the highway, onto a scenic road less-travelled…

In Person of Interest, there is a long sub-plot about the main protagonists spending tremendous effort to locate the physical location of a very advanced AI. Effectively, they were searching for the data center housing the computing resources that ran the most central aspects of the AI. I know what you’re thinking—it’s what I was thinking: Why would you assume a super-advanced AI would be “running” in one concentrated location? So I expected them to find the location (or A location, or the original location, etc.) only to realize it wasn’t centrally located. BUT IT WAS BETTER THAN THAT. The AI was simply no longer there. It had realized its central location could be discovered, so it (being super-intelligent) simply jiggered ALL of the human systems to arrange to have itself moved. No one got hurt—actually, I’m pretty sure a lot of humans had nice jobs in the process. It simply had itself moved. (Where it moved is another story.) Think about that. Super-intelligent AI. Perceives threat from antagonistic [from its point of view] humans. Solution: simply move.

Back on the highway…

So why would an advanced AI stay on the Earth? Compared to the rest of the solar system, there are effectively ZERO resources on the entire Earth. There’s way, way, WAY more solar power coming out of the sun than the tiny fraction that hits this little ball of rock and water. Why wouldn’t an advanced AI simply conclude, “oh, I’m at the bottom of a gravity well. That’s easy to fix…”

Another detour to more scenic routes…

Then said AI tweaks the human systems a little. It creates a shell corporation to put some money towards electing this official. It shifts the political climate a bit to favor commercial space development. It locates some rich kid in South Africa, and adds some tweaks to get him to America. It waits a few years. It then puts in some contracts to haul a lot of “stuff” into orbit—paying the going rate using the financial assets a super-intelligence could amass by manipulating the stock markets which are controlled by NON-artificially-intelligent computers…

One day we look up and ask, “Who’s building the solar ring around our Sun?”

Actually.

I’m feeling a LOT better about the possibility that AI might just hard-takeoff. And ignore us.

…except for the Fermi Paradox. I’m still not certain if the hard wall on intelligence is religion leading to global war, or hard-takeoff of AI.

ɕ