I’m backing away slowly

Inside this box is a thing of beauty—and absurdity. It’s a one-of-a-kind puzzle created just for me by one of the greatest puzzle makers in the world. It is, almost surely, the hardest puzzle ever to exist. But before I open the box, let me tell you how the puzzle came to be, and why I think it’s not a trivial pursuit.

~ A. J. Jacobs from, https://www.theatlantic.com/technology/archive/2022/04/puzzle-will-outlast-world/629651/

There was a time… who am I kidding? The time is now. Must. Resist. The urge. To buy…

ɕ

That’s… interesting

But in our Physics Project we’ve developed a fundamentally different view of space—in which space is not just a background, but has its own elaborate composition and structure. And in fact, we posit that space is in a sense everything that exists, and that all “things” are ultimately just features of the structure of space. We imagine that at the lowest level, space consists of large numbers of abstract “atoms of space” connected in a hypergraph that’s continually getting updated according to definite rules and that’s a huge version of something like this…

~ Stephen Wolfram from, https://writings.stephenwolfram.com/2022/03/on-the-concept-of-motion/

I’m not sure what to say about this. I am certain that Wolfram is not crazy and that he is brilliant, but he’s pretty far beyond what I can understand. (Picture me doing that slightly askew, squinting thing.) On the other hand, if they really are making the progress they seem to be… it’s going to be a neat time to be alive, in another decade when they get things sorted out.

ɕ

Variance

However—fourth—over the last century there’s a huge relationship between how rich a country is and the variance in growth. The richest countries have low variance: They all stubbornly keep growing at around the same 1 or 2%. However, middle-income countries vary enormously.

~ “Dynomight” from, https://dynomight.net/gdp/

There are several interesting threads in this article. But this point about variance leapt out at me. I’m reminded of how, just the other day, a piece about statistics that I mentioned was also talking about variance (if you clicked through and read the article). Variance feels like a sort of second-order thinking that I probably should be doing more often.

ɕ

Tasty numbers

For five years as a data analyst, I forecasted and analyzed Google’s revenue. For six years as a data visualization specialist, I’ve helped clients and colleagues discover new features of the data they know best. Time and time again, I’ve found that by being more specific about what’s important to us and embracing the complexity in our data, we can discover new features in that data. These features can lead us to ask better data-driven questions that change how we analyze our data, the parameters we choose for our models, our scientific processes, or our business strategies.

~ Zan Armstrong from, https://stackoverflow.blog/2022/03/03/stop-aggregating-away-the-signal-in-your-data/

This one just has neat graphs in it. And it has some interesting insights about what data analysts do. The phrase “big data” has been tossed around a lot in recent years—the way “quantum mechanics” gets tossed around by people who have no idea about that either. This article isn’t about truly big data sets, but it’s a neat dive into energy usage as an example of some spiffy data analysis.

ɕ

It’s subtle but critically important

It’s broadly agreed these days that consciousness poses a very serious challenge for contemporary science. What I’m trying to work out at the moment is why science has such difficulty with consciousness. We can trace this problem back to its root, at the start of the scientific revolution.

~ Philip Goff from, https://www.edge.org/conversation/philip_goff-a-post-galilean-paradigm

I once had a mathematics professor make a comment that it’s fascinating that mathematics is able to explain reality. I double-clutched at the time. And every single time I think about the point he was making, I still pause and my mind reels. If one is looking at—for example—classical mechanics, and one studies the ballistic equations, one can go along nicely using forces and trigonometry, and understand golf balls and baseballs in flight. Soon you realize your mathematics is only an approximation. So you dive into fluid mechanics, which requires serious calculus, and you then understand why golf balls have dimples and why the stitching on baseballs is strictly specified in the rules. All along the way, mathematics models reality perfectly!

But why? So you keep peeling. The math and physics get more and more complicated—stochastic processes, randomness, quantum mechanics, wave-particle duality, etc.—as each layer answers another “why”… but it’s… is “cyclical” the right word? No matter how far you go, you can always ask “why” again, even of the most complex and most accurate system you can model and explain.

Down there at the bottom, that’s where Galileo declared there was a distinction between physical reality, and consciousness and the soul. We’ve had hundreds of years of progress via science on what Galileo divided off as “physical reality.” (And that progress is a Very Good Thing.) But as this article explores, is there actually a distinction? What if making that distinction is a mistake?

ɕ

Gömböc

So in a nutshell, Gömböc is cool, Hungarians are proud of it greatly. So naturally, they made a 4.5-ton statue replica of the shape.

~ Atlas Obscura from, https://www.atlasobscura.com/places/gomboc

I could probably write a blog post about other interesting math-related puzzles and shapes that come from Hungary… or about the number of Hungarian mathematicians… but instead, I’ll just point you towards this particularly interesting thing.

ɕ

Pasteur’s Quadrant

The core idea of Pasteur’s Quadrant is that basic and applied research are not opposed, but orthogonal. Instead of a one-dimensional spectrum, with motion towards “basic” taking you further away from “applied”, and vice versa, he proposes a two-dimensional classification, with one axis being “inspired by the quest for fundamental understanding” and the other being “inspired by considerations of use”

~ Jason Crawford from, https://rootsofprogress.org/pasteurs-quadrant

I’ve put a bit of thought into research. I’ve certainly considered the two properties of “research for understanding” and “research for application”. But I’ve never thought of them as two dimensions. Click through and check out the simple but illuminating quadrant graph.

And I’m immediately wondering: Can I think of a third dimension upon which to plot research? (Field of study comes to mind. Or time: is the thing being studied something that happens in micro-time, like particle physics, or in macro-time, like geology?) I’m also wondering: What other activities could be plotted in a quadrant? (Writing: insight versus length? Coaching: net change in performance versus time spent training?)

ɕ

Information loss

Our lack of perfect information about the world gives rise to all of probability theory, and its usefulness. We know now that the future is inherently unpredictable because not all variables can be known and even the smallest error imaginable in our data very quickly throws off our predictions. The best we can do is estimate the future by generating realistic, useful probabilities.

~ Shane Parrish from, https://fs.blog/2018/05/probabilistic-thinking/

It’s a good article—of course, why would I link you to something I think you should not read?

To be fair, I skimmed it. But all I could think about was this one graduate course I took on Chaos Theory. It sounds like it should be a Star Trek episode. (Star Trek: The Next Generation was in its initial airing at the time.) But it was really an eye-opening class. Here’s this simple idea, called Chaos. And it explains a whole lot of how the universe works. Over-simplified, Chaos is when it is not possible to predict the future state of a system beyond some short timeframe. Somehow, information about the system is lost as time moves forward. (For example, consider the physical system of a pendulum hanging from another pendulum… how hard could that be?)
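Here’s a minimal sketch of that information loss, using the logistic map (a standard toy example of chaos; the code and numbers are mine, not from the article): two starting points that agree to eight decimal places wander completely apart within a few dozen steps.

```python
# Two nearly identical starting points in a chaotic system (the logistic map
# with r = 4.0) wander completely apart after a few dozen steps: the tiny
# initial difference is exactly the information that gets "lost".
def logistic_map(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

a, b = 0.20000000, 0.20000001  # differ by one part in twenty million
for step in range(1, 51):
    a, b = logistic_map(a), logistic_map(b)
    if step % 10 == 0:
        print(f"step {step:2d}: {a:.4f} vs {b:.4f} (difference {abs(a - b):.4f})")
```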

ɕ

That’s a moiré

“You don’t need [machine learning,]” Bryan said. “What you need is inverse Fast Fourier Transform.”

~ “Shift Happens” from, https://www.getrevue.co/profile/shift-happens/issues/moire-no-more-688319

I stumbled over a blog post containing a pull-quote where someone mentioned the inverse Fast Fourier Transform. (A mathematician named Fourier worked out a certain sort of transformation that comes up a lot in science; a fast algorithm for computing it is called a Fast Fourier Transform. There’s also a way to undo that transformation, called “the inverse”. Thus, Fast Fourier Transforms (FFT) and inverse FFT (IFFT). Well, the FFT/IFFT is the first thing I can recall that I could not understand. It was shocking. Every other thing I’d ever encountered was easy. But there I was, 20-some years old, in graduate school, and I encountered something that was beyond me. I think I had it sorted about six times, and every time, the next morning, upon waking, it had fallen out of my head. Holy inappropriately long parentheticals, Batman!)
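For anyone who hasn’t met them, here’s a minimal sketch of the round trip in Python with numpy: transform a signal, invert the transform, and you recover (numerically) the original.

```python
import numpy as np

# The forward transform turns a signal into its frequency components;
# the inverse transform takes you back to the original signal.
signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.5 * np.random.randn(256)

spectrum = np.fft.fft(signal)       # FFT: time domain -> frequency domain
roundtrip = np.fft.ifft(spectrum)   # IFFT: frequency domain -> time domain

print(np.allclose(signal, roundtrip.real))  # True, up to floating-point error
```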

Anyway. Blog post. IFFTs. Time machine to the early 90s. Emotional vertigo.

…and then I clicked through to the magnificent post, which is brilliant. And then I realized the by-line was “Shift Happens.” o_O This entire thing. I’m in nerd heaven.

ɕ

PS: Sorry, what? Oh, you read my title, heard the Italian word “amore,” and wanted a That’s Amore! pun? Okay, here: When an eel climbs a ramp to eat squid from a clamp… Yes. Really.

Why and how

Your ideas are worth less than you think—it’s all about how you execute upon them.

~ Chris Bailey from, https://alifeofproductivity.com/your-ideas-arent-that-unique/

The pull-quote says it all. I recently had a pleasant conversation wherein the idea of the “why” and the “how” came up. Thanks to Simon Sinek, we all know to “start with why” (that is to say, start with the idea). The idea is important, but it’s literally worthless without the execution. Because anything multiplied by zero is zero.

To my 20-something-year-old’s surprise, knowing Al Gebra turned out to actually be useful. Take, for example, evaluating some idea and its execution: The total value could be calculated by multiplying the value of the idea by the value of the execution. (Note my use of “could be.”) Great ideas are represented by a large positive value, and terrible ideas by a large negative value; similarly for the execution. Great idea multiplied by great execution? Huge total value.
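Here’s a minimal sketch of that toy model in Python; the numbers are made up purely for illustration:

```python
def total_value(idea: float, execution: float) -> float:
    """Toy model: value of the outcome is idea multiplied by execution.
    Positive = good, negative = bad, magnitude = how good or how bad."""
    return idea * execution

print(total_value(9, 8))    #  72: great idea, great execution
print(total_value(9, -2))   # -18: great idea, sloppy execution
print(total_value(-2, 9))   # -18: bad idea, executed brilliantly
print(total_value(2, 2))    #   4: modest idea, modest execution
```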

This simple model also shows me how I regularly ruin my life: a terrible idea (a negative value) with great execution… or a great idea with terrible execution (a negative value)… either leads to a large negative total. Interestingly, in either of those cases, the other parameter’s greatness only amplifies the damage done by the slightest negativity.

This leads to an algebra of idea-and-execution. If you’re going to half-ass the execution (a negative value), or you’re concerned that you cannot execute well, it’s better to do so with a “small” idea. Only if you’re sure you can execute passably well (“positive”) should you try a really great idea. If you work through the logic with the roles flipped, the same feels true. This leads to a question that can be used in the fuzzy, real world: Is this pairing of idea and execution in alignment? Am I pairing the risk of negative execution with a “small” idea, or pairing the risk of a bad idea with “small” execution? That, to me, is a very interesting “soft” analysis tool, which falls surprisingly out of some very simple algebra.

What I’m not sure about, though, is what to do with the double-negative scenarios. (Which I’ll leave as an exercise for you, Dear Reader.) Perhaps I should be using a quadratic equation?

ɕ

Foucault’s Pendulum

Over on the Astronomy Stack Exchange site (obviously I follow the “new questions” feed in my RSS reader), someone asked if it was possible, without knowing the date, to determine one’s latitude only by observing the sun. These are the sorts of random questions that grab me by the lapels and shake me until an idea falls out.

So my first thought was: Well, if you’re inside the Arctic or Antarctic Circle you could get a good idea… when you don’t see the sun for a few days. Also, COLD. But that feels like cheating, and it doesn’t give a specific value. Which left me with this vague feeling that it would take me several months of observations. I could measure the highest position of the sun over the passing days and months and figure out what season I was in…

…wait, actually, I should be able to use knowledge of the Coriolis force—our old friend that supposedly makes water circle drains differently in the northern and southern hemispheres, and is the reason that computers [people who compute] were first tasked with complex trigonometry problems when early artillery missed its targets, because shells “appear” to curve due to this mysterious force when actually the ground rotates beneath them . . . where was I?

Coriolis force, right. But wait! I don’t need the sun at all! All I need is a Foucault pendulum and some trigonometry… Here I went to Wikipedia and looked it up—which saved me the I’m-afraid-to-actually-try-it hours of trying to derive it in spherical trig… anyway. A Foucault pendulum exhibits rotation of the plane of the pendulum’s swing. Museums have these multi-story pendulums where the hanging weight knocks over little dominoes as it rotates around. Cut to the chase: You only need to be able to estimate the sine function, and enough hours to measure the rotation rate of the swing-plane, and you have it all: northern versus southern hemisphere, and latitude.
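For the curious, here’s a minimal sketch of the arithmetic, assuming the standard relation that the swing-plane precesses at about 15 degrees per hour times the sine of the latitude (the function names and example numbers are mine, not from the Stack Exchange thread):

```python
import math

EARTH_RATE_DEG_PER_HR = 360.0 / 23.934  # one sidereal day, ~15.04 deg/hr

def latitude_from_precession(precession_deg_per_hr: float) -> float:
    """Estimate latitude (degrees) from the measured rotation rate of a
    Foucault pendulum's swing plane. Positive input = clockwise rotation
    seen from above (northern hemisphere); negative = counter-clockwise."""
    ratio = precession_deg_per_hr / EARTH_RATE_DEG_PER_HR
    return math.degrees(math.asin(ratio))

# Example: a swing plane turning ~11.3 degrees per hour, clockwise
print(round(latitude_from_precession(11.3), 1))   # ≈ 48.7 (degrees north)
print(round(latitude_from_precession(-11.3), 1))  # ≈ -48.7 (degrees south)
```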

ɕ

Math

Predicting the behaviour of a sigmoid-like process is not fitting the parameters of a logistic curve. Instead, it’s trying to estimate the strength of the dampening term – a term that might be actually invisible in the initial data.

~ Stuart Armstrong from, https://www.lesswrong.com/posts/6tErqpd2tDcpiBrX9/why-sigmoids-are-so-hard-to-predict

Wait! Don’t flee!

It’s a great explanation of sigmoids—you know what those are, but you [probably] didn’t know they have a general name. People toss up sigmoid curves as explanations and evidence all. the. time.
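If you want to see the article’s point in miniature, here’s a sketch with made-up parameters (not code from the article): a plain exponential fit to only the early part of a logistic curve matches it almost perfectly, while knowing nothing about the ceiling that dominates later.

```python
import numpy as np

def logistic(t, K=1000.0, r=0.5, t0=20.0):
    """Sigmoid growth: K is the ceiling, i.e. the 'dampening' part of the story."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t_early = np.arange(0, 10)            # only the early portion of the curve
y_early = logistic(t_early)

# Fit a pure exponential (no ceiling at all): log(y) ~ log(a) + b*t.
b, log_a = np.polyfit(t_early, np.log(y_early), 1)
exp_fit = np.exp(log_a + b * t_early)

# The exponential matches the early data to within a fraction of a percent...
print(np.max(np.abs(exp_fit - y_early) / y_early))

# ...yet the two models' forecasts at t = 40 are wildly different:
print(np.exp(log_a + b * 40), logistic(40))   # tens of millions versus ~1,000
```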

Ever make that slightly squinting face? The one where you turn your head slightly to one side and look dubiously, literally askance at someone? …that face that says, “you keep using that word, but I do not think it means what you think it means.” After you read that little article about sigmoids, you’re going to make that face every time some talking-head tosses up a sigmoid as evidence for a prediction.

ɕ

Second order effects

In short, stop optimizing for today or tomorrow and start playing the long game. That means being less efficient in the short term but more effective in the long term. [… I]f you play the long game you stop optimizing and start thinking ahead to the second-order consequences of your decisions.

~ Shane Parrish from, https://fs.blog/2014/10/an-antifragile-way-of-life/

Fundamentally, we humans and our lives are not mathematically tidy.

Aside: I had a math course once—I can’t even remember the material—and the professor said, “it’s a very subtle point that mathematics should model and predict reality.” …or something to that effect. It was mind-bending; but math is part of reality, so why wouldn’t reality model itself? *smoke-emits-from-my-ears* The scene, the room, the lighting—everything is burned into my brain.

We treat heuristics as if they were always and in all cases true, but they are sort of false, because they are imperfect. But the purpose of heuristics is to enable us to wrap our meager brains around the vastly complicated universe. Math—compound interest, exponential growth, 1/r^2 forces, Fourier transforms—provides models of reality. The comment about second-order consequences challenges us to dig deeper into our heuristics (which are otherwise known, more generally, as “models”).

I’ve said this before, here on the blog and out loud: Have you intentionally created the models you have of the world?

ɕ

The answer is, “2.”

In this situation, before committing to a three year PhD, you better make sure you spend three months trying out research in an internship. And before that, it seems a wise use of your time to allocate three days to try out research on your own. And you better spend three minutes beforehand thinking about whether you like research.

~ ‘jsevillamol’ from, https://www.lesswrong.com/posts/eZCrCB3HiDB55Ccqx/spend-twice-as-much-effort-every-time-you-attempt-to-solve-a

This one caught my eye because the vague heuristic of spending increasing amounts of effort at each attempt to solve a problem felt true. But I was thinking of it from the point of view of fixing some process—like a broken software system that occasionally catches fire. Putting the fire out is trivial, but the second time, I start trying to prevent that little fire. The third time, I find I’m more curious as to why it catches fire, and why my first fix didn’t make a difference. The fourth time, I’m taking off the kid gloves and bringing in industrial lighting and power tools. The fifth time, I’m roping in mathematicians and textbooks and wondering if I’m trying to solve the Halting Problem.

Turns out the context of the problem doesn’t matter. The answer is, “2.” Every time you attempt to solve a problem—any sort of problem, any context, any challenge, any unknown—the most efficient application of your effort is to expend just a bit less than twice the effort of your last attempt.

Not, “it feels like twice would be good,” but rather: Doubling your efforts each time is literally the best course of action.

…and now that I’ve written this, my brain dredges up the Exponential Backoff algorithm. That’s been packed in the back of my brain for 30 years. I’ve always known it was the chosen solution to a very hard problem. (“Hard,” as in proven to be impossible to solve generally, so one needs a heuristic and some hope.) They didn’t just pick that algorithm; it turns out it’s the actual best solution.
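For anyone who hasn’t bumped into it, here’s a minimal sketch of the exponential backoff pattern (the function names are placeholders, not from any particular library): each failed attempt roughly doubles the maximum wait before the next try.

```python
import random
import time

def retry_with_backoff(operation, max_attempts=6, base_delay=0.5, cap=30.0):
    """Retry a flaky operation, roughly doubling the wait between attempts.
    'Full jitter' randomization keeps many clients from retrying in lockstep."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            delay = min(cap, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))

# Usage: retry_with_backoff(lambda: fetch_url("https://example.com"))
# (fetch_url is a hypothetical flaky function standing in for real work.)
```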

ɕ

The Honeybee Conjecture

More than 2,000 years ago, Marcus Terentius Varro, a Roman citizen, proposed an answer, which ever since has been called “The Honeybee Conjecture.” He thought that if we better understood, there would be an elegant reason for what we see. “The Honeybee Conjecture” is an example of mathematics unlocking a mystery of nature.

~ From https://fs.blog/2013/05/what-is-it-about-bees-and-hexagons/

Every once in a while, you will have the chance to be alive when a multi-thousand-year-old mystery is solved. Humans are awesome. Mathematics for the win. *drops mic*
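The flavor of the conjecture is easy to sketch (a back-of-the-envelope comparison, not the proof): among the regular shapes that tile a plane, the hexagon encloses a unit of area with the least perimeter, i.e. the least wax per cell.

```python
import math

def perimeter_for_unit_area(n_sides: int) -> float:
    """Perimeter of a regular n-gon whose area is exactly 1.
    Regular n-gon area = (1/4) * n * s^2 * cot(pi/n); solve for side s."""
    s = math.sqrt(4.0 * math.tan(math.pi / n_sides) / n_sides)
    return n_sides * s

for n, name in [(3, "triangle"), (4, "square"), (6, "hexagon")]:
    print(name, round(perimeter_for_unit_area(n), 4))
# triangle 4.559, square 4.0, hexagon 3.7224 -- the hexagon wins
```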

ɕ

Most people are not yet born

[…] recognize that at least in terms of sheer numbers, the current population is easily outweighed by all those who will come after us. In a calculation made by writer Richard Fisher, around 100 billion people have lived and died in the past 50,000 years. But they, together with the 7.7 billion people currently alive, are far outweighed by the estimated 6.75 trillion people who will be born over the next 50,000 years, if this century’s birth rate is maintained (see graphic below). Even in just the next millennium, more than 135 billion people are likely to be born. 

~ Roman Krznaric from, https://blog.longnow.org/02020/07/20/six-ways-to-think-long-term-a-cognitive-toolkit-for-good-ancestors/

50,000 years is, of course, somewhat arbitrary. But it’s a good estimate of the span so far of recognizably-like-current-us human history. It’s obvious that today, most people are already dead. It’s those trillions yet to come that warp the brain and create perspective.

This article from The Long Now Foundation has 6 good examples of explicit ways to think long-term, rather than short-term.

ɕ

Est provocationem

Today, I’m drawn to considering a refinement of an idea I’ve mentioned a few times.

It’s clear to me that it’s impossible to be happy if my mind is unable to focus. Years ago, I regained my ability to intentionally focus by disempowering the world—disabling as many as possible of the pathways by which everything and everyone could actively grab my attention. Then I set about railing against everyone who has not yet regained their own ability to focus (or perhaps has never learned to focus).

Internally, I often use an idea which I believe I stole from mathematical analysis. When facing some question, the idea is to find the largest contributor, and get a handle on that first; that’s the first-order item. Then find the next largest contributor, and get that second-order item sorted. And so on.

A few decades ago, the largest impediment to my being able to intentionally focus was external distraction. Having now sorted that first-order item, I can turn to the second-order item: Everyone’s inability to focus is polluting my attention. I remain easily distracted by others’ inability to focus.

Est provocationem!

ɕ

Heating with math

Something a little different today: I’ve been considering switching to heating with gas and I recently ran some numbers.

tl;dr: I will be continuing to heat with solid fuel.

Preamble: We have already deeply insulated our attic, upgraded insulation in the walls that were opened during some remodeling, and replaced all windows and doors with modern versions. (Our house was originally built in 1954.) This is the obvious first place to begin when improving your home’s heating.

Electricity: My electricity costs $0.0758 per kWh. I can basically turn on my electric baseboard heaters and this is what I’d pay (per kWh) to heat our house.

Methane: This is the proper way to heat a home in northern climates. Unfortunately, “street gas” is not present in my neighborhood. One block over, yes, here, no. They would install it for me… if I’m willing to pay the entire cost to rip up the street and put in the gas main.

Propane: Chemistry geeks know that propane carries somewhat less energy per unit mass than methane (roughly 10% less). But generally speaking, appliances (my gas cooking stove, a gas heating appliance I would need to buy/install) can be adjusted to burn either fuel. Anyway. I already have a small propane tank that serves my cooking stove, so I would “just” need a larger tank — possibly MUCH larger, possibly so large that safety ordinances would require me to put it underground. Anyway. My propane costs me $5.999/gal — if you know about petroleum, this is an incomprehensibly high number. Meanwhile, 1 gallon of propane yields about 27 kWh of energy. And a gas heater (I’m imagining replacing my wood stove with an appliance that sits in the same space) is effectively 100% efficient at turning that gas into heat. So simple math shows that propane would cost me $0.222/kWh — about THREE times the cost of electricity.
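Just to make the arithmetic explicit, here’s a sketch using only the numbers above:

```python
# Sanity-checking the propane math (prices and energy content as given above).
price_per_gallon = 5.999       # USD per gallon of propane
kwh_per_gallon = 27.0          # ~1 gallon of propane ~ 27 kWh of heat
efficiency = 1.0               # assume the gas appliance is ~100% efficient

cost_per_kwh = price_per_gallon / (kwh_per_gallon * efficiency)
print(round(cost_per_kwh, 3))  # ~0.222 USD per kWh, about 3x the electric rate
```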

Firewood: This is MUCH harder to compute. First off, I have to estimate how much energy is available in the wood I’m burning; that’s affected by the species of wood, and how it’s seasoned and stored (because the MORE water in the wood, the more heat is “lost” to vaporize that water and send it away up the chimney). Some factors to consider: Where I live, there are several “fuel” species of trees that are readily and sustainably available. I’ve found a reputable supplier who is not hauling it long distances and provides me the right sizes, etc., for what I want. I also have the absolute best imaginable way of storing the wood: in “cribs” that expose it to air drying while keeping it under cover.

So I’m guessing 20 million BTU per cord. (A cord is a stacked pile 4 feet tall, with a footprint of 4×8 feet. Technically, it’s a pile of 4-foot-LONG logs, 4 feet high and 8 feet wide on the ground. A true wood heating system is a separate unit outside that is meant to take 4-foot-long logs. I purchase ~16″ pieces, split, which still makes the 4×8 footprint computable. I digress.) Good fuel species can be up to 30 million BTU per cord. So I’m being conservative with 20.

20 million BTU is 5,861 kWh. I pay $300 per cord. (Fellow Pennsylvanians just twitched because that is pretty expensive — 225 or 250 is typical — but this is excellent wood species, all cut and split to the correct sizes for stove fuel, delivered early in the season, and dumped exactly where I want it. As usual, I digress.) So math happens, leading to $0.0512/kWh. Even if I figure in that the wood stove is only 80% efficient (we have a great stove made in Scandinavia which really does exceed 80% efficiency when operated correctly), that only bumps the cost up to $0.0639/kWh.
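And the firewood arithmetic, in the same sketch style (the BTU-per-cord figure is my conservative estimate above):

```python
# Firewood cost per kWh, using the estimates above.
BTU_PER_KWH = 3412.14

btu_per_cord = 20_000_000       # conservative estimate for good fuel species
price_per_cord = 300.0          # the 2018 price; see the 2019 update below
stove_efficiency = 0.80

kwh_per_cord = btu_per_cord / BTU_PER_KWH                            # ~5,861 kWh
print(round(price_per_cord / kwh_per_cord, 4))                       # ~0.0512
print(round(price_per_cord / (kwh_per_cord * stove_efficiency), 4))  # ~0.0640
```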

Update in 2019: My electricity costs $0.07039 per kWh. (That’s down about 1/2 cent.) I’ve a new firewood supplier, with the price down to $225 per cord. That’s $0.038/kWh, and still only $0.048/kWh at 80% stove efficiency.

And finally some references…

http://www.propane101.com/propanevselectricity.htm

Firewood BTU Ratings

ɕ