Pasteur’s Quadrant

The core idea of Pasteur’s Quadrant is that basic and applied research are not opposed, but orthogonal. Instead of a one-dimensional spectrum, with motion towards “basic” taking you further away from “applied”, and vice versa, he proposes a two-dimensional classification, with one axis being “inspired by the quest for fundamental understanding” and the other being “inspired by considerations of use.”

~ Jason Crawford from,

I’ve put a bit of thought into research. I’ve certainly considered the two properties of “research for understanding” and “research for application”. But I’ve never thought of them as two dimensions. Click through and check out the simple but illuminating quadrant graph.

And I’m immediately wondering: Can I think of a third dimension upon which to plot research? (Field-of-study comes to mind. Or time: is the thing being studied something that happens in micro-time, like particle physics, or macro-time, like geology?) I’m also wondering: what other activities could be plotted in a quadrant? (Writing: insight versus length? Coaching: net change in performance versus time spent training?)


Information loss

Our lack of perfect information about the world gives rise to all of probability theory, and its usefulness. We know now that the future is inherently unpredictable because not all variables can be known and even the smallest error imaginable in our data very quickly throws off our predictions. The best we can do is estimate the future by generating realistic, useful probabilities.

~ Shane Parrish from,

It’s a good article—of course, why would I link you to something I think you should not read?

To be fair, I skimmed it. But all I could think about was this one graduate course I took on Chaos Theory. It sounds like it should be a Star Trek episode. (Star Trek: The Next Generation was in its initial airing at the time.) But it was really an eye-opening class. Here’s this simple idea, called Chaos. And it explains a whole lot of how the universe works. Over-simplified, Chaos is when it is not possible to predict the future state of a system beyond some short timeframe. Somehow, information about the system is lost as time moves forward. (For example, this physical system of a pendulum hanging from a pendulum… how hard could that be?)
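That information loss is easy to demonstrate. Here’s a minimal sketch (the logistic map is a standard toy example of chaos, not something from the article): two trajectories that start almost identically, and soon share nothing.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # a nearly identical start

# Early on the two trajectories agree...
print(abs(a[5] - b[5]))    # still tiny
# ...but the tiny gap roughly doubles every step, and within a few
# dozen steps the trajectories are unrelated.
print(abs(a[50] - b[50]))  # no longer small: the initial information is gone
```

No randomness anywhere; the system is fully deterministic, and still the future is unknowable past a short horizon.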


That’s a moiré

“You don’t need [machine learning,]” Bryan said. “What you need is inverse Fast Fourier Transform.”

~ “Shift Happens” from,

I stumbled over a blog post, containing a pull-quote where someone mentioned inverse Fast Fourier Transform. (A mathematician named Fourier invented a certain sort of transformation that comes up a lot in science; a fast algorithm for computing it, found much later, is called the Fast Fourier Transform. There’s also a way to undo that transformation, called “the inverse”. Thus, Fast Fourier Transforms (FFT) and inverse FFT. Well, FFT/IFFT is the first thing I can recall that I could not understand. It was shocking. Every other thing I’d ever encountered was easy. But there I was, 20-some-years-old, in graduate school, and I encountered something that was beyond me. I think I had it sorted about 6 times and every time, the next morning, upon waking, it had fallen out of my head. Holy inappropriately long parentheticals, Batman!)
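If you’ve never seen the pair in action, here’s a deliberately naive (that is, not fast) sketch of the transform and its inverse undoing each other:

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (an FFT computes this same thing,
    # just much faster).
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # The inverse: the same sum with the sign of the exponent flipped,
    # scaled by 1/N.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
roundtrip = idft(dft(signal))

# The inverse recovers the original signal (up to float rounding).
print(all(abs(s - r.real) < 1e-9 for s, r in zip(signal, roundtrip)))  # prints: True
```

The transform moves a signal into the frequency domain; the inverse moves it back, losing nothing.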

Anyway. Blog post. IFFTs. Time machine to the early 90s. Emotional vertigo.

…and then I clicked thru to the post itself, which is brilliant. And then I realized the by-line was, “Shift Happens.” o_O This entire thing. I’m in nerd heaven.


PS: Sorry, what? Oh, you read my title, heard the Italian word, “amore,” and wanted a, That’s Amore! pun? Okay, here: When an eel climbs a ramp to eat squid from a clamp… Yes. Really.

Why and how

Your ideas are worth less than you think—it’s all about how you execute upon them.

~ Chris Bailey from,

The pull-quote says it all. I recently had a pleasant conversation, wherein the idea of the “why” and the “how” came up. Thanks to Simon Sinek, we all know to, “start with why,” (that is to say, start with the idea.) The idea is important, but it’s literally worthless without the execution. Because anything, multiplied by zero, is zero.

To my 20-something-year-old’s surprise, knowing Al Gebra turned out to actually be useful. Take, for example, evaluating some idea and its execution: The total value could be calculated by multiplying the value of the idea by the value of the execution. (Note my use of, “could be.”) Great ideas are represented by a large, positive value, and terrible ideas by a large, negative value; Similarly for the execution. Great idea multiplied by great execution? Huge total value.

This simple model also shows me how I regularly ruin my life: Terrible idea, (represented by a negative value,) with great execution… Or, great idea, with terrible execution, (represented by a negative value,)… either leads to a large negative total. Interestingly, in either of those cases, even the slightest negativity gets amplified by the magnitude of the other parameter’s greatness.
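The toy model, as code (all the numbers are invented for illustration):

```python
# The multiplication model: total value = idea * execution,
# where sign encodes quality (positive = good, negative = bad)
# and magnitude encodes how good or how bad.
def total_value(idea, execution):
    return idea * execution

print(total_value(10, 10))   # great idea, great execution -> 100
print(total_value(10, -1))   # great idea, slightly bad execution -> -10
print(total_value(-1, 10))   # slightly bad idea, great execution -> -10
```

Note how a merely slightly-negative factor, multiplied by a great one, still yields a large negative total; that’s the amplification.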

This leads to an algebra of idea-and-execution. If you’re going to half-ass the execution, (a negative value,) or you’re concerned that you cannot execute well, it’s better to do so with a “small” idea. Only if you’re sure you can do the execution passably well, (“positive”,) should you try a really great idea. If you work through the logic with the roles flipped, the same feels true. This leads to a question that can be used in the fuzzy, real world: Is this pairing of idea and execution in alignment? Am I pairing the risk of negative execution with a “small” idea, or pairing the risk of a bad idea with “small” execution? That to me is a very interesting “soft” analysis tool, which falls surprisingly out of some very simple algebra.

What I’m not sure about though is what to do with the double-negative scenarios. (Which I’ll leave as an exercise for you, Dear Reader.) Perhaps I should be using a quadratic equation?


Foucault’s Pendulum

Over on the Astronomy Stack Exchange site, (obviously I follow the “new questions” feed in my RSS reader,) someone asked if it was possible, without knowing the date, to determine one’s latitude only by observing the sun. These are the sorts of random questions that grab me by the lapels and shake me until an idea falls out.

So my first thought was: Well, if you’re within the Arctic or Antarctic Circles you could get a good idea… when you don’t see the sun for a few days. Also, COLD. But that feels like cheating and doesn’t give a specific value. Which left me with this vague feeling that it would take me several months of observations. I could measure the highest position of the sun over the passing days and months and figure out what season I was in…

…wait, actually, I should be able to use knowledge of the Coriolis Force—our old friend that makes water circle drains in different directions in the northern and southern hemispheres, and is the reason that computers [people who compute] were first tasked with complex trigonometry problems when early artillery missed its targets, because ballistics “appear” to curve due to this mysterious force, because actually the ground rotates… where was I?

Coriolis Force, right. But wait! I don’t need the sun at all! All I need is a Foucault Pendulum and some trigonometry… Here I went to Wikipedia and looked it up—which saved me the I’m-afraid-to-actually-try-it hours of trying to derive it in spherical trig… anyway. A Foucault Pendulum exhibits rotation of the plane of the pendulum’s swing. Museums have these multi-story pendulums where the hanging weight knocks over little dominoes as it rotates around. Cut to the chase: You only need to be able to estimate the sine function, and enough hours to measure the rotation rate of the swing-plane, and you have it all: northern versus southern hemisphere, and latitude.
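Here’s a sketch of that trigonometry, assuming the textbook relation that the swing-plane precesses at Earth’s rotation rate times the sine of your latitude (the Paris figure below is approximate):

```python
import math

SIDEREAL_DAY_HOURS = 23.934  # one full Earth rotation

def latitude_from_precession(degrees_per_hour):
    # A Foucault pendulum's swing-plane precesses at
    # omega_earth * sin(latitude). Invert that:
    # latitude = asin(observed_rate / earth's_rate).
    # Sign of the rate (i.e. direction of rotation) gives the hemisphere.
    earth_rate = 360.0 / SIDEREAL_DAY_HOURS  # ~15.04 deg/hour at the poles
    return math.degrees(math.asin(degrees_per_hour / earth_rate))

# In Paris (latitude roughly 48.85 N) the plane rotates about
# 11.3 degrees per hour:
print(latitude_from_precession(11.32))
```

So: watch the plane of the swing for a few hours, measure the rotation rate, take an arcsine, and you have your latitude; the direction of the rotation tells you which hemisphere.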



Sigmoids

Predicting the behaviour of a sigmoid-like process is not fitting the parameters of a logistic curve. Instead, it’s trying to estimate the strength of the dampening term – a term that might be actually invisible in the initial data.

~ Stuart Armstrong from,

Wait! Don’t flee!

It’s a great explanation of sigmoids—you know what those are, but you [probably] didn’t know they have a general name. People toss up sigmoid curves as explanations and evidence all. the. time.

Ever make that slightly squinting face? The one where you turn your head slightly to one side and look dubiously, literally askance at someone? …that face that says, “you keep using that word, but I do not think it means what you think it means.” After you read that little article about sigmoids, you’re going to make that face every time some talking-head tosses up a sigmoid as evidence for a prediction.
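If you want to see why the early data can’t save you, here’s a sketch (all parameters invented for illustration): two logistic curves with wildly different ceilings, nearly indistinguishable at the start.

```python
import math

def logistic(t, K, r=1.0, p0=0.01):
    # Closed-form solution of dp/dt = r*p*(1 - p/K): exponential growth
    # with a damping term (1 - p/K) that is invisible while p << K.
    return K / (1 + (K / p0 - 1) * math.exp(-r * t))

# Two processes with wildly different ceilings...
small_K, big_K = 1.0, 100.0
for t in [0, 1, 2]:
    print(t, round(logistic(t, small_K), 4), round(logistic(t, big_K), 4))
# ...are nearly identical early on, so the early data alone cannot
# tell you where the curve will eventually flatten.
```

Which is Armstrong’s point: the interesting parameter, the ceiling, is exactly the one the early data says nothing about.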


Second order effects

In short, stop optimizing for today or tomorrow and start playing the long game. That means being less efficient in the short term but more effective in the long term. [… I]f you play the long game you stop optimizing and start thinking ahead to the second-order consequences of your decisions.

~ Shane Parrish from,

Fundamentally, we humans and our lives are not mathematically tidy.

Aside: I had a math course once—I can’t even remember the material—and the professor said, “it’s a very subtle point that mathematics should model and predict reality.” …or something to that effect. It was mind-bending; but math is part of reality so why wouldn’t reality model itself? *smoke-emits-from-my-ears* The scene, the room, the lighting, everything are burned into my brain.

Heuristics are always, in all cases, sort of true and sort of false, because they are imperfect. But the purpose of heuristics is to enable us to wrap our meager brains around the vastly complicated universe. Maths, as in compound interest, exponential growth, 1/r^2 forces, and Fourier transforms, provides models of reality. The comment about second order consequences challenges us to dig deeper into our heuristics, (which are otherwise known more generally as “models.”)

I’ve said this before, here on the blog and out loud: Have you intentionally created the models you have of the world?


The answer is, “2.”

In this situation, before committing to a three year PhD, you better make sure you spend three months trying out research in an internship. And before that, it seems a wise use of your time to allocate three days to try out research on your own. And you better spend three minutes beforehand thinking about whether you like research.

~ ‘jsevillamol’ from,

This one caught my eye because the vague heuristic of spending increasing amounts of effort at each attempt to solve a problem felt true. But I was thinking of it from the point of view of fixing some process— Like a broken software system that occasionally catches fire. Putting the fire out is trivial, but the second time I start trying to prevent that little fire. The third time I find I’m more curious as to why it catches fire, and why my first fix didn’t make a difference. The fourth time I’m taking off the kid gloves and bringing in industrial lighting, and power tools. The fifth time I’m roping in mathematicians and textbooks and wondering if I’m trying to solve the Halting Problem.

Turns out the context of the problem doesn’t matter. The answer is, “2.” Every time you attempt to solve a problem—any sort of problem, any context, any challenge, any unknown—the most efficient application of your effort is to expend just a bit less than twice the effort of your last attempt.

Not, “it feels like twice would be good,” but rather: Doubling your efforts each time is literally the best course of action.

…and now that I’ve written this, my brain dredges up the Exponential Backoff algorithm. That’s been packed in the back of my brain for 30 years. I’ve always known that was the chosen solution to a very hard problem. (“Hard,” as in proven to be impossible to solve generally, so one needs a heuristic and some hope.) They didn’t just pick that algorithm; turns out it’s the actual best solution.
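Here’s a sketch of the idea (the “full jitter” variant shown is one common formulation; the base and cap values are invented):

```python
import random

def backoff_delays(attempts, base=1.0, cap=60.0):
    # Binary exponential backoff with "full jitter": after the n-th
    # failure, wait a random amount up to min(cap, base * 2**n) seconds.
    # The randomness keeps many retrying clients from colliding in lockstep.
    return [random.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]

# The deterministic ceiling doubles with every retry, up to the cap:
print([min(60.0, 1.0 * 2 ** n) for n in range(7)])
# prints: [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```

Same shape as the research-career advice above: each successive attempt earns roughly double the effort (here, the wait) of the last.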


The Honeybee Conjecture

More than 2,000 years ago, Marcus Terentius Varro, a Roman citizen, proposed an answer, which ever since has been called “The Honeybee Conjecture.” He thought that if we better understood, there would be an elegant reason for what we see. “The Honeybee Conjecture” is an example of mathematics unlocking a mystery of nature.

~ From

Every once in a while, you will have the chance to be alive when a multi-thousand-year old mystery is solved. Humans are awesome. Mathematics for the win. *drops mic*
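For the curious, here’s a back-of-envelope version of the mathematics, using the standard formula for the area of a regular polygon:

```python
import math

def perimeter_per_unit_area(sides):
    # A regular n-gon with perimeter P has area P**2 / (4*n*tan(pi/n)).
    # Setting area = 1 and solving for P gives the wall length needed
    # to enclose one unit of area.
    n = sides
    return math.sqrt(4 * n * math.tan(math.pi / n))

# The three regular polygons that tile the plane. The conjecture
# (proved by Thomas Hales in 1999) says the hexagon beats not just
# these, but every possible way of partitioning the plane.
for n, name in [(3, "triangle"), (4, "square"), (6, "hexagon")]:
    print(name, round(perimeter_per_unit_area(n), 4))
```

The hexagon needs the least wax per unit of honey-storage, which is why the bees build what they build.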


Most people are not yet born

[…] recognize that at least in terms of sheer numbers, the current population is easily outweighed by all those who will come after us. In a calculation made by writer Richard Fisher, around 100 billion people have lived and died in the past 50,000 years. But they, together with the 7.7 billion people currently alive, are far outweighed by the estimated 6.75 trillion people who will be born over the next 50,000 years, if this century’s birth rate is maintained (see graphic below). Even in just the next millennium, more than 135 billion people are likely to be born. 

~ Roman Krznaric from,

50,000 years is, of course, somewhat arbitrary. But it’s a good estimate of the span so far of recognizably-like-current-us human history. It’s obvious that today, most people are already dead. It’s those trillion yet to come that warp the brain and create perspective.
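A quick sanity check shows the quoted figures are internally consistent:

```python
# If ~135 billion people are born per millennium (the article's
# this-century birth rate held constant), then over 50 millennia:
births_per_millennium = 135e9
millennia = 50  # 50,000 years
future_people = births_per_millennium * millennia
print(future_people / 1e12)  # in trillions; matches the quoted 6.75
```

Roughly 6.75 trillion future people against about 108 billion so far: the yet-to-be-born outnumber everyone who has ever lived by more than sixty to one.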

This article from The Long Now Foundation has 6 good examples of explicit ways to think long-term, rather than short-term.