“A Good Framework For Two Kinds of Uncertainty” – Annie’s Newsletter, June 1st, 2018

A GOOD FRAMEWORK FOR TWO KINDS OF UNCERTAINTY:
Aleatory (luck) & Epistemic (hidden information) –
Explained in a paper by Craig Fox and Gülden Ülkümen
h/t @CSBowles

After I appeared on the Rationally Speaking podcast with @JuliaGalef, a listener named Stuart Bowles (@CSBowles) was nice enough to share a relevant paper differentiating two kinds of uncertainty.

Thank you, Stuart, for bringing the work to my attention. The paper, “Distinguishing Two Dimensions of Uncertainty,” by Craig Fox and Gülden Ülkümen, provides a theoretical history and explanation for one of my favorite poker-based illustrations.

If you’ve read Thinking in Bets or heard me speak, you’re probably aware that I focus on two forms of uncertainty in poker: hidden (or unknown) information and luck.

  • Hidden information creates epistemic uncertainty – missing knowledge.
  • Luck creates aleatory uncertainty – the stochastic nature of the future.

In poker, epistemic uncertainty derives from your opponents’ cards being face down (and not knowing how your opponent will react to those cards or any action you might take at the table). Aleatory uncertainty derives from the random deal of the cards.

The paper starts with a very good example of both forms of uncertainty in President Obama’s decision to call for the attack on the compound believed to be sheltering Osama bin Laden.

First was the epistemic uncertainty of hidden information. For example, was bin Laden in the compound?

Second was the aleatory uncertainty of all the possible ways the mission could unfold. Examples of this would include the possibility of mechanical failure and how effectively the U.S. troops and bin Laden’s protectors would perform on that particular night.

I’ve used an example of flipping an ordinary coin to explain aleatory uncertainty. You can know everything about the coin, be epistemically certain about how often it will flip heads or tails in the long run, but you can’t be certain how it will land on the next flip.

That’s the stochastic nature of the future.
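The coin example can be made concrete with a quick simulation. Even with complete epistemic knowledge of the coin (we know it’s fair, so p = 0.5), each individual flip stays aleatorically uncertain; only the long-run frequency is predictable. This sketch just illustrates that point:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

# Epistemic certainty: we know everything about the coin.
p_heads = 0.5

# Aleatory uncertainty: no amount of knowledge predicts any single flip.
flips = [random.random() < p_heads for _ in range(100_000)]

# The long-run frequency converges toward 0.5...
long_run_frequency = sum(flips) / len(flips)
print(f"Long-run frequency of heads: {long_run_frequency:.3f}")

# ...but the next flip is still a coin toss.
print("Next flip:", "heads" if random.random() < p_heads else "tails")
```

The gap between the stable long-run frequency and the unpredictable single flip is exactly the aleatory residue that remains after all epistemic uncertainty is resolved.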

I highly recommend reading the paper for a better understanding of the two forms of uncertainty and the importance, in embracing uncertainty, of understanding their features and differences.

As the authors point out, since Pascal and Fermat in the seventeenth century, probability theorists have advanced different concepts based on these elements of uncertainty. That split continues today.

If you want to take a deep dive into uncertainty, this is a good place to go!


RESULTS, DECISION QUALITY, AND GOAL-SETTING
Thanks for the questions @HueyKwik

After the podcast with Julia, another listener, @HueyKwik, asked a two-part question.

I’m assuming your first question is about how much weight you should give a result as a signal of decision quality. You should always be evaluating your decision process and your beliefs. Looking at the result is part of that.

You should pay very little attention to the outcome if it’s a single result of a stochastic process.

If you happen to be in a situation where the outcome is very tightly linked to decision quality, you can look at one outcome and use that as feedback.

If you’re playing chess, and you lose, you can use that as feedback as to your relative skill compared to your opponent.

If you’re driving on the wrong side of the road and you get into an accident, you can use that as feedback about the quality of the decision to drive on the wrong side of the road.

Those outcomes are tightly linked to the decisions that preceded them.

But in most things, there’s a relatively loose connection between outcomes and decision quality. When there is a loose connection between outcome quality and decision quality, the data is noisy. So you should give less weight to any single result.

Single results can actually be counterproductive by casting a strong cognitive shadow over your ability to see clearly through to the quality of the decision.

Regarding your second question (goal-setting) – For teams, make sure the team is rewarding behavior promoting better decision quality rather than setting goals that are outcome-oriented.

What’s the environment like for the team?

Are members encouraged to express contrary opinions and play “devil’s advocate”?

What happens when someone admits they made a mistake or changed their mind?

I tweeted earlier this week about Etsy’s “Blameless Post-Mortem” process, following engineering errors.

As far as setting goals for yourself, the goals themselves are pretty simple:

Am I examining my own opinions?

Am I being open-minded to other views?

Am I giving credit and accepting blame (instead of following the tendency to do the opposite)?

There are plenty of other similar measures.

The tricky part is creating accountability for those goals. In time, you can learn to feel good about identifying your mistakes, or about learning something from a source you’d otherwise ignore.

But our default is to defend our self-narrative against doing those things.

That’s why it’s good, if you don’t have a team, to get one. It helps enormously to have someone complimenting you for questioning your beliefs or decisions (and reminding you when you’re not).


DECISION-SCIENCE PLEASURE READING
@HBSWK’s article collecting its recent work on decision making

On Twitter earlier this week, I pointed out that there’s a great collection of recent Harvard Business School Working Knowledge decision-making articles.
(Apologies to Sean Silverthorne, EIC of HBSWK, whose Twitter name I misspelled. It should be @SeanSilverthorn.)

I was pleased to see that several people who follow me on Twitter look at items like this that I link to.

Even better, some had amusing comments!

Among them: asset manager Jim O’Shaughnessy (@JPOShaughnessy) and economist Mark Frost (@FrostieCash).
I initially tweeted it because the collection of articles seemed worth checking out. I’m glad it’s been fun and worthwhile.


IF YOUR CONSTITUENCY IS LIKELY TO RESULT, SHOULD THAT AFFECT YOUR DECISIONS?
Revisiting Pete Carroll’s controversial Super Bowl play-call

One of the stories from Thinking in Bets that has drawn the most attention is how Seattle Seahawks coach Pete Carroll was excoriated for his play-calling after QB Russell Wilson’s pass was intercepted in the closing seconds of the 2015 Super Bowl against New England. (It leads off the first chapter of the book. You can read the story on Amazon.com by using the “Look Inside” feature.)

Carroll had reasonable justifications grounded in math (getting three shots at a touchdown instead of only two if the first play is a pass) and strategy (being capable of calling a wider range of plays to keep opponents from getting an advantage by anticipating your actions).

The chance of an interception in that situation, taking into account the last 15 years of data, was only 1-2%. But that’s what happened, and that improbable result became the basis of widespread criticism (“dumbest call in Super Bowl history”).
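The “three shots versus two” logic can be sketched numerically. The per-play touchdown probability below is a purely hypothetical assumption chosen to show the structure of the calculation; only the 1-2% interception rate comes from the data cited above:

```python
# Illustrative sketch of Carroll's "three shots vs. two" reasoning.
# p_td_per_play is a hypothetical assumption, not a figure from the book.
p_td_per_play = 0.35     # assumed chance that any single play scores
p_interception = 0.015   # roughly the 1-2% historical rate cited above

def p_score(attempts: int) -> float:
    """Probability of scoring at least once in `attempts` independent plays."""
    return 1 - (1 - p_td_per_play) ** attempts

# Passing first preserves three attempts; the justification cited above
# is that running first would leave only two.
print(f"Score within 3 plays: {p_score(3):.3f}")
print(f"Score within 2 plays: {p_score(2):.3f}")
print(f"Chance the pass is intercepted: {p_interception:.3f}")
```

Whatever per-play probability you assume, three independent attempts beat two, and the rare interception is the tail risk that, in this case, actually happened.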

Apparently, some players on the Seahawks began questioning Carroll’s coaching after that play. That’s according to an article on @ESPN describing comments by recently released Seahawk Cliff Avril on a recent podcast.

This raises an interesting question: If a bad outcome following an unconventional choice will cause lasting effects on morale, does it make sense to forgo a mathematically/strategically better play given the downstream effects of resulting?

Ideally, you want to develop a culture where the people involved focus on process rather than short-term results.

If you can get your constituency to buy into that, creating a culture that embraces process-driven decision-making, you can make the highest-equity decision without worrying about downstream erosion of morale.

That’s easier said than done.

After all, Pete Carroll had already coached the Seahawks to a Super Bowl victory the year before. You’d think there would be buy-in to his process given his record.

Perhaps it’s especially difficult to get away from results-oriented thinking in sports.

(I previously wrote in the newsletter about lessons related to sports-related resulting, in items about a study of NBA coaching decisions and in the resignation letter of former Philadelphia 76ers head of basketball operations Sam Hinkie.)

At the very least, stories about discord among the Seahawks point out the importance and difficulty of getting people to buy into a process-oriented culture.

There’s obviously some disconnect if players believe a play-call represents the coach’s judgment about team members.


THIS WEEK’S ILLUSION
A classic – the Ebbinghaus illusion

Thanks to @UofGCSPE for collecting these on IllusionsIndex.org.