
“Different Kinds of Bad Luck” — Annie’s Newsletter, December 14, 2018

DIFFERENT KINDS OF BAD LUCK – When luck keeps information hidden from view
BLIND TO LUCK – How WE keep luck hidden from view
BEING SMART MAKES IT WORSE, REDUX – And how science curiosity helps
THE STRONG BOND OF DISLIKE – Liking things gets you admiration, but disliking things gets you everywhere
This will be the final edition of the newsletter for 2018. The newsletter will return on Friday, January 18, 2019. Have a wonderful holiday season! – xoxo Annie



When luck keeps information hidden from view

This is how we usually think of luck: when we make a decision, we recognize there is a range of possible outcomes. A good decision improves the likelihood we’ll be on the positive end of that range.

But once we have made the decision, the outcome that actually happens is a matter of luck.

When Pete Carroll called the pass play in the Super Bowl, there was a range of outcomes for that play: touchdown, incompletion, inability to throw the pass (sack, fumble, scramble-touchdown), and interception.

The quality of Carroll’s decision depends on the likelihood of winning the game with that play compared to the other options available.

But once he calls the play, the outcome that actually happens isn’t in his control. It’s a matter of luck.

We think about luck as bad if the result of a decision is on the undesirable end of the range of things that could happen, especially if the bad outcome is really unlikely. The result of that pass play that Pete Carroll called, the game-ending interception, had a likelihood of only 1-2%.

That’s bad luck.
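To make the separation between a good call and a bad result concrete, here is a minimal Python sketch. The 2% interception probability is the newsletter’s own figure for that play; the function name, trial count, and random seed are illustrative assumptions, not anything from the source. The point it shows: repeat the very same good decision thousands of times, and the unlikely bad outcome still lands a couple hundred times.

```python
import random

random.seed(42)  # illustrative seed for a repeatable run

def run_play(interception_prob=0.02):
    """Simulate one pass play. The call itself never changes;
    luck alone decides which outcome we get."""
    return "interception" if random.random() < interception_prob else "ok"

# The same good decision, made 10,000 times
trials = 10_000
bad = sum(run_play() == "interception" for _ in range(trials))
print(f"Bad outcomes: {bad} of {trials} ({bad / trials:.1%})")
```

Roughly 2% of the runs end in an interception, even though the decision was identical every time. The decision quality never moved; only the luck did.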

But there is another way to think about bad luck. Bad luck doesn’t just apply to falling on the bad end of a range of uncertain, undetermined outcomes, some of which are good and some of which are bad to varying degrees and likelihoods.

Because of bad luck, you can make a perfectly good decision where only one outcome is actually possible, and that outcome is bad.

That can happen when the information that would let us know things can only turn out poorly is hidden from view and undiscoverable.

Proceeding through a green light at an intersection is obviously a good decision.

But what if the traffic signal is broken and drivers approaching the intersection from the other road also have a green light?

We can’t see this malfunction from our perspective. We can’t know that all the lights are now green. That information is hidden from us.

If we collide with a car crossing from another direction on their malfunctioning green light, we made a good decision, but instead of having the likelihood of a good outcome far above 99%, we’re playing Russian Roulette.

That’s bad luck.

And that’s how much of a separation there can be between decision quality and outcome quality.

We need to be aware of the influence of luck on the information that we can bring to bear on our decisions. And we also need to be careful about how we use luck to explain a bad outcome.

It’s true that we can make a good decision with good information and, because there is a range of possible ways things might turn out, still end up with a bad result. Sometimes it’s not our decision quality; it’s the luck of the draw.

But the possibility of luck as the reason for a poor outcome can also become a way to offload responsibility. We tend to reflexively blame our own bad outcomes on luck, on not having control over which among the range of possible outcomes actually occurs.

That’s self-serving bias, and we lose a lot of learning opportunities to it.

Similarly, crucial information being hidden from view may be the primary reason for a poor outcome. If we couldn’t have known, then we can still have made a great decision where luck intervened in the form of hiding a key input.

But the possibility that we couldn’t have known can also be a way to offload responsibility for the way things turn out.

Have you ever said, “How could I have known that?”

I know I have.

Sometimes, maybe you couldn’t have known. But sometimes you may be using that possibility to wriggle away from the bad feeling that a poor outcome is due to your poor decision making.

It’s nothing to feel bad about if some piece of information reveals itself after the fact that would have changed our decision. That happens to all of us. We aren’t perfect.

It’s only a problem when we don’t examine whether we could have known.

Sometimes the answer will be no. We could not have known.

But sometimes, upon closer examination, we find out we could have discovered that information in advance.

In that case, we shouldn’t beat ourselves up as long as we incorporate what we learned into our process going forward.

But if we just say, “I couldn’t have known. That was bad luck,” we are making a poor trade: feeling better in the moment in exchange for missing learning opportunities that will improve future decisions.


How WE keep luck hidden from view

A recent David Dunning tweet pointed me toward a remarkable post on the Stumbling and Mumbling blog, titled “Blind to luck.” The piece is a meditation on the Tim Harford quote, “It’s easy to overlook luck.”

I could quote every line as an interesting finding or lesson, but I’ll share just a few, and I hope you’ll read the blog yourself.

Nattavudh Powdthavee and Yohanes Riyanto found that students in Singapore and Thailand betting on tosses of a fair coin were willing to pay to back the bet of students who had correctly called previous tosses. Research in Barcelona found exactly the same thing.

That’s pretty crazy. We all know that the flip of a coin is random. Yet people will pay for the opportunity to bet on someone who randomly called a coin correctly. No skill. Just luck. And they will pay for the chance.
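The arithmetic behind that finding is easy to check with a quick simulation. The student count and flip count below are illustrative assumptions, not figures from the studies: with fair coins, every call is a pure 50/50 guess, yet a sizable handful of guessers will still compile a perfect record.

```python
import random

random.seed(7)  # illustrative seed for a repeatable run

def perfect_record(n_flips=5):
    """One student guessing n fair coin tosses.
    Each call is right with probability 0.5, regardless of any 'streak'."""
    return all(random.random() < 0.5 for _ in range(n_flips))

students = 1_000
perfect = sum(perfect_record() for _ in range(students))
# Expected count: 1000 / 2**5 = 31.25 perfect records by pure chance
print(f"{perfect} of {students} students called all 5 flips correctly")
```

Around thirty students out of a thousand will look like coin-flip savants. There is nothing to back; the streak is baked into the math of randomness.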

The blog also illustrates the good-luck dimension of self-serving bias with this quote: “As Ed Smith writes in his lovely book, Luck: ‘randomness is routinely misinterpreted as skill.’”

One side of self-serving bias is that we blame bad outcomes on bad luck. But the other side of the bias is that we take credit for good outcomes that are due to randomness. This is what Smith is getting at.

Or as E.B. White so perfectly said, “Never mention luck in the presence of a self-made man.”

I would have tweeted the blog’s great conclusion, but Dunning got there first.

These are a few of the gems in this short blog. And there are plenty more; I left them for you to find on your own.


But science curiosity helps

Dan Kahan recently wrote a piece for Scientific American on the effects of tribe, science literacy, and science curiosity on motivated reasoning, “Why Smart People Are Vulnerable to Putting Tribe Before Truth.”

The article is a nice overview of why being smart (in this piece the focus is on science-literacy smarts) makes tribal reasoning worse, while being science-curious makes it better.

As people become more scientifically literate, more adept at scientific reasoning, they become more polarized in their opinions on the politically charged issues of climate change, nuclear power, gun control, and fracking.

We know that science-literate people are capable of interpreting data and research findings more accurately. We also know they are better at fitting the evidence to their partisan point of view.

But they don’t always do the former (truth-finding), and they frequently do the latter (motivated reasoning).

The question is why?

Kahan suggests that it has to do with skin in the game.

Take climate change. Few members of the public, as individuals, can have any significant impact on climate. Therefore, if they misinterpret the science, there really aren’t any consequences, certainly not in the form of a noticeable impact on climate change or climate policy.

Getting it wrong isn’t a high-stakes mistake for what happens with nuclear power or gun control or fracking.

But if you use your science proficiency to signal your tribal membership and improve your status in the tribe, that’s valuable.

On the flip side, send out the wrong signal and you might be ostracized by the tribe.

That’s where the skin in the game is.

Kahan’s article concludes with some good news. Curiosity – “a hunger for the unexpected, driven by the anticipated pleasure of surprise” – can reduce tribally motivated reasoning.

People who score high on the Science Curiosity Scale are less divided. According to experimental data:

“Afforded a choice, low-curiosity individuals opt for familiar evidence consistent with what they already believe; high-curiosity citizens, in contrast, prefer to explore novel findings, even if that information implies that their group’s position is wrong. Consuming a richer diet of information, high-curiosity citizens predictably form less one-sided and hence less polarized views.”

Most of us harbor a belief that, armed with our knowledge and abilities, we can avoid cognitive traps. Jim O’Shaughnessy, whose outstanding manifesto on investing and probabilistic thinking I shared in last week’s newsletter, captures this attitude:

“24/I think I know that the majority of active stock market investors—both professional and aficionado—will secretly believe that while these human foibles that make investing hard apply to others, they don’t apply to them.”

This latest article from Kahan offers a silver lining to the bad news that our smarts don’t protect us from motivated reasoning by offering a practical, appealing antidote to bias: Curiosity.


Liking things gets you admiration, but disliking things gets you everywhere

Markham Heid recently wrote a piece on Medium with the provocative (and descriptive) title, “How Shared Hatred Helps You Make Friends.”

The piece is a good review of research showing that while shared opinions create bonds between people, shared dislikes create especially strong ones.

Shared dislikes actually create faster, closer bonds than shared likes do.

This sheds light on tribalism on numerous levels: in individual relationships, on social media, and in our polarized political system.

We know that two of the things we get from tribe are belongingness and distinctiveness. Belongingness comes from membership in a group with shared interests and goals. Distinctiveness comes from having different likes and goals than those in other tribes.

But in another way, we can think of belongingness as cohesion around shared likes and distinctiveness as cohesion around shared dislikes.

For example, let’s say you’re waiting for your espresso macchiato at Starbucks and the person next to you is also waiting for an espresso macchiato. “Hey, we both like espresso macchiato!” That shared like is part of belongingness.

Or you’re in that line and hear someone at a table talking on their phone, oversharing in a way-too-loud voice. You subtly cringe and shake your head. The person next to you nods and rolls their eyes. Together, you’re communicating, “We both hate inconsiderate people with no manners or boundaries.”

That’s distinctiveness.

In the first example, what we share is inclusive. (“We’re both espresso macchiato lovers.”) In the second, what we share is exclusive. (“We both dislike that loud, rude, inconsiderate person, so we are not like them.”)

We’d like to think belongingness is the stronger bond, but we’re seeing that distinctiveness is an incredibly strong social need. When people bond over a shared negative opinion, they’re bonding over the way they are distinct from someone out-of-tribe.

The piece cites several experiments by psychology professor Jennifer Bosson, finding that

“Disliking the same thing about a person can help strangers bond more effectively than if they share the same positive opinions. The stronger the shared dislike, the closer the resulting bond is likely to be.”

This also applies to larger groups. Synthesizing the findings of several researchers, Heid explains that, “Even within groups or tribes, shared negative opinions are often more appealing to us than shared positive ones.”

That’s a pretty good summary of what makes Twitter tick.

The piece also quotes psychology professor Frank McAndrew to this point:

“’The internet is taking our primitive thirst for gossip and reputation-seeking and bonding with like-minded people and amplifying it a thousand times,’ McAndrew says. Sharing hatred online may be a great way for disparate communities of strangers to form bonds and to feel more connected with one another, but it can also delude us into thinking our poor opinions about certain groups are normal and justified, he says.”

This, in turn, leads to bonding politically over what we dislike. Political tribes can appeal to common values (belongingness) but what’s polarizing us is the appeal to the awfulness of the other side (distinctiveness).

Distinctiveness itself can be self-elevating, but it feels like we elevate ourselves more by criticizing and denigrating others.

We see everywhere that political interest is reaching high levels: voter turnout in the midterm elections, the attention devoted to political news, the attention to political topics on social media, and even the attention late-night talk shows pay to the latest political developments.

Yet we’re not bragging so much about the greatness of our side’s leaders and their values. What is bonding us together politically is not so much our common values anymore.

Increasingly, it’s hatred of the other side. 

According to a Pew Research Center study, Democrats with “very unfavorable” attitudes about Republicans rose from 16% in 1994 to 38% in 2014. For Republicans, “very unfavorable” attitudes about Democrats rose from 17% to 43%.

27% of Democrats thought the GOP was a threat to the well-being of the country. 36% of Republicans thought this about Democrats.

And that was back in 2014! As I pointed out in the October 12 newsletter (thanks to a Jon Haidt tweet, referring to a recently published book by Marc Hetherington and Jonathan Weiler, Prius or Pickup?: How the Answers to Four Simple Questions Explain America’s Great Divide), political hatred of the other party is on the rise.


The paper plate illusion

I don’t even remember where I first saw this, but it’s gone viral.