“Epistemic Spillovers” – Annie’s Newsletter, April 27th, 2018

EPISTEMIC SPILLOVERS
“I don’t like that plumber’s bumper sticker. I’m sure I can install my own toilet.”

Last week, I wrote about the difference between epistemic bubbles (lack of exposure to information from opposing-view sources) and echo chambers (identification with a tribe where out-group sources are not trusted).

So if you’re a Republican, you’re not going to trust what Democrats say about political issues and vice versa.

But what if you identify with one party and find out your doctor identifies with the opposing party?

It might lead you to distrust political information from that doctor, but it wouldn’t affect what you think about that doctor’s medical advice … right?

According to a new study, it might just do that.

Joseph Marks and three other behavioral researchers at University College London, along with Harvard law professor Cass Sunstein (@CassSunstein), showed that the trust of political in-group members and distrust of political out-group members spills over into areas that have nothing to do with politics.

They had participants learn about others’ political opinions and their level of expertise in an unrelated area. (In the experiment, it was the ability to categorize geometric shapes.)

If you’re a Democrat, it would be good to realize that your Republican plumber has valuable expertise on the unrelated issue of why your toilet isn’t working.

If you ignore that because you don’t like the plumber’s politics, that’s literally and figuratively a spillover that you’ll regret.


THE “MATTHEW EFFECT” IN SCIENCE GRANTS
Early winners have a permanent advantage over early non-winners –
It could mean winners are more skilled – 
(Or that an early lucky break snowballs)

A recent study published in the Proceedings of the National Academy of Sciences (PNAS) validated the Matthew effect in science funding.

The Matthew effect, coined by one of my decision-making heroes, Robert K. Merton, represents the idea that success makes it easier to achieve more success. That doesn’t mean the initial success was unearned – or that the subsequent success was unearned. But someone who gets an early lucky break seems to have a significant subsequent advantage over someone equally skilled who didn’t get that break.

One of the authors of the PNAS study, @Thijs_Bol, summarized the finding in science grants:
We like to think that skill is responsible for how our lives turn out. But this study shows that luck can play an outsized role.

Notice the PNAS study looked at people just above and just below the funding threshold. When the decision is so close, it’s reasonable to assume that whether a person was awarded the grant was largely a matter of luck. It’s also reasonable to assume that the two applicants would be close in skill.

If the funding was a close call and the applicants were close in skill, one might assume that when we look at their careers many years later, they would still be close in funding.

But the chart shows that, eight years later, applicants originally very close had a massive funding gap, with those marginally more successful early now far ahead.

My guess as to what’s going on: once an applicant gets that initial funding, resulting takes hold. In subsequent funding decisions, evaluators are likely influenced by prior funding decisions, treating the award (or lack of one) as a strong signal of qualification.

Eventually, those same two applicants – who initially seemed very similar – now look completely different. One applicant has a strong record of receiving funding, and the other has been repeatedly turned down.

The difference in these close-call situations reminded me of an item I mentioned a few weeks back, about resulting being alive and well in the NBA. That item was based on a study of lineup changes by NBA coaches, described in @BehScientist. It’s pretty non-controversial to say that the difference between a one-point win and a one-point loss is well within the margin of error: winning or losing by a point signals no real difference in decision-making quality. Even so, coaches changed lineups much more often following the one-point loss.

When we can’t see inside every outcome, those close-call wins or losses can have decisive impacts. For coaches, a one-point loss triggers a lineup change where a one-point win does not.

In science grants, the marginal early advantage creates a snowballing advantage in funding over time.
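The snowball dynamic can be sketched in a toy simulation. Everything here is an illustrative assumption on my part (the weights, the noise level, the threshold), not a parameter from the PNAS study: two equally skilled applicants differ only in whether they won the initial close-call award, and later evaluators mix a noisy read of true skill with the applicant’s track record.

```python
import random

def simulate_career(initial_award, n_rounds=8, record_weight=0.5, seed=0):
    """One researcher's funding record over several later grant rounds.

    Both applicants have identical true skill; the only difference is
    the (lucky) outcome of the first, threshold-line decision. Each
    later evaluation blends a noisy estimate of skill with the
    applicant's prior track record -- the snowball mechanism.
    """
    rng = random.Random(seed)
    skill = 0.5
    awards, attempts = (1, 1) if initial_award else (0, 1)
    for _ in range(n_rounds):
        attempts += 1
        track_record = awards / attempts          # share of past rounds won
        noisy_skill = skill + rng.gauss(0, 0.1)   # evaluators see skill imperfectly
        score = (1 - record_weight) * noisy_skill + record_weight * track_record
        if score > 0.5:                           # funding threshold
            awards += 1
    return awards

# Average over many simulated careers: the early winner's record snowballs.
n = 2000
winner_avg = sum(simulate_career(True, seed=i) for i in range(n)) / n
loser_avg = sum(simulate_career(False, seed=i) for i in range(n)) / n
```

Run this way, the early winner ends up with a far larger funding record than the equally skilled early loser, purely because the first award feeds the track-record signal that evaluators lean on in later rounds.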


WATCH OUT FOR FOBO (FEAR OF A BETTER OPTION)
Just as catchy an expression as FOMO – and even worse for decision making

Recently, I read a @Mental_Floss article about how discomfort with uncertainty causes analysis paralysis. Obviously, it’s a familiar concept in decision making, but the author, Kate Horowitz (@delight_monger), gave it a new acronym, FOBO.

Psychologists Jeffrey Hughes and Abigail Scholer, whose work is described in the article, shed more light on the decision processes of maximizers and satisficers. Maximizers try to get as close as possible to making the optimal decision, seeking certainty by exhaustively exploring every option, opinion, and possibility.

Satisficers try to make a decision that’s “good enough,” realizing the diminishing returns of continually seeking more data, the costs of indecision, and the uncertainty that makes it impossible to guarantee an outcome in the first place.

Embracing uncertainty is how you become a satisficer.

Maximizers are chasing a decision-making unicorn. They believe that certainty around a decision is attainable.

But certainty is impossible to attain. With any decision you make, there’s always the possibility of an option that could have worked out better, because perfect information is nearly always out of reach and the influence of luck is very hard to estimate.

Trying to achieve this unachievable certainty can paralyze decision making. Once you accept that you can’t know everything and there are factors outside your control, you can be more comfortable committing to a decision that’s good enough.

Jeff Bezos said, in his 2017 letter to Amazon.com shareholders, “most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you’re probably being slow.”

Jeff Bezos is a satisficer.
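Bezos’s 70% rule of thumb can be illustrated with a toy model. The functional forms and the cost parameter below are my assumptions, not anything from the shareholder letter: decision quality improves with diminishing returns as you gather more information, while waiting for more information carries a linear delay cost.

```python
import math

def decision_value(info_fraction, delay_cost=0.6):
    """Net value of deciding with a given fraction of the information.

    Quality grows with diminishing returns (square root) while the cost
    of waiting grows linearly. Both curves and the 0.6 cost parameter
    are illustrative assumptions, chosen only to show the shape of the
    trade-off.
    """
    return math.sqrt(info_fraction) - delay_cost * info_fraction

# Scan information levels from 1% to 100% to find where net value peaks.
best = max((p / 100 for p in range(1, 101)), key=decision_value)
```

Under these assumptions the peak lands well short of 100% information: past a point, each extra bit of certainty costs more in delay than it adds in decision quality, which is the satisficer’s intuition in miniature.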


CAN YOU WIN WITHOUT ACKNOWLEDGING MISTAKES?
Karl Popper suggests we can’t –
Phil Tetlock weighs in –
Perhaps it comes down to what you’re trying to “win”

For #MondayMotivation, I tweeted something I recently read from philosopher Karl Popper about the importance of acknowledging mistakes:

[Tweet image: the Karl Popper quote on acknowledging mistakes]

Phil Tetlock (@PTetlock) saw my tweet and asked me how it applied to poker players:
[Tweet image: Phil Tetlock’s question]

My answer:
[Tweet image: my reply to Phil Tetlock]


A SELF-DRIVEN CAR CAN KILL SOMEBODY …
But how often, compared to human-driven cars?

Last week, @Smerconish posted an article I wrote, “Self-Driven to Distraction,” about the recent reaction to a pair of auto accidents involving self-driving cars.

The immediate reaction was to take the self-driving cars off the road, along with a lot of rhetoric suggesting that the technology must not be ready for the road, if such cars can be in fatal accidents.

This is a very good example of resulting (with some hindsight bias sprinkled in for good measure).

The general consensus seems to be that the tragic result means that it was a bad decision to have autonomous vehicles on the road in the first place.

What was missing from most of the coverage was a comparison of how safe, per mile driven, autonomous vehicles are relative to human-driven cars.

In other words, the analysis was mainly about the tragic result rather than the quality of the decision to have the cars on the road, which demands a look at the comparative safety.

Resulting has real and lasting effects on policy and innovation.

For a deeper dive, check out the piece.


THINKING IN BETS YOUTUBE PLAYLIST!
A wonderful, thoughtful gift from @MikeDariano

Mike Dariano created a Thinking in Bets YouTube playlist. Included in the playlist are scenes from WKRP in Cincinnati (“As God is my witness, I thought turkeys could fly”), The Princess Bride (the battle of wits with Vizzini), The Matrix (blue pill or red pill?), and Seinfeld (night guy vs. morning guy). It also has clips of Pete Carroll’s interview on Today (his “admission” that “it was the worst result of a call ever”), Dan Kahan on motivated reasoning, Charles Duhigg on how to break habits, the Bartman play, and lots more.

Thank you so much, Mike, for the thought and time that went into compiling the playlist.

(Plus it includes David Letterman interviewing Lauren Conrad, which must improve, at least slightly, the chance lettermanning will become a verb.)


THIS WEEK’S VISUAL ILLUSION

This week’s visual illusion is from @KendraCherry @VeryWell and VeryWellMind.com, The Kanizsa Triangle Illusion:

[Image: the Kanizsa Triangle illusion]