
“My-Side Bias, Part II” — Annie’s Newsletter, October 26, 2018

1. MY-SIDE BIAS, PART II - The counterintuitive effect of training on mitigating bias
2. WHAT FRESH HELL ...? - Midterm-election polls, pre-election expectations, and Nate Silver's sobering message
3. TRANSPARENCY MATTERS - But sometimes HOW MUCH it matters depends on whether you're looking OUT or looking IN
4. IS EIGHTY MILLION SUPPOSED TO BE A LOT? - Netflix seeds its narrative with a potentially meaningless statistic
5. TEST YOURSELF - Did these 21 social science experiments replicate?

MY-SIDE BIAS, PART II
The counterintuitive effect of training on mitigating bias

In last week’s newsletter, I wrote about a recent study on my-side bias, a piece of the motivated reasoning puzzle.

My-side bias affects the way we process information. If the subject matter involves a divisive political issue, our ability to perform reasoning tasks (like math or logic) is impaired, compared with how well we do those tasks with data on neutral subjects.

The study I discussed last week found that subjects’ ability to identify valid and invalid syllogisms – which involves internal logic – varied based on whether the assumptions fit or clashed with their views on abortion. (H/t Gurwinder, whose tweet initially brought this research to my attention).

But I want to focus on an additional tidbit I didn’t point out in last week’s newsletter that might be easy to overlook: People who are trained in logic are more susceptible to my-side bias.

Dan Kahan has shown in his work that being more numerate can make bias worse.

He found that for politically charged topics like gun control, people with greater mathematical skill were not better at rationally assessing the data. Whether good with statistics or not, people assessed the data in ways that supported their prior political leanings.

As Kahan points out, “This pattern of polarization … does not abate among high-numeracy subjects. Indeed, it increases.”

The study on my-side bias found something similar:

“What’s more, this ‘my-side bias’ was actually greater among participants with prior experience or training in logic (the researchers aren’t sure why, but perhaps prior training in logic gave participants even greater confidence to accept syllogisms that supported their current views – whatever the reason, it shows again what a challenge it is for people to think objectively).”

We might imagine that if someone impressed on us the effect of a bias, we would be better at avoiding it.

We might imagine that people with training or experience (in statistics or logic) would be less affected by their own bias in assessing the validity of a claim.

But that imagining doesn’t hold up to reality.

It’s interesting to hypothesize that experience or training makes people more confident in their intuitive responses, or that it gives them the tools to more easily spin the data to fit their pre-existing beliefs.

Whether experience or training makes bias worse is of relatively little consequence, compared to the important finding that experience or training does not make it better.

One of the reasons why cognitive biases are so troubling: knowing about them isn’t enough to help us avoid them.

Since we can’t avoid bias altogether (or even get close), we have to be realistic about how much we can accomplish, and be dedicated to incremental, continuous improvement.


WHAT FRESH HELL …?
Midterm-election polls, pre-election expectations, and Nate Silver’s sobering message

In the September 7 newsletter, I wrote about a FiveThirtyEight article from Dhrumil Mehta and Janie Velencia concerning the polls in the Cruz-O’Rourke Senate race.

The July and August polls had O’Rourke behind Cruz, but by a smaller margin than previously expected. For people thinking it was time to bet even-money on a Democrat winning a Texas Senate seat, the analysts said, “We still say you should hold on to your chips.”

My earlier item focused on how a poll is not the same thing as a forecast. Mehta and Velencia made that point nicely, highlighting the significant uncertainty in polls conducted several months before an event.

Since early September, if you look at FiveThirtyEight’s aggregation of the polling in Texas, Cruz’s lead has held (or increased slightly).

Because we’re much closer to the election, that polling lead now translates into a greater probability that Cruz will win (79.3% on Oct. 25).
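
To see why the same lead means more as Election Day approaches, here's a rough sketch (my own toy illustration, not FiveThirtyEight's actual model, and the numbers are made up): treat the final margin as normally distributed around the current polling lead, with less spread the closer we get to the vote.

    from math import erf, sqrt

    def win_probability(lead_pts, margin_sd):
        # P(final margin > 0), assuming the final margin is normally
        # distributed around the current polling lead with spread margin_sd.
        z = lead_pts / margin_sd
        return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

    # Hypothetical numbers, for illustration only.
    lead = 6.0                                    # a 6-point polling lead
    print(win_probability(lead, margin_sd=10.0))  # ~0.73 months out (lots of uncertainty)
    print(win_probability(lead, margin_sd=5.0))   # ~0.89 close to the vote (less uncertainty)

The lead didn't change; the uncertainty around it shrank, and that alone moves the probability.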

Washington Post media columnist Margaret Sullivan checked in with Nate Silver in mid-October about the midterm elections. I recommend you read the piece, “Nate Silver will make one firm prediction about the midterms. Most journalists won’t want to hear it.”

Silver talks about the Texas Senate race and how the media overstated O’Rourke’s chances in July and August but are now understating them.

“When Silver’s forecast had O’Rourke’s chances of upsetting Cruz at a 35 percent probability, the media chatter had it as almost a toss-up. Now that those chances have dropped to about 25 percent, the prevailing narrative has downgraded O’Rourke almost to dead-man-walking status.

“‘That’s not a night-and-day change, but that’s how it’s being talked about,’ Silver said.”

Silver is making an astute point about our weaknesses in evaluating probability.

In Thinking in Bets and other places, I’ve said that if we aren’t explicit about probability, we have a strong tendency to default to zero or 100%.

Silver offers a good reminder that it’s not really helpful if media forecasts default to zero or 100% (or 50-50).
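
One way to resist that pull toward zero or 100% is to translate the probabilities into frequencies. The little simulation below is just an illustration of Silver's numbers, nothing more: a 25% underdog still wins about one race in four, which is a long way from dead man walking.

    import random

    random.seed(2018)

    def upsets(win_prob, n_races=10_000):
        # Count how many of n_races the underdog wins at the given probability.
        return sum(random.random() < win_prob for _ in range(n_races))

    for p in (0.35, 0.25):
        wins = upsets(p)
        print(f"At {p:.0%}, the underdog won {wins:,} of 10,000 simulated races "
              f"(roughly 1 in {10_000 / wins:.1f}).")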

We need to remember that the way we communicate what the polls are telling us has real consequences for how we process and make predictions about the world, not to mention whether we trust what the data is telling us.

Initially, the media overstated O’Rourke’s chances, which distorted the public’s view of how likely he was to win and what the margin of victory in the race might be.

You can easily imagine the stories that would result from that overstatement in the case of a substantial Cruz victory: The polls got it wrong!

Now, they are understating O’Rourke’s chances of an upset. Again, this sets up an opportunity to blame the polls.

Neither situation is an accurate representation of how the polls reflect the probabilities, and this deepens our mistrust of data.

But the problem isn’t the data – it’s the way we’re miscommunicating or misunderstanding the data.


TRANSPARENCY MATTERS …
… But sometimes HOW MUCH it matters depends on whether you’re looking OUT or looking IN

Ted Seides wrote a thought-provoking article pointing out situations where seeking transparency may not be to our advantage.

He examines concepts of “edge” and “transparency.” His primary examples are Billy Beane (Oakland A’s GM and now VP of Baseball Operations) and Jim Simons (Renaissance Medallion Fund), but his point gives us something to consider in other situations.

Beane’s Moneyball approach made the small-spending Oakland A’s a contender and a powerhouse from 1999-2006, and he did it again in 2012-2014.

(Plus he’s entitled to go around for the rest of his life and say, “You never heard of me? Brad Pitt played me in a gigantic hit movie,” which I imagine he’d never do but he earned.)

But, as Seides points out, Beane acknowledged at a Goldman Sachs conference in 2012 that edges don’t last for long:

“… data and analytics in baseball were becoming ubiquitous, and all his peers would soon be on equal footing. He predicted that the teams that make the playoffs five years hence would be the ones with the highest payroll, allowing them to pay for the players everyone agreed were the best in an efficient market for talent.”

Indeed, his analytics-focused approach has spread through baseball (and all professional sports). He got a little break on his 5-year prediction due to industry stubbornness, but 9 of the 10 MLB playoff teams in 2018 had big payrolls and strong analytics departments.

The 10th team, the one without the big payroll, was the A’s. The 2018 A’s had the 3rd-lowest payroll in baseball. They won 97 games (+21 from FiveThirtyEight’s pre-season forecast) and overcame an 11.5-game deficit in June to nearly upend the defending-champion Houston Astros.

Beane has clearly figured something else out, but he hasn’t shared it, so no one else has caught on.

Building and maintaining an edge in investing is similarly tricky. If something succeeds, others usually figure it out and copy it.

This is where the conflict with transparency kicks in. Investment managers have to disclose enough of their secret sauce to attract clients and inspire trust, but guard what they’re doing to keep a lead on the competition.

Seides illustrated the situation with this schematic:

Unless you have a structural advantage – he gives the example of Vanguard, which operates on a scale that others can’t replicate, so they don’t have to be opaque about their edge – transparency can reduce your edge.

It’s a paradox because if you value transparency (and/or your constituency values transparency), you’re potentially sacrificing what makes you successful.

“Transparency,” which can mean a lot of things, is one of those concepts that seems unquestionably positive. Why wouldn’t we always favor transparency?

Transparency gives you two things: investors can see your process and assess whether you’re winning, and it makes it possible for people to help you refine your process.

The problem is that transparency also makes it easier for smart people to replicate your process, costing you your edge.

That’s the paradox.


IS EIGHTY MILLION SUPPOSED TO BE A LOT?
Netflix seeds its narrative with a potentially meaningless statistic

Brian Stelter, CNN’s Chief Media Correspondent and Anchor, produces a wonderful daily newsletter, “Reliable Sources.”

His October 16 edition included a brief note about the Washington Post column on Nate Silver, which kept me from overlooking that item (see above) in my newsletter this week.

That same edition of “Reliable Sources” included an email from CNN media reporter Frank Pallotta, about Netflix’s glowing investor letter, released that day.

One of the highlights of the letter was that 80 million subscribers watched one of their “Summer of Love” rom-coms. Entertainment-industry stories led with that number and most news and financial sites included it.

Pallotta made this astute observation:

“Wow, 80 million! Ok, here’s the problem. I have no earthly idea what those numbers actually mean because the company gives zero context. Listen, obviously Netflix is successful (it’s Netflix!), but do these numbers mean 80 million people watched one or more movie all the way through? Or is it 30 minutes? Or did they fall asleep and it just so happened to play the next film? We don’t know — and Netflix likes it that way.”

Maybe Netflix elsewhere provided some reference-group or meaningful comparable data. Or perhaps the missing context has something to do with what came six days later, when it announced it needs to raise another $2 billion in debt for, among other reasons, “content acquisitions and production costs.”

As Pallotta also noted, we all love Netflix. But it’s smart to question whether (as we would with any decision) the forming narrative stands up to scrutiny.


TEST YOURSELF 
Did these 21 social science experiments replicate?

80,000 Hours and ClearerThinking.org put together a quiz based on a recent set of attempts to replicate the findings of social science experiments.

On September 7 – yes, this is my second reference to that edition of the newsletter – I wrote about the recent attempts to replicate 21 studies appearing in Nature and Science between 2010 and 2015. They found a significant effect in the same direction as the original study in 13 of 21. In addition, the effect size in replications averaged 50% of the original effect size.

The quiz tests your ability to guess which of those 21 studies replicated.

As you take the quiz, pay attention to how much your intuition drives your predictions – how surprising do you find the result? How much do you want the result to replicate?

In addition, if you’re not otherwise familiar with these findings or how they fared in replication, the format provides brief summaries and you can check your answers against the replication results.
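
If you want to go a step further (this is my suggestion, not part of the quiz), write down a probability for each of the 21 studies before you check the answers and then score yourself with a Brier score. Here's a minimal sketch of how, with made-up guesses:

    def brier_score(forecasts, outcomes):
        # Mean squared error between your probabilities and the 1/0 outcomes;
        # 0 is perfect, and always saying 50-50 scores 0.25.
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # Made-up guesses for three of the studies (1 = replicated, 0 = did not).
    my_forecasts = [0.9, 0.4, 0.7]
    actual       = [1,   0,   0]
    print(round(brier_score(my_forecasts, actual), 2))  # 0.22 -- lower is better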

The site also summarizes the overall lessons of the replication attempts, including the promising results of prediction markets and expert surveys.

The work of 80,000 Hours seems like a good hack for keeping up with social science findings with potentially broad application.

“Because not all published research is reliable, figuring out what’s true in the social science can be hard. That’s why we do in-depth reviews of complex issues where you can’t believe everything you read, so you don’t have to. For example we reviewed over 60 studies about what makes people like their jobs, read everything there was to read about whether money makes you happy, and found the conventional wisdom on stress is significantly wrong.”

H/t Joe Sweeney for bringing the quiz to my attention with his tweet.


THIS WEEK’S ILLUSION – MOVING RINGS

From Akiyoshi Kitaoka: