
“Mid-Term Election Edition”— Annie’s Newsletter, November 9, 2018

CONSTRUCTIVE IDEAS ON PROBABILITIES FROM PRE-ELECTION COVERAGE
Or ... as a friend called it, "What happened after Annie lost her shit on Twitter"

SPILLOVER EFFECTS FROM FORECASTING TOURNAMENTS
Can forecasting depolarize our political conversation?

WEIGHT & RATE: GREAT TECHNIQUE FOR CRITICAL THINKING
Vital tool (with a cool name), via Joseph Sweeney

CONSTRUCTIVE IDEAS ON PROBABILITIES FROM PRE-ELECTION COVERAGE

Or … as a friend called it, “What happened after Annie lost her shit on Twitter”

We struggle to understand and properly communicate the differences between polls, prediction markets, and forecasts.

This difficulty with thinking probabilistically, aided and abetted by pundits, distorts our expectations of how elections might turn out. This in turn affects our behavior going into the election and our ability to process the results once they come in.

When a favorite loses, we declare the pollsters wrong, rejecting the probabilistic nature of the outcome and eroding our faith in expertise.

This, of course, has implications beyond elections, into decision making in general. What are decisions, after all, but predictions and forecasts informed by our beliefs?

On the Sunday before the midterm elections, I tweeted my frustration with our struggle and how frequently it’s been displayed:

Next thing I know, I’m 20 tweets deep, so I closed by saying:

A friend called me and was blunt: “You really lost your shit today on Twitter.”

You can read the thread and the various discussions that it started, but I want to summarize and preserve the good ideas that came from this:

1. LET’S REMEMBER THAT THE BAD GUY IS CERTAINTY

That might sound kind of weird. Isn’t certainty what we are all striving for?

No, what we are striving for is to build the most accurate model of the future. The future is uncertain. Therefore, an accurate model of the future can’t be certain of the outcome – it can only approach certainty about the range of outcomes and their likelihoods. Just because I know a coin will flip heads 50% of the time doesn’t mean that I know what it will flip on the next try.
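To make that concrete, here's a quick simulation (my own sketch, not part of the thread): even if you know the true probability exactly, a well-calibrated 75% forecast still sees the underdog win about a quarter of the time.

```python
import random

random.seed(42)

# A well-calibrated 75% forecast, played out over many hypothetical races.
p_favorite = 0.75
trials = 10_000

# Count how often the 25% underdog comes through anyway.
favorite_losses = sum(random.random() > p_favorite for _ in range(trials))

print(f"The 75% favorite lost {favorite_losses} of {trials} simulated races "
      f"({favorite_losses / trials:.1%}).")
```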

The problem is that this makes us really uncomfortable. As I pointed out in the thread, “Certainty is what people want.”

We want to know for sure what is true and what is not. We want someone to tell us how things will turn out. We like black and white. We try to make the world fit to what we want, grasping onto more certainty than the world actually offers us.

That’s how a forecast of 75% becomes a “sure thing.” This doesn’t happen because we’re lazy or stubborn; we’re built this way.

And the majority of pundits seem happy to feed the certainty monster.

We’re all flooded with information and trying to give it some sense and order. Telling a good story and hearing a good story are part of being human.

When was the last time you heard a pundit say, “It’s complicated. I really don’t know”?

It’s not just that pundits believe they know with more certainty than the evidence allows; it’s also that uncertainty isn’t what we want to hear.

The pundits are responding to the signals we give them about how we want our information delivered. They can do better, but some of that improvement has to come from us demanding more nuanced reporting.


2. RAISING THE STAKES HELPS US UNDERSTAND THE REALITY OF UNCERTAINTY

I suggested this in tweet #19:

Joshua B. Miller contributed to the thread by sharing this illustration:

If you played Russian Roulette with a revolver that had one bullet in seven chambers, the chance of the gun firing would be about the same as the chance that the Democrats would fail to flip the House.

Would you point the gun at your head and pull the trigger because you counted seven chambers and one bullet and decided six out of seven was a sure thing?

That’s what 15% means. And when your life is on the line, that tendency for 85% to feel like 100% goes away.
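The arithmetic behind the illustration is simple (my own check, not part of the original tweet):

```python
# One bullet, seven chambers: the chance the gun fires on a single spin.
chambers = 7
bullets = 1

p_fire = bullets / chambers    # ~14.3%, roughly the 15% chance in the forecast
p_safe = 1 - p_fire            # ~85.7% -- a long way from 100%

print(f"Chance it fires: {p_fire:.1%}")
print(f"Chance it doesn't: {p_safe:.1%}")
```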

Thinking in bets forces the uncertainty into the foreground. And that’s a good thing.

3. REPETITION AND FAMILIARITY CAN HELP US UNDERSTAND THE REALITY OF UNCERTAINTY

Kip suggested using familiar sports analogies to help people understand uncertainty.

In the same vein, I think one of the reasons poker players are pretty good at understanding these probabilities is that they play them out over and over again. When I was playing full-time, I lost enough hands where I was 82% to win that I know 18% isn’t zero. I can feel it.

Prediction markets like PredictIt offer a good way to bet on events and train probabilistic thinking. Even if you don’t want to bet, they’re worth following, since they offer a view into what a two-sided betting market thinks about the chances of different events occurring.

Not surprisingly, that view is often different from the un-nuanced narrative we frequently receive from pundits.
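As a rough illustration of how to read such a market (the price below is invented, not an actual PredictIt quote): a contract that pays $1 if an event happens and trades at 72 cents implies the market collectively puts the chance of that event at roughly 72%.

```python
# Hypothetical contract that pays $1.00 if the event occurs.
# On a two-sided market, the traded price is a rough crowd estimate of the
# probability, ignoring fees and the bid/ask spread.
yes_price = 0.72    # dollars; an invented example price

implied_probability = yes_price / 1.00
print(f"Implied chance of the event: {implied_probability:.0%}")  # 72%
```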

4. IN PAST NEWSLETTERS, I’VE COVERED THE DIFFERENCES BETWEEN POLLS AND FORECASTS AND OUR TROUBLE DEALING WITH THAT

SUMMING UP

I enjoyed the joke Robert Wiblin tweeted, along with FiveThirtyEight’s chart for that day (Nov. 3) showing the Democrats’ chance of controlling the House at 7 in 9 (77.6%) and the chance of the Republicans’ keeping control at 2 in 9 (22.4%):

(Some people didn’t understand it was a joke.)

Dinesh D’Souza took Nate Silver’s interview on TheHill (in which Silver tried to communicate that 77% or 85% isn’t 100%, and that 23% or 15% sometimes happens) as Silver’s confession that the Democrats had fallen to 50-50 to take the House.

D’Souza’s tweet pretty much sums up the reason for my tweetstorm.


SPILLOVER EFFECTS FROM FORECASTING TOURNAMENTS

Can forecasting depolarize our political conversation?

Phil Tetlock recently published a paper, along with Barbara Mellers and Hal Arkes, examining the potential benefits of forecasting tournaments.

Forecasting tournaments are organized events in which individuals, teams, or algorithms compete in predicting future events to see who is the best calibrated. The events could be elections, entertainment awards, or other public news items (like whether the Fed will increase interest rates at its next meeting, whether Mueller will still be Special Counsel on Jan. 1, 2019, whether the Dow Jones Industrial Average will be above 26,000 on Jan. 1, or whether A Star Is Born will win the Best Picture Oscar).

Participating in forecasting tournaments is a form of thinking in bets. The competition doesn’t reward you for being overly certain or for proving that the things you believe are right. It rewards you for accurately modeling the future, which requires exploring in depth why you might be wrong.
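Tournaments of this kind are typically scored with a proper scoring rule such as the Brier score, which is exactly the kind of reward structure that punishes overconfidence. A minimal sketch, with invented forecasts:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the outcome (0 or 1).
    Lower is better: 0.0 is perfect, 1.0 is maximally wrong."""
    return (forecast - outcome) ** 2

# An overconfident 99% forecast is punished far more than a hedged 75%
# forecast when the predicted event fails to happen.
print(brier_score(0.99, 0))  # 0.9801
print(brier_score(0.75, 0))  # 0.5625
print(brier_score(0.75, 1))  # 0.0625
```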

“We view tournaments as belonging to a broader class of psychological inductions that increase epistemic humility and that include asking people to explore alternative perspectives, probing the depth of their cause-effect understanding and holding them accountable to audiences with difficult-to-guess views.”

The paper also considers the benefits of using forecasting tournaments as a bridge for exchanging views on polarized issues.

When we engage in that kind of thinking and communication, we become less black-and-white about our view of the world. To be a good forecaster, we can’t treat 15% as zero, or 75% as 100%.

To be a good forecaster, we have to think about things from other people’s perspective.

Being a good forecaster encourages us to think, “Why am I wrong?”, instead of “Why am I right?”

Being a good forecaster puts us in a frame of asking ourselves, “How sure am I?”, instead of “Am I sure?”

That kind of mindset moderates us. It can make us better decision-makers and, maybe, more tolerant in a polarized world.


WEIGHT & RATE: GREAT TECHNIQUE FOR CRITICAL THINKING

Vital tool (with a cool name), via Joseph Sweeney

Joe Sweeney, the new Executive Director of How I Decide (the non-profit I cofounded to build the field of decision education), just wrote a fabulous piece on Medium, “Beyond Pros and Cons – Start Teaching the Weight and Rate Method.”

The conventional advice, particularly for personal decisions, is to make a list of pros and cons. It’s a simple way to approach a decision, but implicit in a pro/con list is the assumption that each pro and con carries the same weight, that we value each item equally.

Fundamental to decision-making is understanding our values and goals. A list of pros and cons doesn’t reveal those to us very well.

For example, if we’re trying to choose between two restaurants, the first could have six positives and one negative, while the second has four positives and three negatives. Does that automatically make the first restaurant the better choice?

What if the one negative of the first restaurant is that it’s really noisy, and one positive of the second is that it’s quiet and private?

If the meal’s primary purpose is to conduct a discussion of an important issue, positives or negatives about the speed of service and even the quality of the food would be secondary. The noise level would carry the most weight.

A good decision tool helps us identify our goals and values. That’s where the weight-and-rate method comes in as a big improvement on a list of pros and cons:

“The Weight and Rate Method … goes by several names, including ‘table’, decision matrix, or ‘decision worksheet’ but it is the method, not the name, that matters. The method includes framing the decision clearly and concisely, reflecting on goals and values, creating alternatives, listing criteria, imagining consequences, etc.”
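As a minimal sketch of what that looks like in practice, here is the restaurant example from above laid out as a weight-and-rate table (the criteria, weights, and ratings are invented for illustration and are not from Sweeney's article):

```python
# Weights (1-5) reflect what matters for THIS decision: a quiet conversation.
# Ratings (1-5) score how well each option delivers on each criterion.
weights = {
    "quiet enough to talk": 5,
    "food quality": 3,
    "speed of service": 1,
}

options = {
    "Restaurant A (more pros, but loud)": {
        "quiet enough to talk": 1, "food quality": 5, "speed of service": 5,
    },
    "Restaurant B (fewer pros, but quiet)": {
        "quiet enough to talk": 5, "food quality": 3, "speed of service": 3,
    },
}

for name, ratings in options.items():
    score = sum(weights[c] * ratings[c] for c in weights)
    print(f"{name}: {score}")

# Restaurant A: 5*1 + 3*5 + 1*5 = 25
# Restaurant B: 5*5 + 3*3 + 1*3 = 37
```

The weighted totals flip the naive pro/con count because the one criterion that matters most for this particular meal dominates the score.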

You should read the article, because Sweeney develops this as a 10-part process and applies it to a travel choice his cousin faced during senior year in college. Like any tool, it takes practice to master.

He points out that the exercise – as it happened in his cousin’s case – sometimes provides a surprising result.

As we get comfortable using the weight-and-rate method, it can act as a crucial reminder of how important values and goals are to our decisions. Those can vary from one decision to another, and from one person to another.

Before rushing to judgment on a decision we disagree with, we should recognize that the person making it may not share our goals and values.

Coming back to the mid-term election and our polarized political environment, we can see how helpful this reminder can be. Instead of ignoring the reasoning behind someone else’s choice, we can identify the values that led them to it.

This can help us be more civil about our differences. It can also help us develop appeals to their values that can open up discussion.


THIS WEEK’S VISUAL ILLUSION

Shepard’s Tables

Via the Visme blog, a Visual Learning Center.

The two tables appear to be of different size and shape but are, in fact, identical. (You can check the blog for the explanation and an animated illustration.)