
“Why A Good Truth Is Hard To Find (Part I)” — Annie’s Newsletter, November 2, 2018

• Why A Good Truth Is Hard to Find (Part I) — Processing fluency: We like and believe things oft repeated
• Why A Good Truth Is Hard to Find (Part II) — The "confidence heuristic": Make it simple, sound certain, and we'll believe
• The Hidden Tribes Study — New data on why we're polarized and where to go from here; the trouble surmounting the intractable minority
• Algorithms To The Rescue? — For fake-news detection, AI finds what we can already spot
• Morality To The Rescue of Algorithms? — If you're concerned about self-driving cars making moral choices, this should make you feel better: Humans don't even agree on those choices

WHY A GOOD TRUTH IS HARD TO FIND (PART I)

Processing fluency: We like and believe things oft repeated

Bob Nease offers a great summary of processing fluency in this piece in Fast Company. Processing fluency refers to the ease with which we understand a concept (or feel like we understand it).

Nease describes how processing fluency drives our perception of ideas and decisions:

“Information that’s easier to process is viewed positively in almost every way. Cognitive scientists refer to this ease as ‘processing fluency’…. The greater something’s ‘fluency,’ the more we tend to like it, the less risky we judge it, the more popular and prevalent we believe it is, and the easier we think it is to do.”

If we hear something frequently repeated, we’re more likely to think it’s true. The simpler a concept, the truer it feels. The easier a concept is to remember, the truer it feels.

Processing fluency creates “truthiness.”

Nease makes the astute observation that, because we have more processing fluency around status quo ideas, processing fluency inhibits innovation.

We are less likely to question the assumptions behind familiar choices, making us less likely to explore new and better ways of thinking. In addition, innovative choices are by their nature less familiar and less transparent, which reduces processing fluency.

And when we feel like we don’t understand a decision well, we are more likely to take the shortcut of working backward from a bad outcome to determine if the decision was any good.

The more unfamiliar a choice, the more likely it is that when things don’t work out, people will “result,” blaming the decision for the bad outcome.

But when we fail with the familiar choice, we are more likely to allow for luck as the explanation.

We don’t have to look any further for this than my go-to example from the beginning of Thinking in Bets: Pete Carroll’s pass-call on the last play of Super Bowl XLIX.

The status quo choice is to hand the ball off to Marshawn Lynch. There’s a lot of fluency around that, so it feels right; it feels like we know it is a good call.

If Seattle had run the ball and New England had stopped them, Carroll would not have been attacked in the same way. Pundits would have talked about the success of New England’s goal-line defense holding rather than Carroll’s failure.

When he chose to pass, a play the fans and media didn’t have nearly as much fluency around, the pundits resulted on Pete Carroll, blaming him for the outcome.

We are much more likely to choose paths where we have more processing fluency, not just because those paths feel more true and likeable, but because we are less likely to get blamed following a bad outcome.

Choosing the road more taken protects us against the downside.

That’s an innovation killer.


WHY A GOOD TRUTH IS HARD TO FIND (PART II)

The “confidence heuristic”: Make it simple, sound certain, and we’ll believe

Here’s a problem related to processing fluency, and it’s easy to recognize in our politics.

When we’re looking to someone to tell us the truth, figuring out what’s true and what’s not is difficult. We seek out certainty, and it’s easy to confuse confidence with certainty.

Cass Sunstein wrote a short, typically insightful piece on Bloomberg explaining the role of the “confidence heuristic” in such behavior:

“When people express beliefs to one another, their level of confidence usually reflects how certain they are. It tells us how much information they have. When we are listening to others, we are more likely to be persuaded by people who seem really confident.”

We use confidence as a signal for how certain someone is and how much information they have.

That’s why we’re more likely to be persuaded by people who seem really confident. When people are confident, we think they are certain and that makes us feel better because we don’t like uncertainty.

This is particularly problematic in politics, because one of the things that we know (from the work on tribalism) is that we want our tribe to give us epistemic closure, to tell us what is true and what is not.

In a world where our beliefs and predictions are inherently uncertain because of luck and hidden information, the tribe closes that uncertainty gap for us.

So when politicians express things with great confidence, that actually fulfills the role we expect from a tribal leader: to tell us what to believe.

In addition, there’s the processing fluency problem. When concepts are simple, we process them more easily.

Sunstein references the research of Phil Tetlock on that:

“Most people respond more enthusiastically to simple, clear rhetoric from leaders, downplaying tradeoffs, than to complex rhetoric that points to competing considerations and that can easily be seen as a sign of weakness.”

The problem comes from the reality that most public policy decisions are quite complex. But when a politician presents the decisions as complex, we’re less likely to feel the truthiness of such explanations, as compared with simple explanations.

For a politician to present us with an accurate representation of the world, they would have to say, “these policy decisions are complex” and “the outcomes of these decisions are uncertain.” They would have to say, “I’m not sure” a lot.

That doesn’t sell.

So, instead, the confidence heuristic and processing fluency combine to make politicians express simple concepts to us with great confidence.


THE HIDDEN TRIBES STUDY

New data on why we’re polarized and where to go from here
The trouble surmounting the intractable minority

A recent survey, “Hidden Tribes of America,” is one of the largest-ever studies on polarization in America.

A key finding is that Americans are not divided into two polarized, fundamentally-opposed tribes. As this chart from the study illustrates, the results suggest we are divided into seven distinct groups:

According to the study, 8% of the population is on the extreme left and 6% is on the extreme right. That is 14% on the edges. Another 19% on the right tends to stick with the 6% on the extreme right.

So where does that leave the rest of us?

It leaves the remaining 67% of us in “the exhausted majority.” (That’s 100% minus the 14% at the edges and the 19% that sticks with them.)

Looking at the data, you can see that the people at the edges express their views as black-and-white, yes-or-no, with not much uncertainty about it.

The 67% in the middle see the world in a more nuanced way. They don’t necessarily feel people on their side are always right or that people on the other side are always wrong. They see the world as more complicated:

“The majority of Americans, the Exhausted Majority, are frustrated and fed up with tribalism. They want to return to the mutual good faith and collaborative spirit that characterize a healthy democracy…. The vast majority of Americans – three out of four – believe our differences are not so great that we cannot come together.”

So this raises the question: given that the vast majority doesn’t like these extreme views, why does it seem like extreme views dominate the conversation and drive a lot of the policy?

Some of that has to do with Cass Sunstein’s (and Phil Tetlock’s) point about simple views being easier to communicate, and easier for the people receiving such messages to metabolize.

The other reason the exhausted majority hasn’t defused the polarization problem is that an intransigent minority can wield remarkable power.

Nassim Nicholas Taleb devoted a chapter of Skin in the Game to showing how a large majority can end up being ruled by a small minority. (An earlier version of the chapter appeared as a Medium article under a similar title, “The Most Intolerant Wins: The Dictatorship of the Small Minority.”)

Someone who views the world as black-and-white, good vs. evil, is not going to budge on an issue they consider important.

This means they are much more likely to get their way because part of the ethic of the exhausted majority is that they will budge.

Taleb illustrates this with an example about kosher lemonade. If one person in a household keeps kosher, the rest of the household will be drinking kosher lemonade because the kosher person will not budge while the non-kosher people are fine drinking kosher lemonade.

If the members of that household go to a picnic, the picnic will be serving kosher lemonade because, again, most people don’t care at all, but it’s very important to a few people. (That’s why the grocery store in that neighborhood will eventually sell kosher lemonade.)

The same thing happens in our politics.

In addition, the media tends to amplify the extreme views. Those messages feel more dire. Those messages activate our fight-or-flight response.

When a message activates the amygdala (the fear-emergency part of our brain), we pay attention. For the people at the extremes, who view things as black-and-white, for us or against us, everything is an emergency.

The media covers those views, because they beg for attention – and we pay attention.

There just aren’t that many headlines that say, “ISSUE COMPLICATED! REQUIRES NUANCED UNDERSTANDING! FLEXIBILITY AND COMPROMISE A MUST!”


ALGORITHMS TO THE RESCUE?

For fake-news detection, AI finds what we can already spot

I closed the October 19 newsletter with an item about how the first step to deal with social media’s fake-news problem – targeting and closing individual accounts – wasn’t much of a start.

Even if you didn’t see that item, you can imagine (from the tone of this edition of the newsletter) why I believe that. Fundamentally, we can’t keep our attention away from messages that are simple, confident, repeated, extreme, etc.

For fake-newsers, that’s the playbook. For the enterprises serving as conduits, that’s the business model. And for us, that’s how our brains work.

Phase two, according to Facebook CEO Mark Zuckerberg and others, is using artificial intelligence (AI) to spot and combat fake news. In Zuckerberg’s congressional testimony earlier this year, he put a target of 5-10 years on Facebook’s AI detection programs.

In a brilliant New York Times op-ed, “No, A.I. Won’t Solve the Fake News Problem,” Gary Marcus and Ernest Davis explain that AI (in its current state) is really only good at spotting fake-news items that humans can already easily spot.

And there is ample reason to be skeptical that this will change anytime soon, or within 5-10 years as Zuckerberg predicts.

There’s a very good illustration in the piece of a fake-news item on a far-right website about the Boy Scouts:

“The Boy Scouts have decided to accept people who identify as gay and lesbian among their ranks. And girls are welcome now, too, into the iconic organization, which has renamed itself Scouts BSA. So what’s next? A mandate that condoms be made available to ‘all participants of its global gathering.'”

What makes the story false is non-obvious. There are no babies born with three heads or Godzillas roaming the streets.

Rather, to recognize that this piece is fake news, you must understand the context, bring real-world knowledge to bear, and catch the piece’s subtle expression of causality.

This is exactly the kind of thing AI is bad at.

The piece presents one fact (gay, lesbian, and female membership) and ties it to another (condom availability) with the phrase, “So what’s next?”

The implied causality is the fake part.

The condom policy had nothing to do with the expanded membership. That policy originated at least as long ago as 1992, long predating admission of girls, gays, and lesbians.

AI can’t figure that out. You have to know about the world and the state of the facts, before and after this particular policy, to realize that the piece presents the connection as causal, but only by implication.

In situations involving the subtleties of language and usage, common sense, and real-world knowledge, AI isn’t very helpful, and won’t be for a long time.

When it comes to obvious fake news, AI can spot it, but then again, so can human beings.

Marcus and Davis conclude that it’s unreasonable to expect automated detection of fake news on Zuckerberg’s timetable:

“Decades from now, it may be possible to automate the detection of fake news. But doing so would require a number of major advances in AI, taking us far beyond what has so far been invented.”

AI is a “black box” to most of us, so we’re not in a position to make a very good guess on whether AI can help us significantly with fake news now vs. in five years vs. in 25 years.

But there is a lot of reason to be skeptical.


MORALITY TO THE RESCUE OF ALGORITHMS?

If you’re concerned about self-driving cars making moral choices, this should make you feel better: Humans don’t even agree on those choices

In the October 12 newsletter, I brought up the moral dilemma of the trolley problem, summarizing recent research from Bertram Malle and colleagues finding that we have different expectations for human decision-makers than for robots.

Do we redirect the trolley, leading to the death of one person? Or do nothing, in which case the trolley kills five people?

This is going to be a real problem that will come up all the time with autonomous vehicles. These are just a few examples:

  • Does the car swerve to save a pedestrian, but endanger the lives of the passengers of the vehicle?
  • What if the car has to swerve to save a pedestrian, but that increases the probability of hitting a bus?
  • Does the car swerve to avoid an elderly couple if it increases the likelihood of hitting a young mother walking her baby in a stroller?
  • To what degree does it matter if one of those pairs is walking on the sidewalk and the other is crossing the street against the traffic light and outside the crosswalk?

These decisions are complicated because we don’t have unanimous agreement on the trolley problem.

According to another survey, recently reported in Nature, we vary significantly, from country to country, in which life-or-death choices we morally favor.

The survey involved 2.3 million people from around the world and presented 13 traffic scenarios. In each, someone would die, but the respondent’s choice determined which victim(s): younger-older, rich-poor, fewer-more, etc.

Dividing the data into three geographic groups (Western, Eastern, Southern), the researchers found clear differences among the groups.

These tradeoffs are tricky. Just like the trolley problem itself, we don’t have agreement about what’s right or permissible when either choice creates a risk of danger.


THIS WEEK’S ILLUSION:

Naturally occurring illusion at the Exploratorium in San Francisco

All the air handlers are the same color:



I MADE SOME CHANGES TO MY WEBSITE

Please check out my improved version of AnnieDuke.com!

I overhauled AnnieDuke.com. In addition to updating it to include a lot of new information, I tried to simplify the presentation. For example, it should be easier to access prior editions of the newsletter, or find podcasts on which I’ve appeared.

It’s also easier to contact me through the website. To that end, I hope you’ll send me a note if you have any feedback on the new site design.

And, as always, please let me know (through the website or on Twitter) your questions, opinions, and feedback on anything. One of the things I like best about working in the decision-strategy business is the opportunities I get to hear from – and learn from – others.

I hope the new design makes that easier!