
“Tribalism on Steroids” — Annie’s Newsletter, October 12, 2018

TRIBALISM ON STEROIDS
When partisanship increases, it leads to hatred of the opposing tribe – and we’re there. Now.

Jon Haidt recently tweeted that our partisan hatred is increasing, with alarming consequences:

(The chart is from a book that came out just this week, Prius or Pickup?: How the Answers to Four Simple Questions Explain America’s Great Divide, by Marc Hetherington and Jonathan Weiler.)

This demonstrates a dangerous element of tribalism. Political partisanship isn’t just about the utility of having a group – safety, conformity, kinship, access to beliefs with a built-in consensus.

Increasingly, this kind of tribalism includes hating the other tribe. That hatred makes it easier to believe that the ends justify the means.

This turns partisanship into a game of chicken. In theory, when you’re the outgroup, you see these problems, complain that the ingroup doesn’t care about what’s right or wrong, and declare that when you get into power, you’ll repair it.

But that doesn’t seem to be how it works in today’s political environment. Not in rhetoric or in action. Not as the outgroup or the ingroup.

When you’re the outgroup and you see the ingroup pushing its agenda by means that you disagree with to ends that you don’t want, you think of that as cheating. It’s not honest. It’s not bipartisan. It sacrifices important values just to get an outcome.

With more polarization, the outgroup takes more of an “if you can’t beat ’em, join ’em” attitude.

Tribalism is creating a deep and widening divide. The result may be a battle, spiraling out of control, over who can get what they want no matter how they achieve it.


AT HOME ON THE RANGE:
Using a slider to communicate probabilities and make nuanced distinctions
Can it also help us find common ground on polarized issues?

Eric Saund, in this great Medium.com piece, “Did Kavanaugh Do It?”, offers a way we might communicate with each other more constructively on polarizing issues.

As an added bonus, his suggestion might help us understand the uncertainty in our own beliefs (always a good thing!).

He suggests taking a more probabilistic approach to the Kavanaugh-Ford debate by using a Bayesian Calculator.

Saund does a wonderful job explaining how our prior beliefs influence our conclusions about the evidence, and how using a Bayesian slider can help us disentangle the two and reason more carefully through a causal chain.
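If it helps to see the machinery: the calculator is, at bottom, repeated application of Bayes’ rule (the notation here is mine, not Saund’s):

$$
P(\text{assault} \mid E) = \frac{P(E \mid \text{assault})\,P(\text{assault})}{P(E \mid \text{assault})\,P(\text{assault}) + P(E \mid \text{no assault})\,\bigl(1 - P(\text{assault})\bigr)}
$$

Your prior, P(assault), is whatever you believed before weighing any testimony; the two conditional terms are exactly what the slider questions below ask you to estimate for each piece of evidence E.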

He provides a helpful table to represent one such causal chain that has to do with Ford’s belief state:

Most people don’t reason in this way. Rather, as Saund points out, “Many times we decide what conclusion we want to reach, then adjust our arguments to fit them.”

That’s a concise definition of motivated reasoning.

I encourage you to play with the actual slider. The people that I know who’ve done it tend to moderate their beliefs:

For those who are nearly certain that it’s close to 0% that Kavanaugh assaulted Ford, playing with the slider reveals that, given their own assumptions, the probability is actually much higher than they thought.

Likewise, for those who are nearly 100% certain that Kavanaugh did assault Ford, using the slider reveals that, given their own assumptions, they are actually much less than 100% certain.

By forcing us to think probabilistically, the slider moves us toward the middle, away from 0% and 100%, toward a more moderate view.

It reminds us that our beliefs are rarely certain.

And that’s a good thing.

[Note that none of this says whether the standard in the confirmation process should be “beyond a reasonable doubt” or any other percentage or range. That’s a separate can of worms.]

The slider forces us to think probabilistically about the hearings, identifying our prior assumptions and how the conclusion we want to draw might be muddying our ability to see through to the evidence.

It forces us to think about quantifying questions about Ford’s belief state, for example:

“Assume Kavanaugh in fact did assault Ford. What is the probability she believes it happened?” and “Assume Kavanaugh in fact did not assault Ford. What is the probability she would otherwise believe he did?”

It forces us to quantify our answers to questions about Kavanaugh’s belief state and aspects of his testimony:

“Probability Kavanaugh assaulted Ford based on Ford’s testimony alone.”

“Probability Kavanaugh assaulted Ford based on Kavanaugh’s testimony being angry in tone.”

“Probability Kavanaugh assaulted Ford based on Kavanaugh’s testimony had it been calm in tone.”
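Saund’s calculator does this interactively, but the arithmetic underneath is just repeated Bayesian updating. Here is a minimal sketch of that arithmetic in Python, not his actual tool; the function is mine and every number is a hypothetical placeholder, chosen only to show how even a skeptical 5% prior moves once you grant fairly ordinary assumptions about the evidence.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start from a skeptical prior: a 5% chance the assault happened.
p = 0.05

# Evidence 1: Ford sincerely believes it happened.
# Slider answers (hypothetical): how likely is that belief if the assault
# did happen, and how likely is it if the assault did not happen?
p = bayes_update(p, p_evidence_if_true=0.95, p_evidence_if_false=0.30)

# Evidence 2: Kavanaugh's testimony was angry in tone.
# Again, hypothetical slider answers for each scenario. (Chaining updates
# like this treats the pieces of evidence as independent given the answer.)
p = bayes_update(p, p_evidence_if_true=0.60, p_evidence_if_false=0.50)

print(f"Posterior probability: {p:.0%}")  # about 17% with these inputs
```

Starting from near-certainty works the same way in reverse: as soon as your answers admit some chance that a sincere memory could be mistaken, or that an innocent person might also testify angrily, the computed number can’t reach the 100% you felt going in.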

 

Using the slider reveals where those on opposite sides might agree and disagree, clarifying how to engage on the issue.

With no slider, we don’t even know what we’re arguing about. That makes it extremely difficult for any argument to be constructive.

As Phil Tetlock pointed out in a tweet that got me thinking about this method of quantifying beliefs, the slider communicates a more nuanced judgment and forces us to think probabilistically.

That may not make us feel better about the outcome, but it’s a more constructive way to think about a polarizing topic.


SOMETHING CONSTRUCTIVE ABOUT POLARIZATION?
It reminds us that we’re better at spotting flaws in others’ reasoning than our own

Rolf Degen tweeted about research that we’re better at finding flaws in the opposing party’s reasoning than our own:

This goes along nicely with a Twitter thread I mentioned in the newsletter two weeks ago.

Stanford law professor and social psychologist Robert MacCoun referred to research where people were shown their own responses to logic problems but thought those responses belonged to other people.

It turns out people are better at spotting holes in their own reasoning when they think they’re looking at someone else’s responses.

Apparently, the same is true when the comparison is the reasoning of our own political tribe vs. the reasoning of another tribe. We’re more critical evaluators when we’re looking at reasoning from the other tribe.

This is why when Trump supporters say, “If Obama had done what Trump did, you would love it!” or Obama supporters say, “If Obama did what Trump did, you would be going nuts!” it doesn’t really work as a rhetorical trick.

We aren’t good at separating the message from the messenger. We’re just not good universalists.

As I said in Thinking in Bets, it’s much easier to see bias in other people than in ourselves. This is true whether you’re looking at reasoning on an individual or tribal level.

This is why a well-formed decision group can be so useful in helping us make better decisions. We’re good at seeing flaws in others but not in ourselves.

If you don’t get people to help find the flaws in your reasoning, how are you supposed to overcome this?

We need people around us with diverse views, interacting with us where the goal is not to confirm our beliefs or convince others. Rather, the goal should be to point out the flaws in each other’s argumentation because we are pretty miserable at doing that for ourselves.


THE TROLLEY PROBLEM
Human vs. robot edition: Morality, blame, and killing outcomes

The trolley problem is a famous thought experiment in human ethics, first introduced by the philosopher Philippa Foot.

It goes like this: A trolley is barreling toward five people. The only way to save them is to pull a lever, diverting the trolley to a side track where one person is standing. So, pulling the lever will save the five people but kill one person on the other track.

That’s you standing next to the lever. What do you do?

A utilitarian would obviously pull the lever, sacrificing the one person to save five.

As you likely already know, not everyone is a utilitarian. In fact, a full 30% think it is not even permissible to pull the lever and about half of people think it would be morally wrong to do so.

But what about robots who pull the lever?

That’s the question that Bertram Malle and colleagues asked in a study that looked at the differences in how we expect humans and robots to act when presented with the trolley problem.

What do people think is the moral choice for a robot?

Are robots expected to pull the lever?

Do we blame them for the deaths if they do?

Do we blame them for the deaths if they don’t?

Malle found significant differences in how we judge the permissibility, moral wrongness, and blameworthiness of the same choice when a human makes it versus when a robot does.


Humans blame humans much more for the outcome resulting from pulling the lever (one death) than for the outcome resulting from letting the trolley continue on its path (five deaths).

But they blame robots equally for either outcome.

And when it comes to moral judgment, there is a very big difference. 49% think it is morally wrong for a human to pull the lever; only 15% think it is morally wrong for a human to let the trolley stay on course.

When a robot pulls the lever, though, only 13% think that decision is morally wrong. When the robot lets the trolley stay on course, 30% think it is morally wrong.

Those are pretty striking differences.

Humans feel others will hold them responsible for taking action and diverting the trolley. In other words, it feels safer to let the trolley go down the track it’s already on.

We’re much more likely to blame a human for diverting the trolley than for doing nothing.

But when it comes to a robot, we’re equally likely to blame them for the deaths following the robot’s decision to divert the trolley or not.

This is more than just about robots and runaway trolleys.

In general, we blame others for bad outcomes that result from changing course. (For details, see the reaction of Seahawks fans when Pete Carroll chose that pass play.) We blame others and we know others will blame us, so we don’t pull levers when maybe we should.

When it comes to robots, we blame them equally for bad outcomes, whether they follow action or inaction. It doesn’t much matter whether one person or five people die. Someone dies, and the robot takes the heat either way.

That has implications for how we might accept bad outcomes for things like autonomous vehicles.

An autonomous car veers to save the pedestrian and kill the passenger? Blame the car.

Save the passenger but kill the pedestrian? Blame the car.

This is serious stuff as we think about not just the ethics of AI but also whether people will accept the consequences of these kinds of trade-offs.

I don’t think all technology/robots/algorithms are necessarily improvements over humans. But if we want any of the potential benefits of technology improving our decisions, we can’t discourage it by blaming it for outcomes for which a human would escape blame.

H/t David Foulke for engaging with me on this issue and bringing my attention to the Malle paper.


BACK ON THE TROLLEY
Jodi Beggs makes a great point about public vs. private choice

When I posted on Twitter about the human vs. robot version of the trolley problem, Jodi Beggs replied with an excellent observation:

She’s making a valid point about resulting.

People may be choosing inaction because the choice is public. The disparity between what they expect robots to do and what they expect of other people suggests that they may be concerned about being judged for the one death that comes from pulling the lever.

To a person standing by the lever, a trolley on course to hit five people is a matter of luck. That the trolley was heading down the track in the first place was not under that person’s control.

But if we take control and change the trolley’s course, we expect to get the blame for anything bad that results from the changed course.

Through inaction, we let five die to spare one. But we don’t get blamed for that because the track the trolley was on was a matter of luck.

Malle’s data bears this out: people expect humans, morally, to choose inaction, and they blame humans who pull the lever.

For robots, however, we blame them for the deaths on either track. And we are much more likely to think it’s morally wrong for a robot to do nothing.

We expect robots to make the utilitarian choice (saving five and sacrificing one) perhaps because we aren’t thinking about how we ourselves would be judged if we pulled the lever.


SO NOW THE TROLLEY IS A FINANCIAL MARKET
Who gets the blame for a market crash?
(Hint: It’s a “what,” not a “who”)

I discussed the issue of human vs. robot decisions with Michael Mauboussin, who gave me some valuable insight about how this applies to the financial markets.

Markets go down when selling pressure overwhelms buying pressure.

When a bunch of human traders sell off and the market tanks, people describe that as the market going down. But when algorithmic trading causes the sell-off, the quants get blamed for causing the downturn.

It’s no longer, “The market went down.” Now it’s, “The quants made the market crash.”

Mauboussin pointed me toward a recent Matt Levine column on Bloomberg.com, “The Robots Are Studying the Humans.” He quotes quant hedge-fund manager Clifford Asness (from a Bloomberg.com interview by Erik Schatzker):

One of the problems I think quants have is we are reasonably transparent. No one frickin’ knows what the average judgmental active manager is doing. If they caused the [May 2010] Flash Crash by all panicking at the same time, we would just say the market went down. Quants are easy to identify and lump together.

(This isn’t a new thing. In the “Black Monday” crash of 1987, in which the Dow fell 22.6% in one day, “program trading” received much of the blame.)

Once again, we hold algorithms to a different standard than humans.

With a human decision-maker, we are willing to allow for uncertainty (like luck) to bubble to the surface as an explanation for a bad outcome. This is especially so when the decision that led to the bad outcome is a status-quo decision, like letting the trolley continue down its track.

We feel like we understand the way a human thinks, so when a human makes a decision, we are more likely to allow the outcome to go into the luck bucket.

When it’s an algorithm, though, everything goes in the skill bucket. If a car or trolley or financial market crashes, we blame the algorithm.