POLITICAL PARTIES DRIVE MORAL JUDGMENT –
Recent studies and examples –
The new wrinkle: outgroup cues are particularly strong
@DaveCiuk just published a paper in Research & Politics (@Res_Pol) about how people shift their moral judgments based on partisan cues.
Intuitively, it feels like we choose our political party based on our moral positions: we pick the party that best aligns with us, and if the party’s positions shifted away from ours, we would leave.
The initial sorting may be driven by ideology, but once we’ve picked our party, research shows that we bend and change as our party’s positions change.
Why is that?
As @JayVanBavel and Andrea Pereira succinctly put it in “The Partisan Brain: An Identity-Based Model of Political Belief,” we get five things from being part of a tribe. “Social groups fulfill numerous basic social needs such as belonging, distinctiveness, epistemic closure, access to power and resources, and they provide a framework for the endorsement of (moral) values.”
Two of those – “epistemic closure” and a “framework for endorsement of moral values” – help explain how political parties affect our beliefs and moral judgments.
Political tribes offer us certainty in an uncertain world.
In forming any belief, there is always hidden information. And it is not always clear how we should process the information that we have. This creates uncertainty—uncertainty that makes us uncomfortable.
Political tribes offer us the chance to close the loop. To shut the uncertainty down.
A tribe creates certainty about what we know to be true and false because the tribe offers a trusted source to intermediate between us and the deluge of information potentially relevant to our beliefs.
The framework for endorsement of moral values does the same with uncertainty about what’s right and wrong.
Morally, few things are universal. We all agree killing is wrong, but what about killing in war, or in self-defense? Or the death penalty?
There is a lot of grey area in moral judgment. There is a lot of uncertainty.
The tribe helps us sort it all out.
That’s why political parties drive our moral judgments, as well as our factual beliefs.
As I mentioned in the February 16 newsletter, there have been some prominent recent instances of our following our political tribe through attitude shifts. Here are a few of those big shifts:
- Democrats and Republicans had similar attitudes about the NFL at the beginning of the 2017 season. When Trump criticized the NFL over players kneeling during the National Anthem, Republicans’ attitudes toward the league shifted drastically, as shown by polls accompanying an article in the New York Times.
- Americans’ approval of Putin rose between 2015 and February 2017, but only among Republicans, as shown in a Gallup poll.
- Democrats and Republicans had similar attitudes toward the FBI in 2014. Since then, Republican approval of the agency has sharply declined while Democratic approval has sharply increased. From @AmeliaTD in @FiveThirtyEight.
Ciuk’s findings are consistent with these ideas and examples, and even extend them: We do more than merely follow what our party (ingroup) tells us. Mostly, we take it upon ourselves to figure out what the other party (outgroup) believes and make sure we believe the opposite.
“The results show that while these group-based cues do exert some influence on moral foundations, the effects of outgroup cues are particularly strong.”
When we imagine members of the opposing party, we are wildly inaccurate –
And we don’t do much better imagining the members of our own party
A new article on @FiveThirtyEight by @PerryBaconJr cited research that “Americans are fairly misinformed about who is in each major party – and that members of each party are even more misinformed about who is in the other party.”
We really don’t like or trust the other political party. And it turns out that we don’t actually know who “they” are.
We don’t view the members of the other party as individuals but rather as a collection of whatever our stereotype is of the typical member of the party.
That stereotype doesn’t even follow any internal logic.
Earning $250,000 a year would put you in the top 2% of income earners. How could 44% of Republicans be earning in the top 1-2%?
Even if zero Democrats earned $250,000 a year, Republicans could make up only 4-5% of the country if almost half the party earned that much!
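The arithmetic behind that claim can be checked in a few lines. This is a minimal sketch using only the figures quoted above (44% of Republicans supposedly earning $250,000+, versus roughly 2% of all earners actually doing so):

```python
# Fraction of the whole country earning $250,000+ (top ~2% of earners).
top_earner_share = 0.02
# Stereotyped fraction of Republicans supposedly earning that much.
stereotype_share = 0.44

# If 44% of Republicans really were top earners, then Republicans could
# be at most (top earners) / (44%) of the country -- even in the extreme
# case where every single top earner is a Republican.
max_republican_share = top_earner_share / stereotype_share

print(f"{max_republican_share:.1%}")
```

The result is about 4.5%, which is where the "only 4-5% of the country could be Republican" figure comes from, and why the stereotype can’t possibly be right.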
We’re not even very accurate when it comes to knowledge of the composition of our own party. For instance, Democrats thought 29% of fellow Democrats were gay, lesbian, or bisexual.
(H/t @JedKolko, whose tweet brought the FiveThirtyEight.com article to my attention.)
THE STICKINESS OF POLITICAL MISINFORMATION
New research shows how corrections moderate the effect of misinformation …
… but not so much for politics
Nathan Walter and Sheila Murphy conducted a meta-analysis of 65 studies on correcting misinformation about science, health, politics, marketing and crime. A summary of their findings reported in @PsyPost included the good news that in some areas, corrections had a moderate effect on counteracting misinformation.
Unfortunately, corrections don’t do much for political misinformation. According to the PsyPost.org article:
Even when you correct political misinformation, the effect of the correction decays over time – but not the initial misinformation.
That means even if, initially, people remember the correction, over time the correction fades and people remember only the initial falsehood.
The misinformation is stickier than the retraction.
It’s not all bad news. In the abstract, the researchers also offer some suggestions for issuing more effective corrections.
This is a particularly big problem with political misinformation, but the challenge exists everywhere. If we form a belief on incorrect or incomplete information – and we ought to always be getting more or better information on our beliefs – it’s difficult to dislodge the initial belief.
TOUGH SELL FOR A HEADLINE: A ROBOT didn’t KILL SOMEONE
Human error gets the blame for the fatal Uber self-driving crash …
… but how do you get people to notice?
A Tempe, Arizona, police report recently shed additional light on the March accident in which an Uber self-driving car struck and killed a pedestrian.
Despite headlines at the time decrying the safety of autonomous vehicles, the report highlights human error, not failure of the technology.
Significantly, distraction by the (back-up) driver was a factor.
According to a @WashingtonPost article, the police discovered the back-up driver was streaming an episode of “The Voice,” and “was distracted and looking down” for 31% of the 22 minutes before the crash, including 5.2 of the last 5.7 seconds.
That Tempe auto accident sounded an alarm to the public about self-driving technology.
As I said in the newsletter on April 27, “The general consensus seems to be that the tragic result means that it was a bad decision to have autonomous vehicles on the road in the first place.” (The newsletter item summarized a deeper dive on the media, public, industry, and political reaction immediately following the accident, from a piece I wrote for Smerconish.com.)
That accident was the first of several highly-publicized items casting doubt on the safety of self-driving technology.
Almost as if the news cycle were contagious, a May 4 accident in Chandler, Arizona, involving a Waymo self-driving van became international news. In the May 11 newsletter, citing and summarizing the reported facts (which told a different story than the headlines), I pointed out that “It seems clear that the Waymo vehicle literally had nothing causal to do with this crash.”
This attitude has serious consequences for decision making. When a bad result follows an innovative choice, we are more likely to blame decision quality rather than luck, compared with the same result following a status-quo choice.
Even as we recognize that these outcomes are not so clearly the fault of self-driving technology, we blame the technology because we don’t understand it. It’s new and there is no consensus around it.
Failing conventionally is more palatable. How much more palatable?
Think about the lives that could be saved if this new technology were even a few percentage points safer than human-driven cars.
The bar shouldn’t be so high, especially considering the toll (in lives, injuries, and economic costs) of human driver error – error that, almost by definition, won’t exist with autonomous driving technology.
This attitude threatens to slow innovation in the space.
It also distracts from the real inquiry into the comparative safety of autonomous vs. human drivers.
As more information filters to the public about these particular auto accidents involving self-driven vehicles, we get a more accurate picture.
But the additional stories don’t receive the same level of attention as the initial coverage.
And, even for those who notice the new information, the subsequent corrective information decays from public memory, leading to the potential that the initial information – potentially incomplete and/or inaccurate – endures.
DID YOU SEE THIS BEAR?
Ignore its sleepy expression, knotholes, and woodgrain-covered body –
The Welsh government thinks it’s a menace to the Welsh countryside
A driver in Wales claimed this old wooden statue, a roadside relic from a defunct wool mill, startled her and caused an accident. The Welsh government said it would remove the statue if the town doesn’t. This was according to an article by @MicheleDebczak in @Mental_Floss.
Maybe the bear is a danger.
But it seems more likely that self-serving bias is the real culprit here.
We have a history, especially from the perspective of behind the wheel of a car, of blaming anyone or anything besides ourselves in our accounts of accidents—even if that thing is a wooden bear statue.
We know this in part thanks to the work of Robert MacCoun and the personal account of game theorist (and terrible driver) John von Neumann, as I noted in Thinking in Bets.
We also know this thanks to these examples from real insurance forms collected by my dad, Richard Lederer, in his book Anguished English:
- “An invisible car came out of nowhere, struck my car, and vanished.”
- “As I reached an intersection, a hedge sprang up, obscuring my vision, and I did not see the other car.”
- “The telephone pole was approaching fast. I was attempting to swerve out of its way when I struck my front end.”
- “The accident was entirely due to the road bending.”
THIS WEEK’S VISUAL ILLUSION
Thanks – again – to @AkiyoshiKitaoka