“ACTION BIAS”: THE DIFFERENCE BETWEEN BUSYNESS AND PRODUCTIVITY
Doing something is MORE than doing nothing – but not necessarily BETTER
Doing something feels better than not acting – even when immediate action isn’t smart or productive, or when “not acting” includes “thinking.”
We have a bias toward action over idleness that can hurt our productivity and decision quality.
This thought-provoking piece includes a great example from soccer that reveals the cost of our bias toward doing something:
Consider the case of professional soccer goalies who need to defend against penalty kicks. What is the most effective strategy for stopping the ball? Most of us think that if we were in their shoes, we would be better off jumping to the right or to the left. As it turns out, staying in the center is best. Research has found that goalkeepers who dive to the right stop the ball 12.6% of the time and those who dive to the left do only a little better: They stop the ball 14.2% of the time. But goalies who don’t move do the best of all: They have a 33.3% chance of stopping the ball.
Nonetheless, goalies stay in the center only 6.3% of the time. Why? Because it looks and feels better to have missed the ball by diving (an action) in the wrong direction than to have the ignominy of watching the ball go sailing by and never to have moved. The action bias is usually an emotional reaction to the sense that you should do something, even if you don’t know what to do. By contrast, hanging back, observing, and exploring a situation is often the better choice.
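The cost of the bias can be made concrete with the numbers above. A quick sketch (the left/right split of dives isn’t reported in the excerpt, so an even split is assumed here for illustration):

```python
# Save probabilities by goalie action, from the penalty-kick data quoted above.
save_prob = {"left": 0.142, "center": 0.333, "right": 0.126}

# Observed behavior: goalies stay put only 6.3% of the time. The remaining
# 93.7% of dives are split evenly between left and right (an assumption).
behavior = {"center": 0.063, "left": 0.4685, "right": 0.4685}

# Blended save rate under actual goalie behavior vs. always staying center.
expected_actual = sum(behavior[a] * save_prob[a] for a in save_prob)
expected_center = save_prob["center"]

print(round(expected_actual, 3))  # 0.147 – save rate under action bias
print(expected_center)            # 0.333 – save rate if goalies stayed put
```

Under these assumptions, the bias toward diving costs goalies more than half of their potential saves.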
More from @NFLDraftJoey’s analysis of NFL decision-making
Sports (like poker!) is a great laboratory for studying decision-making. Last week, I wrote about tanking as a team-building strategy, with insights from the @ITPylon article, “Which Process to Trust?”
I want to return to the article for a separate point, which doesn’t just offer an insight for sports-management decisions, but also has some potential applications – usually overlooked – for decision-making in general:
It makes less sense to deploy a risky team-building strategy like tanking when a team is on the bubble for the playoffs (either just below or just above the cut-off).
This is partly because of resulting.
This is especially problematic in football, which has fewer games, so luck has a greater influence on a season’s outcomes.
The more luck involved in an outcome, the less you can work backward from the quality of the result to the quality of the decision.
The fewer games in a season, the greater the influence of luck. The fewer games in a season, the more the sport is like poker rather than chess.
But coaches and management are judged on a season’s outcomes. The fans, the ownership, and the media all engage in resulting.
Add to this that the influence of luck in a shorter season’s outcomes means that, in some sense, nearly all NFL teams are on the bubble. With only sixteen games to play, just a few unlucky close losses can mean missing the playoffs.
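The luck argument above is just sampling variance at work, and a quick simulation makes it visible. This sketch (team quality and the number of simulated seasons are illustrative assumptions) compares how much a team’s record can swing over an NFL-length season versus an MLB-length one:

```python
import random
import statistics

def season_records(p_win, games, seasons=5000, seed=7):
    """Simulate many seasons for a team with true per-game win probability
    p_win, returning the winning percentage of each simulated season."""
    rng = random.Random(seed)
    return [sum(rng.random() < p_win for _ in range(games)) / games
            for _ in range(seasons)]

# Same underlying team quality, different season lengths (NFL vs. MLB).
nfl = season_records(0.55, 16)
mlb = season_records(0.55, 162)

# The spread of outcomes shrinks as the number of games grows.
print(round(statistics.stdev(nfl), 3))
print(round(statistics.stdev(mlb), 3))
```

The 16-game season produces roughly three times the spread in winning percentage, which is why a good NFL team can miss the playoffs on luck alone.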
This all predicts that tanking should be relatively uncommon in the NFL, compared with MLB or the NBA, as the article nicely lays out.
Most teams have an opportunity to make a run at the playoffs when the season begins, no matter the pre-season projections. A 16-game regular season and rosters that change from year to year don’t provide the sample sizes needed for bulletproof pre-season projections or many definitive claims. It’s largely why there are only marginal differences between a 10-6 team and a 6-10 team from season to season. Case in point: over the course of two seasons, the Philadelphia Eagles finished 7-9 and then won the Super Bowl the following year. That’s a far quicker and more successful turnaround than their basketball cohorts who reside across the street can claim.
If a franchise is potentially not far from being a contender, adding a few expensive veterans may have a positive value, but only in the very short term.
It can at least satisfy the fans – remember that action bias! – that management took action to get the team to the playoffs.
But such moves don’t help the franchise toward its long-term goal of fielding a dominant team. The cost (which can include sacrificing developing players and draft picks) can deprive the franchise of value for years.
What does this all mean?
In sports (especially football), if you’re in the broad middle of the standings, you’re less likely to tank. It’s a more likely option if you are at the bottom.
The same reasoning frequently applies in business and other kinds of decisions.
What we rarely realize is that when we are in a strong position, that’s also when we have the room to take on risk.
Typically, a successful organization becomes risk-averse, unlikely to make big changes that veer from the status quo.
When you’re in a position of strength, you can lose sight of the long term, focusing instead on maintaining your success in the near term and giving multiple out-of-contention competitors a chance to topple you.
This is true whether it is the Philadelphia Eagles or Amazon or Apple.
I recently read about a good example of such open-mindedness in a novel strategic choice by Coca-Cola. Jake Rossen in @Mental_Floss told the story of how, in 1992, Coke purposely tanked Tab Clear, the new clear-cola extension of its Tab brand, to sink Crystal Pepsi.
But Zyman wasn’t content to simply try to compete with Crystal Pepsi. In his mind, Tab Clear was what consumer brands refer to as a “kamikaze effort,” a product expected to fail. Zyman believed that the presence of Tab Clear on shelves would confuse consumers into believing Crystal Pepsi was a diet drink. (It wasn’t, though there was a Diet Crystal Pepsi version available.) By blurring the lines and confusing consumers who wanted either a calorie-free drink or a full-bodied indulgence, Zyman expected Tab Clear to be a dud and bring Crystal Pepsi down right along with it.
“It was a suicidal mission from day one,” Zyman told author Stephen Denny for his 2011 business book, Killing Giants. “Pepsi spent an enormous amount of money on the [Crystal Pepsi] brand and, regardless, we killed it.”
Being in a strong position doesn’t mean you should forfeit risky options.
If anything, a strong position affords you the opportunity to consider the broadest range of strategies, including the risky ones.
ET TU, ASTRONAUT ICE CREAM?
Another childhood belief: I heard, I believed … and now I know better
As I explained in the section titled “Hearing is Believing” in Thinking in Bets, our process for forming and updating beliefs is not as rational as we’d like to think. We tend to be uncritical in forming beliefs and then stubborn about updating them.
That’s how we’re so frequently inaccurate when the source of our knowledge is “something I remembered from when I was a kid.” Baldness isn’t passed along genetically only from the mother. Immigrants’ names weren’t changed at Ellis Island.
And now I have the sad duty to inform you – if you’re a fellow child of the Space Age – that “astronaut ice cream is a lie.” In this article and video in @VoxDotCom, @PhilEdwardsInc bursts the bubble.
The article and video contain a lot of fun and interesting information about space-program food – preferences, challenges, and anecdotes. For example, John Young (who died in January 2018) was reprimanded for sneaking a corned beef sandwich on Gemini 3 in 1965.
But while corned beef might have made its way into space, ice cream never did.
At least the video shows that astronaut bacon was for real.
DAN KAHAN’S MOTIVATED NUMERACY EFFECT: UNDER SCRUTINY…
OR UNDER THE BUS?
A glimpse of tribalism in social science
In speeches and newsletter items (and, prominently, in Thinking in Bets), I frequently discuss Dan Kahan’s work on motivated reasoning. (When I refer to “Kahan’s work,” that includes his colleagues on the papers involved as well as at the Yale Cultural Cognition Project (@Cult_Cognition).)
Kahan has focused on showing how our prior beliefs, which are woven into our identity, motivate the way we process information. In particular, he has studied how political ideology drives the way we process information that conforms to or contradicts our beliefs.
Spoiler alert: we process information in a way that is biased to support and strengthen our beliefs, particularly those beliefs that define our identity, such as our political leanings.
Kahan’s study, “Motivated Numeracy and Enlightened Self-Government,” demonstrated that while people with better math skills do better at correctly interpreting data on a neutral subject (whether a skin cream is an effective treatment for a rash), those skills do not predict more accurate interpretation of identical data when that data concerns a politically charged subject (whether a gun ban worked in reducing crime).
We see the data clearly when the conclusion doesn’t challenge our identity.
People don’t have strong beliefs about skin creams so, as expected, the better a person’s ability to interpret data, the better they are at correctly determining (based on reviewing the data) whether a skin treatment is effective.
But data interpretation is clouded when the conclusion challenges our identity.
People do have strong beliefs about gun control. Because those beliefs are closely tied to their political affiliation, they are part of their identity.
Kahan found that numeracy is not a good predictor of how well people interpret data when the data concerns their political beliefs.
Being better with numbers doesn’t offer an advantage in processing information in an unbiased way when the data doesn’t support the conclusion we want to reach.
In the case of skin cream, the data drives the conclusion.
In the case of gun control, the desired conclusion drives the interpretation of the data.
That’s motivated numeracy.
Kahan’s result has been widely cited, including by me.
But now there’s chatter on Twitter that Kahan’s results may be another episode in the current “reproducibility crisis” because a study in 2017 by Cristina Ballarini and Steven Sloman included a claim that “we failed to replicate Kahan et al.’s ‘motivated numeracy effect.'”
Lee Jussim (@PsychRabble), in particular, cited this later study and declared, “Another one bites the dust. Failure to replicate Kahan et al.’s ‘motivated numeracy’ finding.”
Kahan had already replied to the Ballarini-Sloman finding with the paper, “Rumors of the ‘Nonreplication’ of the ‘Motivated Numeracy Effect’ are Greatly Exaggerated.”
He and his colleagues pointed out that, given the effect size, detecting it requires a well-powered study with a diverse subject pool – a standard met by both the original study and a replication conducted before the initial version was published.
Kahan’s initial study had 1,111 subjects and the replication had an N of over 700, with subjects heterogeneous across many categories (age, income, education, numeracy, political affiliation, and outlook).
On the other hand, the external replication attempt suffers from both flaws: a small and homogeneous sample population.
Ballarini and Sloman had a sample size of just 55, all of whom were college students. Only 1% identified as conservative-to-very-conservative.
What’s weird here is that the original study and the replication flip the usual script.
Generally, when a result does not replicate, it is the original study that is underpowered or the subject pool is insufficiently heterogeneous. Here, the well-powered study was the original and the failed replication was, in relation, underpowered and drawing from a homogeneous sample population.
Ballarini and Sloman didn’t try to hide this. After noting that Kahan’s initial study had 1,111 participants, compared to their 55, they said:
As Kahan et al. note, in this paradigm, the “strength of inferences drawn from ‘null’ findings depends heavily on statistical power,” and our sample may have been too small to detect the effect of motivated numeracy those researchers found.
Given the small sample size of the Ballarini-Sloman study, you would expect a failure to replicate. Kahan’s response pointed out that “A power analysis indicates with an N of 55, the probability of replicating our results was 0.09.”
(Jussim, to his credit, later in the thread admitted the declaration “went too far” and “was overly dramatic,” although he maintained “the orig finding was implausibly extreme” and said, “I am now taking bets on whether independent researchers can replicate the pattern.“)
So what gives? The replication crisis is supposed to be about pushing scientists to be more rigorous in their methodology, especially in the size of their sample and diversity of their subject pool.
This seems like a case where the headline (“result did not replicate”) somehow obscured a reasonable comparison of the studies’ statistical methodologies in evaluating motivated numeracy.
I suspect this has something to do with a form of tribalism. In this case, the tribe is whether you are part of the open science movement.
We know your ideology can drive you to identify with a tribe. But once you belong to the tribe, the tribe drives your ideology.
One of the points of Kahan’s work on motivated numeracy is that we don’t use “System 2” skills in math or reasoning when the decision is based on something that’s protective of or threatening to our identity.
Ironically, if there’s a tribal rush to throw a study under the bus when there is any claim of non-replication (and likewise when there is an immediate rejection of such a claim), it could be an example of motivated numeracy.
GM GENIUS: SEASON 2!
Using fantasy football to teach students decision-making skills
I am the co-founder of the nonprofit HowIDecide.org (@HowIDecide). How I Decide’s mission is to equip youth with better critical thinking and decision skills to manage everyday habits, in-the-moment choices, and deliberate decisions throughout their lives.
Students ages 13-22 are eligible to participate in the free competition, which awards over $25,000 in college scholarships and other prizes. It’s available as a mobile app for both iOS and Android.
Nearly 1,000 students participated in Season 1, and we are expecting significant growth this season.
The competition coincides with the 2018 NFL Season, and you can get details on the website.
THIS WEEK’S ILLUSION: CHECKERSHADOW
h/t Edward H. Adelson, Perceptual Science Group @ MIT
The squares marked A and B are the same shade of gray.