
“When An Auto Accident *Doesn’t* Happen” — Annie’s Newsletter, Sept 14, 2018

Why “Tesla Autopilot saves a life” is barely news

Tesla’s Autopilot technology includes a collision-avoidance feature that an owner recently credited with saving his life. The owner (who has a YouTube channel, Tesla Canuck) posted the dashcam footage, and the video and account appeared on a news site covering the electric-transportation industry. The author of the piece, Fred Lambert, points out that this wasn’t the first time the site reported on Autopilot’s side-collision-avoidance feature saving a vehicle’s occupants. He noted the lack of coverage such incidents receive:

“When a Tesla vehicle crashes while on Autopilot, it gets an incredible amount of media attention. I think it’s only fair that those instances where the Autopilot actually avoids a crash get the same amount of attention, but it rarely happens.”

The imbalanced coverage is due to an unfortunate convergence of two things.

First, as I mentioned in last week’s newsletter about evaluating predictions (of psychics and everyone else), we tend to focus on commissions more than omissions.

We notice when something happens. It is harder to notice the absence of something happening.

Second, bad news is much more attention-grabbing than good news. (Damn you, amygdala!)

News outlets are more than happy to oblige negativity bias, serving up lots of stories of doom to keep us watching and reading.

A death or potentially deadly auto accident grabs more attention than someone in a car getting to their destination without incident.

This convergence will have a negative effect on the public’s acceptance of a technology that has the potential to save a lot of lives.

To make reasonable judgments about the safety of autonomous vehicles, the public needs to have good data. But the public isn’t getting an accurate view of the risks.

There is wall-to-wall coverage whenever there is an accident resulting in injury or death but crickets when a self-driving car saves a life.

The public is getting an unbalanced view of the risks, limitations, and benefits of the technology. That’s going to impede people’s ability to make rational decisions as they try to decide for themselves whether they want those cars on the road in their communities.

The concept has been challenged, but I think the challenge is (at most) misplaced

The concept of loss aversion (“the tendency to feel the pain of a loss more acutely than the pleasure of an equal-size gain,” h/t Richard Thaler) has come under attack recently, based on a critical review of the concept by David Gal and Derek Rucker. As Professor Gal said in a summary of the review in Scientific American, “loss aversion is essentially a fallacy.”
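The asymmetry Thaler describes is often modeled with a value function that weights losses more heavily than equal-sized gains. Here is a minimal illustrative sketch (not from the newsletter); the parameter values α = 0.88 and λ = 2.25 are Tversky and Kahneman’s published median estimates, used here purely as assumptions:

```python
def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to a reference point.

    Gains are valued as x**alpha; losses are scaled up by the
    loss-aversion coefficient lam (> 1), so a loss "hurts" more
    than an equal-sized gain "feels good."
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A $100 gain feels smaller than a $100 loss feels painful:
gain = value(100)    # roughly 57.5
loss = value(-100)   # roughly -129.4
print(abs(loss) > gain)  # losses loom larger
```

Nothing about this sketch depends on the exact parameters; any λ greater than 1 produces the asymmetry that the loss-aversion literature describes.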

Barry Ritholtz wrote an excellent explanation of how Gal and Rucker didn’t sufficiently rebut the accumulated research on loss aversion. I recommend you check out his reasoning.

I just wanted to highlight a few things from the debate. First, as Ritholtz points out in his conclusion, other cognitive biases can be conflated (or coexist) with loss aversion:

“Where I suspect the authors went astray was in the conflation of various cognitive failures, biases and heuristics with loss aversion. Consider for a moment a Las Vegas casino. If people were truly loss averse, the counterargument might suggest that casinos shouldn’t exist. But they not only survive, but thrive. This is due to other powerful cognitive errors: 1) people tend not to understand how the odds are stacked against them and in the house’s favor; 2) others understand the probabilities, but irrationally believe they are an above-average gambler; 3) others simply gamble for its entertainment value and are willing to accept the inevitable losses.

The mere fact that gain-seeking behavior exists hardly eliminates loss aversion as a phenomena.”

Second, the burden should be pretty high for dislodging a principle with so much common sense, real-world experience, and research behind it. Not that an established principle should be unassailable or incapable of modification or even reversal. But we should be skeptical about a claim that such a robust idea as loss aversion is a fallacy.

On Twitter, Richard Thaler got involved in the discussion of Ritholtz’s critique of Gal and Rucker, and he made an excellent additional point by citing the famous article he wrote with Matthew Rabin, “Anomalies: Risk Aversion,” in 2001.

Rabin and Thaler pointed out in that article why, when you measure for loss aversion at stakes too small for the subjects to care about, loss aversion isn’t going to seem very robust. (What’s “too small,” of course, varies from person to person.)

This has been referred to as the “peanuts effect.” When you’re playing for peanuts, an amount of money that doesn’t matter to you, you don’t care much about losing it.

Gal and Rucker’s paper – I’m referring to a prepublication version – recognizes that loss aversion is absent primarily in experiments involving low-stakes choices:

“The stakes of the outcomes in risky choice experiments that do not show evidence of loss aversion tend to be low to moderate (from less than $1 to as high as $100). Conversely, some experiments that involve higher stakes (e.g., several hundred dollars) have shown a tendency among individuals to choose the safer alternative.”

If we make the stakes high enough, there is some point where pretty much everybody is going to feel the loss more than the win. But it is also true that if we make the stakes small enough, there is some point where pretty much everybody isn’t going to care either way.

Poker is one of those real-world activities in which you can see the robustness of loss aversion.

And it is also one of those real-world activities where you won’t see loss aversion if the players are playing for peanuts.

If you see a successful high-stakes poker player sitting in a low-stakes game, you’re likely not going to see that player’s A-game.

When I played poker, I tried to use the fact that it is hard to play your best game at small stakes as an exercise to improve my decision making.

I would sometimes purposely play very small stakes games, much smaller than I normally played, so it felt like I was playing for peanuts. The goal? To play as well as if I were playing for high stakes to train myself to focus on the quality of my decisions and the execution of strategy, rather than how much money I might be winning or losing.

If you’re in a game where the best play is to execute a big bluff, you need to be able to execute that play regardless of whether bluffing means risking $5 or $5,000. Either way it is the correct play and you don’t want money to get in the way.

Betting $5,000 on a bluff can be scary, especially if that’s all you have in front of you, or all you wanted to allocate to the game, or if you just lost a lot – or numerous other conditions that could influence us other than whether it’s the best decision.

By training myself to play well when the stakes were small, I was training myself to be able to make those big bluffs because I was learning that executing good strategy is executing good strategy no matter the stakes.

It was difficult (and I think productive), but it reminded me that we don’t always make the same decisions when the stakes are much lower (or much higher).

Insight on the effects of resulting and outcome-based evaluations

Morgan Housel wrote a great piece about the additional personal risk that investment professionals face when considering a non-consensus choice.

“If you invest in a divergent system and it goes wrong, you have massive downside for your career personally, separate from the organization. It could be the right decision – it was probabilistically a great bet. But if it goes wrong, and it looks different, you could get fired. And if it goes right, you still may not have enough upside career-wise.” (Quoting private-equity investor Brent Beshore.)

Housel’s explanation of this was simple and elegant: We measure performance against some reference point. When someone considers a familiar choice, the reference point is low.

We have seen the decision before. We understand the choice. There is consensus that the choice is a sound one.

That means that if it doesn’t work, there’s room to blame luck.

But if someone considers a novel choice, we don’t have a reference point. There isn’t consensus that the decision is sound.

Collectively, then, we judge outcomes following novel choices to a higher standard.

That leads to avoidance of the road less traveled.

I loved Housel’s piece and tweeted this summary:

“We judge outcomes borne of conventional choices much less harshly than those borne of unconventional ones.

What kinds of choices do you think that drives people to (rationally) make?”

Jim O’Shaughnessy quoted my tweet, replied, and this led to a conversation (which also included Brian Portnoy) about how to get people to weather that storm, make the best long-term choices, and not be disproportionately worried about the failures.

The replies to O’Shaughnessy’s tweets are well worth the read.

My takeaway? Turn tribalism on its head, as I mentioned in that thread:

If we’re a part of the tribe that rewards taking well-considered divergent paths, the tribe can help us overcome our instincts to cast blame after a bad outcome.

Think about it

This past week, Daniel Pink tweeted about a Baltimore elementary school that replaced detention with meditation. The article, on Upworthy, points to positive effects (lower suspensions, better attendance) reported by several schools trying it.

Obviously, these results don’t carry the weight of controlled, large-scale scientific experiments. But the schools involved think it helps. It seems like it ought to help.

There is a larger effort in science to evaluate the benefits of mindfulness.

One of those studies recently concluded that even a little meditation – by people with little training – has positive effects.

Catherine Norris and colleagues, reporting in Frontiers in Human Neuroscience, gave subjects a 10-minute guided meditation tape and tested their ability to focus against a control group.

They found the small dose of meditation improved “executive attentional control even in naïve, inexperienced meditators … suggesting that individuals who are merely initiating a meditation practice may reap benefits after a single brief session.”

They successfully replicated the result in a separate study on a different attention task. They also measured subjects’ neural activity and found changes consistent with improved attention.

(This is a very brief and general summary of what they studied and found. If you’re interested in the strength of the results, as well as the limitations, I recommend that you read the study itself. H/t MediBulletin for its article about the findings, which brought the item to my attention.)

So what does mindfulness have to do with decision making?

A necessary component of making a high-quality decision is decision fitness: being in an emotional state where we are capable of thinking clearly and rationally.

Being in a decision-fit state is the foundation of any good decision.

If we don’t have the emotional component in place, our ability to execute on all the other elements of good decision-making can fall apart.

We know, for example, that when we’re emotional we’re more likely to overvalue our present-self in relation to our future-self. We’re also much more likely to be reactive.

That’s where mindful practice comes in. The research suggests that incorporating mindfulness into your life means you will be much more likely to be decision fit—calmer, less reactive, more compassionate, and more attentive.

Our future depends on it

Jon Haidt and Greg Lukianoff, authors of The Coddling of the American Mind, recently wrote an op-ed for the New York Times about the benefits of free play (unstructured and unsupervised) and its importance to a healthy, functioning democracy.

“Democracy is hard. It demands teamwork, compromise, respect for rules and a willingness to engage with other opinionated, vociferous individuals. It also demands practice. The best place to get that practice may be out on the playground.”

Their point applies to all kinds of strategic thinking.

Lack of play is very bad for decision making, because play is where kids figure out things like how to negotiate, how to pick teams, and how to decide on rules.

It’s a great piece (and a great book).

A treasure for football fans – and anyone looking for tips on leadership and culture

Michael Lombardi has had an illustrious career in professional football.

He has been an executive or coaching assistant with five NFL franchises.

He has three Super Bowl rings.

He has worked with legends like Bill Walsh, Al Davis, and Bill Belichick.

He is also a skilled sports analyst and student of decision science (and a friend of mine).

His book, Gridiron Genius: A Master Class in Winning Championships and Building Dynasties in the NFL, came out this week.

Obviously, the book is great reading for football fans. It is also a must-read for lessons on leadership, even for those who don’t follow football.

In particular, using lessons from his time working with Bill Walsh and Bill Belichick, the book focuses quite a bit on the importance of culture.

Lombardi drives home that no matter how good the strategy, if you don’t have a healthy and productive culture set up around you, the great strategy is going to fall apart.

That makes this a great read for people trying to get tips on leadership, particularly leadership around how to set up a productive culture where people can really thrive.

H/t Akiyoshi Kitaoka

Does this look like it’s moving?