
“Steven Pinker is on One Track; The New York Times is on Another” – Annie’s Newsletter, March 2nd, 2018

It’s the start of a hypothetical moral question, with no right answer and a lot of emotion

I have been watching with great interest as Steven Pinker’s new book, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, has attracted some pretty vehemently polarized reactions. I have to admit, I have been a bit confused at why some people seem to be so mad at his core message: things have been generally getting better over the past few hundred years. By “things,” Pinker is referring to measures including poverty, violence, infectious disease, war, and starvation, to name just a few.

After reading this book review by @jenszalai @NYTimes, I think I have a better understanding of why some folks are so mad.

The review’s title gives you a preview of its perspective:
“Steven Pinker Wants You to Know Humanity Is Doing Fine. Just Don’t Ask About Individual Humans.”

The title made me realize that many are reading Enlightenment Now as a 550-page treatise on The Trolley Problem, a classic thought experiment that evokes strong emotional reactions.

The Trolley Problem has many variants, but it basically goes like this: You see a trolley barreling down a track toward five workers. The only way to avoid killing them is by pulling a lever to divert the trolley to a side track, but there is one worker on that track who will be killed if you do it.

Do you pull the lever?

Most people say, “yes, I would pull the lever.” It seems like a simple problem: one person is getting run over, instead of five.

But here’s where the thought experiment gets emotional.

Imagine you are standing on a bridge next to a man and you see the trolley coming at the five workers. The only way to stop the trolley is to push the guy next to you off the bridge in front of the trolley.

Would you push one person in front of the trolley (killing the guy) to save the five workers?

To some people, it’s the same question as pulling the lever. One person dies to save five others. But for most people, even if they would pull the lever in the first example, they wouldn’t push the guy in front of the trolley in the second. They view pushing that guy as murder.

Pulling the lever and pushing the guy in front of the trolley are not viewed as morally equivalent. For most people, it matters how you get to the outcome of one person sacrificed to save five.

And as with all moral questions, emotions can run high. The Trolley Problem (which has no right answer) divides people. (Perhaps you even had an emotional reaction to my saying there is no right answer!)

One of Pinker’s examples, noted in the review, is the overall improvement in living conditions throughout the world from expanded economic opportunity. Yet that improvement has come at the expense of the American lower middle class.

Sounds like the trolley problem. We improved the lives of many around the world at the expense of folks who lost their jobs to overseas competition or automation.

Should we, on balance, think the world is better for it?

I guess that depends on your answer to the trolley problem. I have a pretty good guess as to Jennifer Szalai’s answer.

Will you like it? 89% if you follow me on Twitter, 94% if you get this newsletter

Earlier this week, I encouraged people who follow me on Twitter to sign up for Michael Smerconish’s (@Smerconish) newsletter, “The Ish List.”

The newsletter, part of an expansion into original content and analysis (including two pieces I contributed), is trying to represent a more down-the-middle political view. While even an independent political standpoint is still spin, Smerconish is trying to give a much more two-sided view of things.

I think that his voice is really valuable in political discourse right now.

True to putting percentages on my beliefs, instead of saying “I’m sure you won’t regret it” or “You’re almost certainly going to like it,” I put the likelihood at 89% that you’d be happy following my recommendation. I love that some people picked up on that.

In that regard, see the next item ….


Andrew Mauboussin wants your help in finding out

Andrew Mauboussin (@amaub) is conducting a survey on the range of probabilities that people assign to natural language words meant to express uncertainty (like “maybe,” “likely,” and “almost always”).

Go take the survey. It’s fun, and after you take it, you can get a plot of where you sit in comparison to other people on what probabilistic words mean. I’m guessing you will be struck by how wide the range of probabilities is that we assign to these types of terms.

There’s a lot at stake when we use imprecise words to communicate probabilities. We may intend one thing, but our listener may interpret what we say as something else entirely, and that can be of great consequence.

There’s a story (retold many places, but one I saw most recently in a Forbes article) famously illustrating what’s at stake when we communicate these terms.

Buffett won, but who made the better bet?

Long Bets is a non-profit promoting competitive, accountable predictions, with the proceeds from bets going to charity. Long Bet #362 started as a prediction by Warren Buffett, which he offered to back with $500,000: that over a ten-year period, an S&P 500 index fund would outperform a portfolio of hedge funds, with performance measured net of fees, costs, and expenses.

Ted Seides, a founder of Protégé Partners, took the bet and picked five funds-of-funds whose results were averaged and compared against the Vanguard S&P index fund. In the end, Buffett won the bet. Inc.’s article about the end of the bet expressed a conclusion that most people likely came to.

But does the result of this one bet really support the case that Seides was, to use Inc.’s language, “just wrong” here? This is one bet. The wager was even money. And we don’t know what the probability was of either side winning. Maybe Buffett was 10% to win and got lucky. Maybe he was 90% to win and Seides made a horrible bet. The problem is we can’t really know, and declaring anyone to be wrong or right here is resulting: judging the quality of the decision by the quality of the outcome.
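To put a rough number on how little one outcome tells us, here’s a minimal sketch (my own toy Bayesian calculation with made-up priors, not anything from the bet itself): start agnostic about Buffett’s true chance of winning, observe the single win, and see how much uncertainty is left.

```python
import numpy as np

# Hypothetical sketch: we don't know Buffett's true chance of winning (p).
# Start with a flat prior over candidate values of p, observe one win,
# and apply Bayes' rule.
p = np.linspace(0.01, 0.99, 99)       # candidate win probabilities
prior = np.ones_like(p) / len(p)      # agnostic prior: all values equally likely
posterior = prior * p                 # likelihood of one observed win is just p
posterior /= posterior.sum()          # normalize

mean = (p * posterior).sum()
# Probability that Buffett was actually the underdog (p < 0.5) despite winning:
underdog = posterior[p < 0.5].sum()
print(f"posterior mean win probability: {mean:.2f}")
print(f"chance Buffett had the worse side anyway: {underdog:.2f}")
```

With these hypothetical numbers, the posterior mean lands around 0.66, and there is still roughly a one-in-four chance that Buffett had the worse side of the bet even though he won. One even-money outcome just doesn’t settle it.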

In fact, Seides made this point in a piece, appearing while the bet still had eight months to run, titled “Why I Lost My Bet With Warren Buffett.”

I have to agree with Seides here. I’m not convinced by the outcome that Buffett necessarily had the better end of the wager. Nor am I convinced by the outcome that Buffett had the worst of the wager. The one outcome itself just isn’t enough to tell me who had the right side. We’d have to dig a lot deeper to determine who made the best bet.

The outcome we can all be happy about is that a great charity, Girls Incorporated of Omaha, got $2,222,278.

Long Bets is great, both for its promotion of philanthropy and discussion of ideas.

And it is a great example of the idea of thinking in bets!


I am a bit of a collector of visual illusions, partly because they remind me, as Daniel Kahneman explains in Thinking, Fast and Slow, that knowing something is an illusion doesn’t make it go away. The same is true of cognitive illusions or biases.

Plus, visual illusions are fun! So I am going to start sharing a visual illusion each week in this newsletter.

Via @AkiyoshiKitaoka: