For the current season of the Alliance for Decision Education podcast, I had the pleasure of interviewing Dr. Steven Pinker (whom I have known since I was 24 years old!).
Steven Pinker is an experimental psychologist who conducts research in visual cognition, psycholinguistics, and social relations. He grew up in Montreal and earned his B.A. from McGill and his Ph.D. from Harvard. Currently the Johnstone Professor of Psychology at Harvard, he has also taught at Stanford and MIT.
Steven has won numerous prizes for his research, his teaching, and his books, including The Language Instinct, How the Mind Works, The Blank Slate, The Better Angels of Our Nature, The Sense of Style, and Enlightenment Now. He is an elected member of the National Academy of Sciences, a two-time Pulitzer Prize finalist, a Humanist of the Year, a recipient of nine honorary doctorates, and one of Foreign Policy’s “World’s Top 100 Public Intellectuals” and Time’s “100 Most Influential People in the World Today.” He was Chair of the Usage Panel of the American Heritage Dictionary and writes frequently for the New York Times, the Guardian, and other publications. Steven’s twelfth book, published in 2021, is called Rationality: What It Is, Why It Seems Scarce, Why It Matters.
Our conversation was wide-ranging, which is not surprising, since Pinker is quite a polymath. I had so much fun with this one, and I learned a lot.
Among the topics we discussed:
Do our brains trick us into thinking things are worse than they are?
Why we make irrational decisions, and some useful tools to help improve our judgment.
How our brains are wired for nostalgia and why we erroneously think that the world is getting worse.
How expected value can be a useful decision-making tool.
And finally, game theory and negative externalities, revealing what hockey players wearing helmets have in common with ordinary citizens trying to save the planet!
Here is a transcript of our conversation (slightly edited for brevity).
Annie: I’m super excited to welcome my guest today, Steven Pinker…What people don’t know is actually how long we’ve known each other.
Steven: Should we divulge the number of years or just say it’s been a while?
Annie: We could divulge. I’m wondering what your memory of the first time you met me was, because I know what my memory of it was, which involves sort of sheer terror, because you were such a larger than life character.
Steven: Well, and you were a rising star in our joint field of developmental psycholinguistics. And if memory serves, we met at the Boston University Conference on Language Development. I’m going to say 1991—1990, maybe 1991.
Annie: I’m going to say 1990.
Steven: Okay. And I think you’re right. I think it was 1990, and your advisors at the time were dear friends and strong scientific influences of mine, the late Lila Gleitman and her husband Henry Gleitman.
Annie: That’s exactly right. So Lila would love to talk about how I was presenting the paper, and then you stood up and asked me a question, and that I stayed toe to toe with you. That was the way she described it. I remember it as me sort of wilting under total fear.
Steven: No, by no means, absolutely no. No, Lila had it right. You hit it out of the park. You were clearly a rising star then and more than fulfilled your promise. And congratulations in advance, if I may say this on the podcast?
Annie: You may, you may.
Steven: I don’t know if your listeners are aware that Annie is going to defend her Ph.D. at the University of Pennsylvania. By the time this airs, maybe she will already be Dr. Duke. And congratulations to Annie!
Annie: Yeah, well, thank you very much. I mean, you know, this is obviously partly for Lila. You know, I was ABD (all but dissertation), and so getting that done feels really good to me.
To start, I’d like to head to something that you said about this issue of availability bias, which is simply this: the more easily we recall something (the more vivid it is, the more emotional charge it carries, the easier it comes to mind), the more we use that ease of recall as a proxy for figuring out how frequently it occurs in the world. And when we think about what the news is showing us, you know, it’s not showing us, “Oh, in this place, everything went great today.”
Steven: Indeed, for a number of reasons. One of them is that there is a negativity bias in human psychology. We tend to dwell more on the negative than the positive. And when we remember something, it’s not that we remember the positive things better. We remember negative and positive things, but we tend to forget how negative the negative ones were at the time. Memory kind of sands off all of the nasty aspects, and so, in that sense, we’re wired for nostalgia. But also, as you suggested, a lot of good things consist of nothing happening. That is: a city that stopped an attack by terrorists. A country that is not at war.
And so, just to take an example that was relevant to my childhood. There’s no war in Vietnam. That’s huge. Or at least, if I would think back to what it was like when I was a young teenager or a child, if someone were to say, “No war in Vietnam,” that would have been, like, massive headlines. That’s been true for 40 years, but it’s not news anymore. But it’s still true if you live in Vietnam. But we don’t call it news.
Likewise, 178,000 people escaped from extreme poverty yesterday. And the day before, and the day before that, and the day before that. And that adds up so that now the rate of extreme poverty in the world has fallen over centuries from 90% of humanity to less than 9% of humanity. But it never happened on a Thursday in October. And so it never generated a headline. But it transformed the world.
So there are all these ways in which our world changes that are just systematically missed by the news, the news being a non-random sample of sudden events. And it’s easier for something to go wrong all of a sudden than for something to go right all of a sudden. So combine that with the availability heuristic, and people are misled into thinking the world is always getting worse.
“There is a negativity bias in psychology . . . when we remember something, it’s not that we remember the positive things better. We remember negative and positive things . . . we tend to forget how negative [things] were at the time . . . memory kind of sands off all of the nasty aspects. And we’re kind of, in that sense, we’re wired for nostalgia.” – Steven Pinker
Annie: And then, you know, how do you think about, you know, as you just pointed out, about nostalgia, right? I mean, I think that if you stepped back and said to somebody, from a completely rational point of view, “Would you want to be alive in the 1600s?” Even if you were really, really rich, right? We know that your emotion is—yes, I’d love to be the richest person in town, even if it was 1650. But when you actually step back, you sort of realize, well, let me think about life expectancy. You know, let me think about—was there penicillin? What was going to happen if I got a tooth infection, right? Was I going to live through that?
Steven: As someone put it, “Would you like your surgery with or without anesthesia?”
Annie: Right, with or without anesthesia, which is so great. You know, I think about this relationship between how we’re processing the world—what Kahneman calls System 1—and when we can actually get into the more rational parts of our thinking. There does seem to be a conflict between those two things, and it creates a narrative which makes us think that things are really bad. Even if we step back, it’s quite obvious that being middle class now is way better. In fact, being poor now is way better than being rich in 1600.
Steven: Oh, by far. Or 1800 or 1900. Absolutely. I mean, you know, the sons of presidents would sometimes die of an infected abscess from playing tennis.
Annie: Yes, that did happen.
Steven: I forget which son of which president that was. But, you know, this was before there were antibiotics. And that’s to say nothing of comforts, even over the course of my own lifespan. You still get a lot of nostalgia, even now, about the 1960s. But in a typical middle-class family in the 1960s, like mine, we didn’t have air conditioning. Talking to someone in a different city, long distance . . .
Annie: That was very expensive.
Steven: Oh, my God. It was like, “Come run to the phone! It’s long distance.”
Annie: When I would call my parents, I would reverse the charges.
Steven: Same here. Absolutely. And, you know, you’re not that old. I like to think that I’m not that old. And that’s just in our lifetimes. And people are apt to forget those things as part of progress.
And of course not everything gets better . . .
Annie: Of course not.
Steven: . . . for everyone all the time. That would be a miracle. That wouldn’t be progress. And the whole reason progress happens is that people pay attention to the problems, and the threats, and the dangers, and they try to fix them. So it doesn’t mean we can sit back and relax. Quite the opposite. Our forerunners didn’t; that’s why we enjoy progress now. And we had better hop to it ourselves: our descendants could have it better, or they could easily have it worse.
“The rate of extreme poverty in the world has fallen over centuries from 90% of humanity to less than 9% of humanity. But it never happened on a Thursday in October. And so it never generated a headline. But it transformed the world . . . so combine that with the availability heuristic, and people are misled into thinking the world is always getting worse.” – Steven Pinker
Annie: One of the things that you talk so deeply about in Rationality is the idea of expected value. So why don’t we just start with what is it? What is expected value?
Steven: Expected value is just—it is a concept from the theory of rational decision-making. How ought a rational decision maker choose among a set of alternatives under uncertainty? You can quantify it in games of chance like poker, which of course is the source of probability theory going back to Pascal.
But in life, we live with a lot of risk, where we just don’t know what the probabilities are. And that’s one of the reasons why games of chance can be so informative. But we still have to estimate as best we can, and expected utility is just—you consider all the things that could happen. You assign them probabilities, and you multiply the probability by the value of the outcome, positive or negative.
You add them up, and the theory of expected utility is you choose the option with the greatest expected utility—that is, the greatest sum of the products of the risks times the rewards. Now, utility adds the twist that what we strive for, what means something to us, is not the same as what you can count, not the same as, you know, dollars. And so utility makes it a little more psychological than sheer expected value. But it’s basically the same idea.
Now, Tversky and Kahneman, famously—and others as well. There are a number of Nobel Prizes that went to people who showed that the theory doesn’t exactly fit human intuition. Including, in fact, Daniel Ellsberg, back in the early fifties—famous later for leaking the Pentagon Papers, and, in a bizarre episode, for having his records pilfered from a psychiatrist’s office by Richard Nixon’s henchmen, one of the more bizarre twists of the Watergate scandal. And he’s still with us; he just wrote an op-ed a couple of weeks ago. Anyway, in the early fifties, when he was a young man, he was one of the brilliant mathematicians who showed that even though the theory of expected utility seems absolutely impeccable (how could you possibly depart from it?), departing from it is actually not so hard. So there has been a debate ever since over whether humans actually are rational decision makers in the von Neumann sense.
“In life, we live with a lot of risk, where we just don’t know what the probabilities are. And that’s one of the reasons why games of chance can be so informative . . . you consider all the things that could happen. You assign them probabilities, and you multiply the probability by the value of the outcome, positive or negative. You add them up, and the theory of expected utility is you choose the option with the greatest expected utility.” – Steven Pinker
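To make the arithmetic concrete, here is a minimal sketch in Python of the rule Steven describes; the options, probabilities, and dollar values are invented purely for illustration.

```python
# Expected utility: for each option, multiply each outcome's probability
# by its value, sum the products, and choose the option with the largest sum.
# (Hypothetical options and numbers, purely for illustration.)

options = {
    "safe_bet":  [(1.00, 50)],                  # a certain $50
    "coin_flip": [(0.50, 120), (0.50, -10)],    # 50% win $120, 50% lose $10
    "long_shot": [(0.05, 1000), (0.95, -20)],   # 5% win $1,000, 95% lose $20
}

def expected_utility(outcomes):
    """Sum of probability * value over all possible outcomes."""
    return sum(p * v for p, v in outcomes)

for name, outcomes in options.items():
    print(f"{name}: EU = {expected_utility(outcomes):+.2f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("choose:", best)  # coin_flip: +55.00 beats +50.00 and +31.00
```

The numbers are the whole game: change one probability or payoff and the ranking of the options can flip.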
Annie: Yeah. So, sort of thinking about that: do you feel that at the base of every decision there is naturally a calculation of expected value or expected utility? Like if I’m trying to decide which route to take to work, or frankly whom to marry, at the base of it there is some sort of calculation going on. The problem, thinking about Kahneman and Tversky’s work, or Thaler’s work, or the work of so many people in that field, is that the way we process the world and process information is less than rational.
That this calculation is going on, but the inputs are so bad that we end up with this issue of not actually acting necessarily in our rational best interest. In other words, if we could get to be perfectly rational about the way that we process information, then our decisions would necessarily end up being rational. But the problem is that we’re not perfectly rational in the way that we process information. At least not as we would understand rationality.
Steven: Yeah, I think that’s certainly a very reliable and important way of analyzing it. I mean, there are circumstances where it can be the other way around.
Where, again, another classic finding in psychology, going back to Paul Meehl in the 1950s, is that human experts often make poorer predictions than even a simple statistical formula. We’re not even talking about fancy-schmancy AI; this was before there was AI. You just weigh everything, you add it up, and the formula does better than the judge, the psychiatrist, the probation officer, the stock analyst. That’s a case where the human is good at picking up the individual cues. That is: how trustworthy is this person? How many years of education do they have? But we’re not very good at integrating them. As, I think, Paul Meehl put it: if you went to the supermarket checkout line, put all your groceries on the belt, and said to the cashier, “Well, it looks to me like it’s about $72. Is that okay?” they would say, “No, you can’t—no human mind could look at a bunch of groceries and know what it adds up to.” And the point is that when it comes to multiple cues, each of which jerks the probability up or down by a little bit, we’re really not very good at integrating them all.
We do have an analog intuitive sense, but it’s often outperformed by a numerical formula. It’s funny that this has not gotten as much attention as it deserves in the new concern over AI, and algorithms, and prediction, where people call attention to the errors that AI can make, which is true. But they don’t ever compare them to the errors that humans make, which are . . .
Annie: Well, that is true.
Steven: . . . almost always worse. Just going back, though, to your earlier point about the information we weigh in making a decision versus how we combine it: Tversky and Kahneman pointed out the psychological basis for it, which is that sometimes, if there are a lot of factors and we’re considering them consciously, using our System 2, we kind of get overwhelmed.
So instead of weighting them and aggregating them into one humongous sum of pluses and minuses, we might consider them two at a time. When you’re deciding among, you know, three cars or three universities or three romantic partners, you say, well, it’s all too complicated, let me just think of gas mileage. Then you say, well, actually I should think of the repair record. And when you shift your criteria, and you have a threshold where as long as something is good enough (if it’s above, you know, 30 miles per gallon, I’ll include it; otherwise I’ll reject it), that kind of thinking, they showed, can lead to intransitivities. We all sense that, of course. Sometimes we make a hard decision, and sometimes it’s intuitively obvious and we just go for it. It’s just, as we say, a no-brainer, meaning, really, “no System 2,” because the brain certainly is involved.
But there are times when your head is turned by the last thing you saw, when the last argument you hear always sounds so convincing until you hear the next one. Or you find yourself going around and around in circles: the more you think about it, the more you compare the options along a different dimension with each comparison.
And so you go around and around in circles. That can happen, and that whole process would be a violation of at least one criterion of rational decision-making, namely maximizing expected utility.
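Steven’s point about comparing options two at a time can be made concrete with a toy example. The cars and scores below are invented, and the rule shown (prefer whichever option wins on more attributes) is just one simple stand-in for the pairwise heuristics he describes, but it produces exactly the cycle he warns about.

```python
# Three hypothetical cars scored on three attributes (higher is better).
cars = {
    "A": {"mileage": 3, "repairs": 1, "price": 2},
    "B": {"mileage": 2, "repairs": 3, "price": 1},
    "C": {"mileage": 1, "repairs": 2, "price": 3},
}

def prefer(a, b):
    """Pairwise rule: prefer the car that wins on more attributes."""
    wins_a = sum(cars[a][k] > cars[b][k] for k in cars[a])
    return a if wins_a > len(cars[a]) - wins_a else b

print(prefer("A", "B"))  # A (wins on mileage and price)
print(prefer("B", "C"))  # B (wins on mileage and repairs)
print(prefer("C", "A"))  # C (wins on repairs and price)
# A beats B, B beats C, and C beats A: an intransitive cycle, which
# violates expected-utility theory. A Meehl-style weighted sum over all
# attributes gives each car a single number, and numbers cannot cycle.
```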
Annie: Yeah, gotcha. So, you mentioned von Neumann and Morgenstern and game theory. I heard this from George Dyson, Freeman Dyson’s son: he found, in von Neumann’s personal belongings, what we would call a marker, which is a letter of credit from a local place where von Neumann was playing poker, apparently famously quite badly, and losing money. But the theory of games was really based on a kind of simplified version of poker, for the reason that, as von Neumann was thinking about decision-making and game theory, he recognized that this issue of uncertainty, in particular hidden information, is incredibly important.
So it’s unlike a game like chess, which would go into the category not of a game in the academic sense but of a calculation, because you can see all the pieces and there’s no strong influence of luck. Nobody’s rolling the dice. Poker is obviously in a different category, because you can’t see all the pieces. I can’t see your cards, and I don’t know what cards are yet to come, which makes it a really interesting problem for decision-making.
Steven: Oh, absolutely. No, very interesting. And of course, von Neumann and Morgenstern, in the same book, really invented two amazing intellectual systems: the theory of expected utility and, as you mentioned, game theory. Namely, what do you do when the outcomes, the payoffs, the utilities depend on what the other guy does, and he or she is making decisions based on what you do? And amazingly, they explore both in the same book.
Annie: Yeah, so you actually have a really great example in the book about climate change and negative externalities, sort of thinking through these kinds of problems. I would love to hear your thoughts on it. Run us through that example and, you know, how we can use game theory to get to a more rational place on these very complex problems.
Steven: Yeah. That discussion was partly inspired by a conversation that I had with a colleague who said, “We’re professors. We’ve got to just convince everyone it’s in their interest to cut back their emissions. Because who wants to live in a warmer world?” The problem, though, is that it actually isn’t in any individual’s interest to cut back on their emissions, because then, you know, you’ve got to stand in the rain while someone else is driving in a nice comfy SUV. You’re sweltering in the summer while they crank up the air conditioning and relax in their sweaters in the middle of August. And it’s example after example, because if you conserve by yourself, you’re not going to save the planet, but you are going to bear all the costs. You’re going to be the one suffering, whereas all those other people get to enjoy the advantages of cheap energy. It’s only if everyone conserves at the same time that any of us are better off. So paradoxically, it’s irrational to conserve at an individual level, even though it is rational if the entire world does it.
That’s true of individuals within a country, but it’s also pointedly true of countries coming to terms with each other, right? India’s going to say, “Why should we forgo the benefits of cheap energy if, whether or not we do it by ourselves, China and the US are burning away? Then we’ll forgo affluence and the world won’t be any better off.” And of course everyone thinks that. So it’s a game-theoretic way of thinking.
You mentioned the term negative externality, which is another way of putting it. An externality, in economics, is at play in a situation in which two agents engage in a transaction but there’s a cost to a third party who doesn’t get the benefit of what either of the two of them is trading, in this case the third party being humanity, suffering from climate change. You know, I buy a gallon of gasoline and it’s great: I get to drive, ExxonMobil gets my dollar. But then everyone else suffers, even though each of us is doing something that benefits us. Because, you know, who pays for my emissions? That’s what economists call a negative externality, and you and I can call it a tragedy of the commons, or a prisoner’s dilemma, a multi-party prisoner’s dilemma.
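The structure Steven describes can be written down as a tiny payoff matrix. The numbers below are invented; only their ordering matters.

```python
# Payoff to you, given (your_choice, what_everyone_else_does).
payoff = {
    ("conserve", "conserve"): 3,  # livable climate, some inconvenience
    ("burn",     "conserve"): 4,  # livable climate AND cheap energy
    ("conserve", "burn"):     0,  # you bear the costs; the planet cooks anyway
    ("burn",     "burn"):     1,  # cheap energy on a cooking planet
}

for others in ("conserve", "burn"):
    best = max(("conserve", "burn"), key=lambda me: payoff[(me, others)])
    print(f"if others {others}, your best reply is to {best}")
# Both lines print "burn": defecting is the dominant strategy, even though
# everyone burning (payoff 1) is worse for all than everyone conserving (3).
```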
Annie: Again, kind of going back to your previous work on the way that we can structure society to make things move forward: how do you think about these sorts of entanglements that make it really hard to get an individual to act in the best interest of society? How can we get the actors involved to be more rational? Do you have thoughts about how to structure that?
Steven: Part of it is binding agreements. Sometimes people have to work hard to wrap their heads around this, but it actually can be to my advantage to voluntarily tie my hands. As long as everyone else is voluntarily tying their hands too, I’m actually better off tying my hands than if I could do whatever I want.
There are cases in which you could actually say: there ought to be a law against what I’m doing. A simple example: I’m Canadian, so I grew up with hockey, and I like hockey examples. Hockey was much more fun to watch when none of the players wore helmets. You could see their hair, and some of them would blow-dry their hair backwards.
Annie: None of them had teeth!
Steven: No, none of them had teeth, right? Especially the goalies. But, you know, they always had the option of wearing a helmet. Why would anyone wear one, though? It impaired their vision, they were likely to overheat, and they’d be ceding an advantage to the guys on the other team who weren’t wearing helmets.
Now, every player may have wanted to wear a helmet, because no one wanted to get a fatal concussion. But no one could be the only one to do it, or they would lose every game; they would not be playing in the first place. So a rule that said you had to wear a helmet, even though it restricted everyone’s options, was something that they all welcomed, because it penalized their opponents too.
So part of it is that if we have something like a carbon tax, or binding—or, actually, they’re not binding—agreements like the Paris Climate Accord, it kind of restricts our options because it restricts the other guy’s options too, and we all have a chance of being better off. But that works only as far as it goes: there is often the temptation to cheat if you don’t have referees, if you don’t have policemen, or even to stay out of the agreement in the first place, which is what happens when there’s no global enforcement, as there can’t be when it comes to nations and climate change. So an even better way to do it is to change the payoffs so that the expected utilities are different.
And you switch the game from a [inaudible] negative externality or a prisoner’s dilemma to one in which the individual incentives are aligned with the collective incentives. That is, to use the jargon, you internalize the externality. Carbon taxes do that. But even better if technological . . .
“There is often the temptation to cheat if you don’t have referees . . . so an even better way to do it is to change the payoffs so that the expected utilities are different. And you switch the game . . . to one in which the individual incentives are aligned with the collective incentives. That is, to use the jargon, you internalize the externality.” – Steven Pinker
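Continuing the sketch above, changing the payoffs (here with an invented tax on burning) flips the dominant strategy, which is exactly what “internalizing the externality” means.

```python
# Same invented payoffs as before, plus a tax levied on burning.
payoff = {
    ("conserve", "conserve"): 3,
    ("burn",     "conserve"): 4,
    ("conserve", "burn"):     0,
    ("burn",     "burn"):     1,
}
TAX = 2  # hypothetical charge for burning, whatever the others do

def taxed(me, others):
    """Payoff after the externality is internalized."""
    return payoff[(me, others)] - (TAX if me == "burn" else 0)

for others in ("conserve", "burn"):
    best = max(("conserve", "burn"), key=lambda me: taxed(me, others))
    print(f"if others {others}, your best reply is now to {best}")
# Both lines now print "conserve": individual and collective rationality
# coincide. Cheap clean energy does the same job by raising the payoff
# to conserving instead of lowering the payoff to burning.
```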
Annie: Tax credits would do that as well, right? For the individual, a tax credit would do that.
Steven: Tax credits would do it, but then best of all would be if clean energy was cheaper than dirty energy.
Annie: Then you’re good.
Steven: Then you’re good. Then what’s individually rational is also collectively rational. And that’s why I think that our best hope of solving climate change is inexpensive, clean, abundant energy, where people acting selfishly also do what’s altruistic as a byproduct.
Annie: Well, I think what this brings up, which is so insightful, is that some of us intuitively feel that the sort of tut-tutting (“You shouldn’t do that because it’s not good for everybody”) is going to get everybody to fall in line. Right? Sort of in the same sense as a parent saying to a child, “That’s bad. Don’t do that. Say please and thank you.” And what you’re saying is: look, we have to accept the fact that individuals are going to try to act in what their rational best interest is. They’re not necessarily going to think about what’s in their best interest in the long run. They’re going to be thinking short run: What’s happening in my lifetime? What’s good for me now? What’s going to make me richer now? What’s going to make me happier now? And we need to accept that not everybody is going to say that altruism makes them really happy.
Steven: Yeah. In fact, that’s getting back to a realistic understanding of human nature. There are some people who call themselves progressive who hate this way of thinking. But you’re just not going to improve society if you are counting on everyone acting against their own interests. And you can say, oh, well, why not? Do we have to act in our own interests? You know, we don’t have to. But, realistically, people are going to. They might be a little bit altruistic, and a lot of us do make various little sacrifices; you know, we bring our own coffee cup instead of using a paper cup and throwing it out. But those sacrifices just don’t add up to saving the world, considering the massive amount of energy that everyone needs and wants.
And so some of it does come down to virtue signaling. And thinking in scale . . . this is another theme that, of course, any student of rationality has to deal with: humans aren’t so good at conceiving of quantities, of orders of magnitude. If you think that taking a shorter shower is going to save the world, well, it’s a nice thing to do, but it’s just not going to add up. Even if everyone took a slightly shorter shower, which they probably won’t, but even if they did, the gigatons of CO2 that we have to reduce aren’t going to be reduced that way.
Annie: And one of the things that I might add is that when we think about a carbon tax, or tax credits, or just getting alternative energy to be cheaper, which on a global scale would be a really good thing to do, even something like this [metal] bottle of mine is actually very similar in nature. Within certain groups of people, it’s good for me to have my, you know, metal bottle, because it signals belongingness to the group. So in some sense you’re still not relying on my own ability to figure out what is best for the world in the long term. But there is now a norm, which is kind of like an agreement such as the Paris Accord, that within my group, I should show up with this.
Steven: Yes, and, I think, I have my coffee cup.
Annie: Exactly. And I think that even there, we ought to concede the point that a lot of this has to do with group belongingness, and with the agreements we have within our groups about the decisions we make about our own behavior. So one thing that I just want to ask you about: I think there’s no doubt that right now, in terms of our scientific sophistication (evidence-based medicine, for example, or where AI might be going), we’re in a place where in some ways it feels like humanity is at its most rational, at least approaching things in this very structured, rational way. And yet, at the same time, it feels like we are also almost at the height of irrationality, in the sense of, you know, the spread . . . I mean, it’s not that there haven’t always been conspiracy theories. Obviously that’s always been true. But the spread, and the vigor with which we believe in these things. So, how do you think about these conflicting trends? On the one hand, it feels like humanity has been moving toward being much more rational. But on the other hand, at least recently, it feels like we’re moving toward being incredibly irrational.
Steven: Yeah, and that was a huge tension that I had to deal with in writing Rationality. I’d tell people I was going to explain expected utility theory and game theory, and they would say, “Yeah, yeah, yeah, but how do you explain QAnon?” So I had no choice but to take that on. And, you’re right: I’d say I think there’s this increase in rationality inequality.
Annie: Oh, that’s such a great way to put it.
Steven: You know, at the top end, there’s amazing rationality. And one prime example would be your collaborators, Phil Tetlock and Barbara Mellers, with their forecasting.
How do you make better predictions about what’s going to happen three years out, compared to asking a bunch of pundits what they think? There are ways of doing it, which they document. But, on the other hand, you’ve got crystal healing and you’ve got, you know, election denial.
So I do think we need institutions of liberal democracy and principles like freedom of speech and human rights. They were invented for a reason. They’re easy to slip away from because they’re not particularly intuitive, but that’s all the more reason to reaffirm them. And we need tools of rationality, such as expected utility, Bayesian reasoning, statistical estimation, acknowledgment of our biases, and active open-mindedness, and to cultivate habits of cognitive hygiene so that we don’t surrender to our System 1 instincts. So, unfortunately, not a simple answer, but it’s not a simple problem.
“We do need institutions of liberal democracy, principles like freedom of speech and human rights. They were invented for a reason. They’re easy to slip away from because they’re not particularly intuitive, but it’s all the more reason to reaffirm them. And tools of rationality, such as expected utility, Bayesian reasoning, acknowledgement of our biases, active open-mindedness, and to cultivate habits of cognitive hygiene.” – Steven Pinker
Annie: No, I don’t think so. And I would add, as kind of an umbrella to that, that I think humans in general are just really uncomfortable with explanations that rely on randomness. You alluded to that in terms of, you know, whether there is a purpose to the universe. When the answer is no, it’s just random, that doesn’t sit well with people. They want to know why.
Steven: It doesn’t sit well.
Annie: They want there to be a causal explanation, a purpose, a reason behind things. And I think we can see this so well when people say it was meant to be. And it’s like, no, you were randomly born at a similar time, and ended up going to the same college, and happened to be in the same class together. And they’re like, no! That can’t possibly be what happened.
So I think that when we think about conspiracy theories, some of it is that there are a lot of random things that don’t actually have any tie to one another, and we’re just really uncomfortable with things happening, you know, randomly. For whatever reason. Does it mean that we don’t have control over our own lives? Or are we built to see patterns?
Well, you know, the way that we perceive human faces would suggest that some of that is true. The way that we see one ball hitting another ball as causing the movement of the second ball suggests that we’re wired to see things this way. And that wiring may have run kind of wild, like a crab that ends up with a claw that’s so big, or a peacock’s tail. Just adding my own thought onto that.
Steven: No, no, I think that’s very—that’s quite profound. Because you do hear people say it. I mean, Oprah said, “There are no coincidences. I don’t believe in coincidences.”
Annie: Yes. And I think everything is a coincidence!
Steven: So, well, you of all people, obviously, would appreciate that. Yes, absolutely. But it is a profound realization that people really ought to be more aware of. Not only are there coincidences; there are lots of coincidences. There are so many logically possible coincidences that lots of them are bound to happen, especially if you identify them post hoc. Like maybe the car in front of you has a license plate number with the last three digits of your phone number. Maybe your birthday happens to be the same as your, you know, your fiancé’s. It’s not cosmic. It’s just that there are so many ways.
“It is a profound realization that . . . there are so many logically possible coincidences that lots of coincidences are bound to happen . . . like maybe the car in front of you has a license plate number that has the last three digits of your phone number. Maybe your birthday happens to be the same as your, you know, your fiancé’s. It’s not cosmic. It’s just that there are so many ways.” – Steven Pinker
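The classic birthday problem puts a number on this. Here is a quick simulation sketch; the group size of 23 is the textbook example.

```python
import random

def has_shared_birthday(group_size):
    """True if at least two people in a random group share a birthday."""
    days = [random.randrange(365) for _ in range(group_size)]
    return len(set(days)) < group_size

trials = 100_000
hits = sum(has_shared_birthday(23) for _ in range(trials))
print(f"P(shared birthday among 23 people) ~ {hits / trials:.2f}")
# Prints roughly 0.51. Any one pair matching is a 1-in-365 long shot,
# but 23 people contain 253 pairs, so some coincidence is more likely
# than not -- especially when, as Steven says, you notice it post hoc.
```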
Annie: There are so many collisions that occur. They’re mostly going to be random, and then you’re going to see a number that matches something that has to do with you. Exactly. So let me ask you about what we do at the Alliance, where we’re really trying to think about taking kids in K through 12 education and helping them become more rational, helping them become better decision makers.
What is a decision? How do you make one? How do you think about the world? How do you think about data? What are the inputs? You know, how do you think about forecasting? For you, having thought in this space for so long, what decision-making tool or idea or strategy would you most want to make sure is passed down to the next generation of decision makers?
Steven: Yeah, so I think there is more than one, and that’s why Rationality has seven middle chapters, each one on a different tool. One of them is on game theory. One of them is on expected utility. One of them is on Bayesian reasoning. Of them (and I’m sure you agree with this), I think a sense of probability is right up there, including an awareness of the prevalence of coincidences, especially ones identified post hoc. That’s got to be one of the places to start. Logic, though, is another one. And it can’t just be in school, although it ought to start in school. And the natural question always arises: well, if you’re going to have more probability theory in school, what are you going to subtract?
Annie: Trigonometry. I have the answer.
Steven: Yes. I couldn’t agree more. And I thought I couldn’t use that example anymore. I learned trigonometry, and I’m glad I did learn trigonometry. But surely, I thought, they don’t teach trigonometry anymore? But they do. They still teach trigonometry!
Annie: They do. Yes. They still teach trigonometry.
Steven: They still teach trigonometry. They still teach it. So I have nothing against trigonometry. I love trigonometry.
Annie: No. If you’re going to be an engineer, I would like you to eventually learn trigonometry, just FYI. But I don’t think you need to learn it when you’re 12.
Steven: It’s not as important as probability.
Annie: You know what I think is actually interesting on probability, obviously an answer that’s near and dear to my heart, is that when we’re looking at these LLMs (Large Language Models) that are being trained on all of this data, data generated by humans, one of the places where they’re very bad is probability.
Steven: Oh, is that right?
Annie: Yes, you can talk to Gary Marcus, your own student.
Steven: Yes. My former student. Yes.
Annie: Yes. Who knows all about this. So they’re actually not very good on probability. Should that surprise us? Humans aren’t particularly good on probability either, and the models are obviously being trained on data sets that come from humans. So, understanding that the world is probabilistic: for me, when we’re thinking about expected utility, embedded in it is the understanding that the world is probabilistic, that there’s a set of outcomes that could occur, each of which has some probability associated with it.
Same thing with game theory: we have to embrace probability in order to be good game theorists. And Bayesian thinking, well, that is probability. When we think about rationality, probability is embedded in all of it, a through line through everything.
Steven: Yes. And it does lead, as you pointed out, to real human wisdom: not overattributing coincidences to underlying causal processes which don’t, in fact, exist, and living your life in full recognition of its risky nature and of the fact that our horizon of predictability is limited. There are things that we can’t predict. Chance and contingency play enormous roles in our lives, and we ought to, if we’re wise, try to steer the odds. I mean, I feel silly saying this to you, since this is kind of what you do, but it isn’t just poker.
“Chance and contingency play enormous roles in our lives, and we ought to, if we’re wise, try to steer the odds.” – Steven Pinker
Annie: No, that is why I write about it as applying to everything.
Steven: Which you do, wisely. You know, thinking in poker terms can lead to a happier, more successful, more satisfying, and more fulfilling life.
Annie: I think that’s absolutely true. Luckily, valuing and applying rationality is actually one of the Four Learning Domains of Decision Education, so we’re trying.
So just to finish up. First of all, this has been so amazing. I feel like, you know, I obviously met you so many years ago. And then in different ways our worlds have collided and kept us in touch, and it’s one of the luckiest things that has happened in my life. So I really appreciate you. If listeners want to go online and learn more about your work or follow you on social media, where should they start?
Steven: Oh, well, I actually did my own podcast series last year called Think with Pinker. It was on the BBC.
Annie: Love it.
Steven: And it is available to download, including separate episodes with your collaborators Phil Tetlock and Barbara Mellers. I had conversations with them and with Daniel Kahneman, but also with people who apply principles of rationality in their everyday lives. I had a conversation with Bill Gates on climate, with the mathematician Hannah Fry, with Elizabeth Loftus on memory, with the judge Nancy Gertner on eyewitness testimony in the courtroom, and a conversation on investing with Charlie Munger, Warren Buffett’s right-hand man. So that was good fun.
Annie: Okay, well, everybody should definitely go listen to every episode of this podcast. That is a star-studded cast.
Steven: So that would be where to start. And I have, you know, lectures on my website, and articles, and then, you know, my books are where the story is told in the most coherent way.
Annie: Awesome. So for any books or articles that were mentioned today, or Steven’s podcast, please, everybody, check out the show notes on the Alliance site. We’ll make sure that we link to all of those. And just as a final question: outside of your own books, what book would you recommend to our listeners who are looking to improve their decision-making? You have to pick one. I’m going to make you pick one.
Steven: Oh, I have to pick one!
Annie: I know. It’s so hard.
Steven: Yeah. Well, the first one that comes to mind is Superforecasting by Philip Tetlock and Dan Gardner . . .
Annie: That’s the one I always pick.
Steven: Really? Although your own book, I would have to add, and this is not just the personal connection, would be up there as well.
Annie: Well, thank you. I always pick Superforecasting because I feel like at the heart of every decision is a forecast. And so you need to unlock that fundamental piece as you’re thinking about how to become a better decision maker.
Well, thank you so much, Steve, for joining us. I was so excited when I reached out to you and you said yes, you would come join and have this conversation with me. It has completely exceeded my expectations.
Steven: The pleasure is mine, thank you so much, Annie!
Annie: Thank you!