
How Misinformation Exploits Our Biases

Q&A with Alex Edmans, London Business School finance professor and author of May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases – And What We Can Do about It

Alex Edmans is a Professor of Finance at London Business School, where his research interests are in corporate finance, responsible business, and behavioral finance. He is the author of May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases – And What We Can Do about It, which came out in the US on May 14.


I had a fantastic discussion with Alex about misinformation—the biases that make us susceptible to it, his classification of the errors we make in interpreting information, and things we can do to avoid being taken in.

The prevalence of misinformation and the futility of regulating producers

Annie: Obviously, there’s been a lot written about misinformation recently, particularly with the explosion of social media. But I feel like you are coming at this from a very interesting perspective, not just thinking about the production of misinformation, but also the way that we, as human beings, are wired to be susceptible to it, which is not something that I’ve seen people write about that much. How do you think about that interaction between the way that we’re wired cognitively and the information ecosystem that we live in?

Alex Edmans: Misinformation is everywhere; it’s being produced faster than we can catch it. When people think, “What’s the solution?”, they often believe it should be regulation. The government should prosecute misinformation, or professional bodies should kick out scientists doing fraudulent work, or publishers should stop publishing books full of misinformation, and so on. But I think this is unrealistic, because there’s so much misinformation. You can’t have the government or professional bodies catch everything.

Instead, I’m turning it inward, to look at ourselves. We fall for misinformation because of our biases, and I focus on two. The first is confirmation bias: we latch onto something uncritically if we want it to be true, and we reject something out of hand if we don’t like what it says. The second is black-and-white thinking, where we are predisposed to thinking that something is always good or always bad. We get swayed by extreme bits of information.

By turning to our own biases, I’m trying to highlight what readers can do to make sure that they are not swayed by misinformation, rather than relying on the authorities to catch it, which I don’t think will happen.

The first bias: Confirmation bias (biased search and, especially, biased interpretation)

Annie: What you’re saying about confirmation bias is so interesting. I want to dig into that a little bit. As we’re coming across information in our lives, I think that when people are generally talking about confirmation bias, they tend to be thinking about what you call in the book “biased search.” Like, I’m watching news sources that I like and I’m not watching news sources that I don’t like. If I’m a Democrat, I’m watching MSNBC. If I’m a Republican, I’m watching Fox. When I’m on social media, I’m following people that I like and not following people that I don’t like. I’m seeking out information that’s going to confirm the things that I already believe.

But you’re digging deeper into that and talking about this other aspect of confirmation bias, “biased interpretation,” the way that we process information once we see it. Can you talk about that?

Alex Edmans: Certainly. The two types of confirmation bias are “biased search” and “biased interpretation.” Biased search is what people typically think about when they think about echo chambers. I’m only going to speak to fellow Republicans if I’m a Republican or fellow Democrats if I’m a Democrat. You might think, then, that a solution is a diversity of different opinions. Make sure that you are reading both sides of a debate. If you look at some newspapers, the Wall Street Journal, the New York Times, they might have a debate where there’s a pro and there’s an anti and that tries to give a balanced viewpoint, but that unfortunately doesn’t solve the problem if we have biased interpretation.

What do I mean by biased interpretation? Even if we see views on both sides, we will latch onto the view that we like and not question it. And for the view that we don’t like, we’ll apply a lot of scrutiny. We’ll be very asymmetric as to how we will scrutinize different viewpoints.

And why is that interesting? We often think that the solution to misinformation is to put all the facts on the table: if I’m right wing and you’re left wing, then we will agree because we see the same information. But we won’t. I will latch onto something opposing gun control and you are going to latch onto something supporting gun control. Therefore, just making information available is not going to work, because of biased interpretation and biased search. We need to think about our own biases when we look at the information.

Annie: The way that I describe it when I teach it is: when you see something that agrees with you, you’re cool with it. When you see something that disagrees with you, if you read it at all, by the time you’re done you’ve written a dissertation about why it’s completely wrong. The methods are wrong. The way they’re thinking about it is wrong. They haven’t considered countervailing evidence or anything. It’s about the level of effort we’re putting into disproving something we don’t like.

Your brain on information you like vs. information you don’t like

Annie: You also point out that there are things happening physiologically and neurologically that are giving us real emotional distress from reading or hearing information that we don’t like, and we get a real high when we see stuff that we do. You write about some great research in this area. Can you talk about what’s happening in our brains that’s reinforcing that cycle?

Alex Edmans: Absolutely. There are some great scientific studies where they use MRI scanners with participants to see what happens to brain activity when you see something you like or something you dislike. If you see something you dislike, the part of the brain activated is the amygdala. That is the part of the brain which leads to a fight-or-flight response. In contrast, if you see something you do like, the striatum is activated. That’s the dopamine-rich part of the brain, the same part of the brain activated when we have a nice meal or go for a run.

Why do I think this is so interesting? Because it applies to many of the types of misinformation that I highlight in the book, such as correlation versus causation or reasoning by anecdote. Conceptually, we all know that correlation is not causation, and we know that a single example could have been cherry picked out of thousands. And indeed, when we find a study we don’t like, we apply that skepticism. We say, “Is this correlation or is this causation? Did you hand pick that example?” But we suddenly forget all of that when we see something we do like.

The nice analogy here is to System 1 vs. System 2 thinking, described by the late Danny Kahneman. In the cold light of day, System 2 is in operation. But when our emotions are triggered, System 1, our irrational thought process, takes over. The first step is not to impart knowledge or tell you loads of statistics, but to make you aware of your biases, because often you have the knowledge within yourself. It’s just that you’re not deploying it, because you are being very selective about when you want to be discerning.

Annie: It feels like there’s a bad reinforcement cycle happening. If you see information that you like, it’s giving you a dopamine hit and we know that we really seek that out. That’s why we’re clicking on things all the time. If you see something that you don’t like, your amygdala lights up. You’re actually in distress now. As you’re seeking out information that then can reaffirm your beliefs, when you find that argument or you find that information that allows you to dismiss it, you get back to getting the dopamine hit. Is that fair?

Alex Edmans: That’s absolutely fair. This affects both biased search and biased interpretation. When I find something that I like, I get this dopamine hit. Because I have that dopamine hit, I forget to be discerning. I don’t ask the questions I would otherwise, like whether this is correlation or causation. I’m selective in how I deploy my knowledge. You might think the smarter you are, the more likely you are not to fall for misinformation. But the evidence suggests this is not true because you deploy your discernment selectively.

Smart, successful business leaders surely won’t fall for this, right? Wrong

Alex Edmans: And why do I think this is important? First, many readers might think, “I’m so smart. I’m not going to fall for bad cases of misinformation, like drinking fish tank cleaner as a cure for coronavirus without checking.”

Number two, you might think, “Even if the person on the street is fooled, the people who run companies and run governments are the smartest people.” But this is not true. And I think people might have certain views as to presidents and CEOs not making correct decisions. Why? Even though they might be smart, the smarter you are, the more selective you might be in dismissing stuff. Silicon Valley Bank, for example: their own models predicted there would be a huge potential failure if interest rates rose. What did they do? Because of their confirmation bias, they thought, “Our models must be wrong; we won’t ever fail. Let’s change the assumptions in the model to give the result that we wanted, which is that our bank is safe.”

Same thing with Deepwater Horizon. BP ran three tests. All three suggested it was not safe to remove the drilling rig, but then they ran a different test which gave them the pass they wanted. So they rationalized away the other tests.

Annie: Deepwater Horizon is the rig where the BP spill occurred in the Gulf of Mexico. I think that’s so interesting, that we’re so wired this way. When we have something that we wish to be true and the facts contradict it, instead of changing our belief, we’ll find a way around the facts. You gave two good examples of that, SVB being one. They literally said, “Our model must be wrong,” because it’s so clear they have to believe in the stability of their own bank. Then, with BP, can you just go into that in a bit more detail? The stakes there are incredibly high. It’s incredibly dangerous for the workers. And it was so surprising to me, reading that example in your book, their ability to dismiss that evidence.

Alex Edmans: Absolutely. Deepwater Horizon was the drilling rig. It had drilled a well in the Macondo reservoir in the Gulf of Mexico. After you finish that, you need to remove the rig, but before doing that, you need to check that it’s safe. You run what’s known as a negative pressure test. In layman’s terms, you try to bleed the pressure all the way down to zero and you hope that it stays at zero. But they ran three tests and all three times it rebounded to a huge amount, suggesting that it wasn’t safe. But Deepwater Horizon was a very successful drilling rig. It had drilled many other wells and had a stellar safety record, seven years without a single lost-time incident. They were also under a lot of pressure because they were already six weeks behind schedule.

Rather than admitting to safety problems with the well, they thought the test couldn’t possibly have been right. They invented something they called the bladder effect to try to explain why the tests had failed. In my context, this would be like a student failing an exam and coming up with some convoluted reason to blame the instructor. They wanted some reason to ignore the tests. So they ran a different type of test, in a completely different way. That test gave a pass, which allowed them to proceed with removing the rig, and it sealed the well’s fate.

Afterwards, there was an official study into this disaster and the chief investigative counsel said that the idea of a bladder effect made no sense. No other sensible engineer had ever come up with it to explain a failed negative pressure test. It was something that they just invented at the time because, as you say, Annie, they knew the answer they wanted to get, so they reverse engineered an explanation to support that answer. They made up this bladder effect to explain the failed test.

Competing incentives

Annie: We have the intuition that when incentives align for you to get to the truth, you’re going to be more likely to get to the truth. In the BP situation, it feels like the incentives were aligned to get to the truth because, if the well isn’t safe, the whole thing’s going to blow up. People are going to die, oil is going to spill all over the Gulf, and it’s going to be incredibly bad for BP’s reputation. With SVB, it’s obviously in their best interest not to have the bank fail, except that we have these competing interests: What are our beliefs? What is our identity? How are we feeling in the moment about whether we’re right or wrong? It feels like those end up trumping what we would think, from an objective standpoint, are the incentives that would align your behavior with the truth.

Alex Edmans: That is spot on. Now that you’re saying this, I wish I’d emphasized it a bit more in the book. Often, when we see a disaster happen, we blame the incentives. When banks failed, the popular narrative was that bank bonuses rewarded executives for failure. The reason I think biases are at the heart of it is that if it were just bad incentives, it would be easy to fix: you just change the incentives. But people often genuinely think they’re doing the right thing. They genuinely thought that Deepwater Horizon was safe, that they could remove the rig. You genuinely think that the failed test was flawed, and you come up with a different type of test, just like a parent might genuinely think that their child deserved to pass the test and the teacher just couldn’t grade it correctly. The key here is that because we are biased, we anchor onto our preferred explanation or our preferred result, and we don’t respond to information in a rational way.

The smarter we are, the harder we fall (for motivated reasoning)

Annie: There were a couple of studies that you cite in the book that seem to fit into this model that we’re talking about. Our beliefs build out our identity, but not all beliefs are created equal. Some are more identity-defining than others. What we should then be able to predict is that where there’s a belief that doesn’t really define you, we should expect to get more rational behavior than for beliefs that are more defining.

There are two studies I would love for you to touch on here. The first relates to the intuition that the more expertise or knowledge we have on a topic, the more rational we’re going to be in interpreting confirming or disconfirming information. But the study that you cite shows that the more we know, the worse it gets for us. I’m thinking this also rolls back into this idea that the more we know, the more those beliefs become part of who we are.

Alex Edmans: That study I described in the book, and this is on biased search, is by Charles Taber and Milton Lodge. The researchers, who knew the views of undergraduate participants on issues such as gun control, asked them to research gun control. Importantly, they told them to research it in an even-handed way so that they could explain the issue to other people. They were given 16 sources of information: 8 from pro-gun-control sources, like Citizens Against Handguns, and 8 from sources against gun control, such as the Republican party. Out of those 16 potential sources, they could only select 8 bits of information.

They found, not surprisingly, that participants were choosing information that confirmed their prior belief. But that skew was even stronger for more knowledgeable, more intelligent people.

Why might this be? It might be that a more knowledgeable or intelligent person simply believes that they are right: “I believe I’m right to be anti-gun-control, and therefore if I want to get informed, I want to read the anti-gun-control sources, because they’re the smart people, because I know that I’m right.” Whereas those who are less sophisticated or less knowledgeable might be more willing to doubt their initial belief and to be more even-handed.

Political beliefs and the role of identity

Annie: The second study is on political beliefs versus other types of beliefs and how that looks in the brain. We know that with politics, your political beliefs very much form your identity and in a way that many other types of beliefs don’t. Does the brain react the same way to political vs non-political things?

Alex Edmans: Yes, how do we make sure that we disentangle cultural identity from the actual message? There’s an interesting study on climate change. Climate change is often seen as a Democrat vs. Republican issue. If climate change is presented as a Democrat vs. Republican issue, as in, say, the documentary An Inconvenient Truth, featuring Al Gore, then as a Republican I might think, “I must oppose this irrespective of the evidence, because otherwise I would be seen as a supporter of Al Gore.”

What might the solution to that be? Number one, it might be to highlight that climate change could be fully consistent with Republican values. If you say, “The solution to climate change is not heavy regulation or taxation, but innovation,” that’s something Republicans are more likely to support. Dan Kahan and colleagues found that right-wing participants were more willing to accept climate change as a serious threat if the remedy presented was geoengineering rather than regulation.

The second thing would be to ensure the message on climate change is given by people that Republicans are more likely to trust. Another Kahan experiment, about mandatory vaccination against HPV, also involved a Democrat vs. Republican issue. They gave student participants information on both sides from fictional experts, but they provided biographies, book titles, and pictures of the sources. Because of the biographies and the titles, it was easy to identify one person as a Republican and one as a Democrat. If the climate action message was given by a person whose profile resembled a Republican, then Republican subjects were more likely to moderate their views on climate change. This is why, again, despite An Inconvenient Truth being a great documentary, winning lots of awards, and being full of information, the fact that it featured Al Gore immediately inoculated Republicans against that message.

Annie: It sounds like one of the things that you’re saying is that these identity issues are affecting our open-mindedness to information, and not just in a willful way. Neurologically, there’s all sorts of stuff happening that just doesn’t feel good when we receive messages that are attacking us. And you think one of the solutions is to have the message delivered by somebody you’re more likely to be more open-minded to, because they’re within your own tribe.

Alex Edmans: You’re absolutely right, Annie. It’s not that we’re bad people and that’s why we’re acting that way. It’s just that if the message comes from somebody more familiar, we are more likely to listen to it, just like we trust our friends a bit more than we trust strangers. I don’t think this is necessarily rational, but it is how trust works. These are human relationships, and if somebody is from your own group, you are less suspicious that they’re doing this for political reasons. They’re doing it because the information suggests that climate change is a really serious thing to address.

Black-and-white thinking

Annie: I think that’s a nice segue to the companion bias to confirmation bias, which is black-and-white thinking. Can you explain where black-and-white thinking can lead us astray and why it makes us susceptible to misinformation? And, obviously related, why being smarter or more knowledgeable isn’t a shield and can, in fact, be a magnet for black-and-white thinking?

Alex Edmans: Let me start with what black-and-white thinking is. It’s the idea that something is always good or always bad. On the topic of sustainability, I might think not only that a moderately sustainable company is better than an unsustainable company, but that the more sustainable you are, the better your performance, without limit. And that anything that counts as sustainable, be it climate change or biodiversity or DEI, will improve a company’s performance.

Why might knowledge or sophistication exacerbate black-and-white thinking? Well, I also say to myself, “As a finance professor, there are loads of things that I could have built my career on. Why did I end up focusing on sustainable finance? It must be that the evidence is uncontroversial. Therefore, if anything was to even suggest some nuances in whether sustainability pays off, I think that must be wrong. Sustainability must be a hundred percent correct because that’s why I’ve chosen to build my career around it.” This is why if something has a pro-sustainability message but suggests that after a certain point you are overinvesting, I am going to be anti that message, and latch onto something which says more is always better.

Annie: When I was reading this section on black-and-white thinking, I was thinking about ambiguity aversion. We don’t like to live in uncertainty. I was thinking about the example that you give about the Atkins diet. For people who don’t know, the Atkins diet is, literally, don’t eat any carbs. It’s been a wildly popular diet. But to your point, the message, “carbs are bad,” doesn’t represent the ambiguity and the complexity of carbs versus protein or fats, because it’s not true that carbs are bad, period. It’s much more nuanced than that. But it’s easier to send out this message.

Ambiguity aversion causes us to like these very clear-cut messages. And that creates, correct me if I’m wrong, a vulnerability to those types of messages, which in turn gives producers an incentive to send out messages that are black and white.

Alex Edmans: Absolutely. We respond more naturally to something which is simple. The idea of “avoid all carbs” is a simple diet to follow, rather than “make sure carbs are between 30% and 50% of your daily calories.” The more nuanced idea would require me to count my carbs, my fats, and my proteins, and convert them to calories, taking into account the fact that they have different caloric contents. That would be complex. Another message which is not black and white would be that complex carbs are good and simple carbs are bad. I would then have to read not only the carbs label on a packet but break that down into complex versus simple carbs. The simpler the message, the more likely it is to be received. If you are the producer of information, you want to give as simple a message as possible: “Avoid all carbs.”
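To make that asymmetry concrete, here is a minimal sketch, not from the book, of the bookkeeping the nuanced rule would require. It assumes the standard approximations of 4 kcal per gram for carbs and protein and 9 kcal per gram for fat, and the gram amounts are invented for illustration; “avoid all carbs” requires none of this arithmetic.

```python
# Illustrative only: the bookkeeping behind "carbs should be 30-50% of daily calories",
# using the standard approximations of 4 kcal/g for carbs and protein and 9 kcal/g for fat.

def carb_calorie_share(carbs_g: float, protein_g: float, fat_g: float) -> float:
    """Return the fraction of total calories that comes from carbohydrates."""
    carb_kcal = carbs_g * 4
    protein_kcal = protein_g * 4
    fat_kcal = fat_g * 9
    return carb_kcal / (carb_kcal + protein_kcal + fat_kcal)

# Hypothetical day: 250 g carbs, 100 g protein, 70 g fat.
share = carb_calorie_share(250, 100, 70)
print(f"Carbs supply {share:.0%} of calories")  # ~49%, inside the 30-50% band
```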

Annie: In the political sphere, I feel like you see that so much in the messaging. When, for example, the two sides are “completely open the border” or “all immigration is bad,” that’s black and white. There’s so much nuance in between those two positions, but those two tend to be the positions that people take because they’re simpler.

Alex Edmans: Yes, absolutely.

The ladder of misinference

Annie: I want to start getting into “the ladder of misinference,” starting down at the bottom, and let you walk me through it. I want to start with an example that I saw that really made me upset. I read an article by a journalist that had gone viral. It said that feeding your kids fish makes them smarter. That was in the title of the article. The article went through how there was a study showing that if you feed your kids fish, they’re smarter. I’m a scientist. I looked at the source material and it said nothing of the sort. What it said was that there was a correlation between eating fish, intelligence, and sleep in kids. All three of those things were correlated with each other. The researchers actually said in the paper that they couldn’t untangle what causes what. They were just pointing out a correlation. I think I wrote a blog post at some point with 17 different titles the article could have had. But this, I think, goes to the lowest rung of the ladder, which is, “A statement is not fact.” Can you walk through the ladder a little bit?

Alex Edmans: Absolutely. What is the ladder? I wanted to categorize the different types of misinformation into four buckets. As you mentioned at the start, I’m not the only person to write about this. I’ve learned a lot from those other books, but some of them give you a huge laundry list of all different ways in which you might be misinformed, and that can be difficult for the reader to remember and to put into practice. This taxonomy, hopefully, is useful to the reader.

The first rung of the ladder: A statement is not fact

Alex Edmans: The first, basic level of misinformation is, “a statement is not fact.” People can just misquote something. This is not about statistics. It’s not about correlation vs. causation. It’s about whether you are being correctly quoted. As you mentioned, the journalist is quoting this study as saying something that the authors of the study never even said. Why can you get away with that?

People just like that message, that eating fish makes your kids smarter. When people think that might be the case – fish seems healthier compared to, say, ribs or something full of saturated fat – they’re not going to bother checking. They might see that a study is referenced and say, “They have a study. Therefore, there must be evidence behind this.”

Annie: I would love it if you could mention the way that you were used in this way, to your surprise.

Alex Edmans: That actually happened to me because there was a UK government inquiry into CEO pay. The government writes a report based on all the evidence which is submitted to the inquiry. When I read the final report, it said that CEOs don’t deserve their high pay because they have no effect on firm value. I thought that statement was odd, because all of the research I’m aware of says that CEOs do add value.

I thought, who submitted this evidence, which got into the final report? I looked at the footnote: Professor Alex Edmans. I was shocked! I thought, did I just make a typo? Maybe I’d added the word “not” somewhere by mistake. But then I read my initial submission. I said nothing of the sort.

This is why these things are dangerous. People reading that would’ve seen the statement, the footnote, and my name. People will know that I work on executive pay. If I’m saying this, it must be authoritative. But that was never supported by my own evidence.

This is also an example of black-and-white thinking. Even if CEOs do make a difference to firm value, they could still be overpaid. It could be a CEO who adds $1 million to firm value but is paid $5 million. You don’t have to claim that CEOs add nothing in order to support the idea that CEOs are overpaid. But because that adds a little bit of a nuance, that doesn’t play into black-and-white thinking. The most unambiguous message that you can give is the one most likely to be picked up by readers. They said that CEOs make no difference. And that’s why we need to regulate.

Annie: That’s true of the study that I saw too. It’s a very simple message, “fish makes you smarter,” when the actual thing is there’s a correlation between the three things. We don’t know if any of these are causal to each other. It could be that rich people feed their kids fish or that rich people are able to allow their kids to sleep longer. There could be a billion different messages that you could have. Readers are going to respond more and click on it more if it’s boiled down into something that’s super simple. It’s interesting the way that plays into black-and-white thinking as well.

Alex Edmans: And the reader thinks, I’ve learned something. I can do something to make my kids smarter. I’m going to share this with my friends. This is practical. Whereas the correct interpretation that you mentioned doesn’t give me the same clear, actionable takeaway.
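To see how the fish example can mislead, here is a minimal, purely hypothetical simulation (the coefficients are arbitrary, not from the study) in which a single hidden factor, call it household resources, drives fish consumption, sleep, and test scores. All three end up correlated even though none of them causes any other.

```python
# Purely illustrative: a hidden confounder ("household resources") drives fish consumption,
# sleep, and test scores, so the three observed variables are correlated with each other
# even though none of them causes any other.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

resources = rng.normal(size=n)                 # hidden confounder (e.g., household income)
fish = 0.6 * resources + rng.normal(size=n)    # fish servings per week (standardized)
sleep = 0.6 * resources + rng.normal(size=n)   # hours of sleep (standardized)
scores = 0.6 * resources + rng.normal(size=n)  # test scores (standardized)

print("corr(fish, scores): ", round(np.corrcoef(fish, scores)[0, 1], 2))
print("corr(fish, sleep):  ", round(np.corrcoef(fish, sleep)[0, 1], 2))
print("corr(sleep, scores):", round(np.corrcoef(sleep, scores)[0, 1], 2))
# All three correlations come out around 0.26, yet intervening on fish would change nothing:
# "feeding your kids fish makes them smarter" is the wrong inference from this data.
```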

“Studies say ….”

Annie: It seems like one of the big problems with misinformation is the phrase, “Studies say ….” The idea that there’s some scientific study behind a claim gives it a veneer of truth. One of my favorite accounts on Twitter is @JustSaysInMice. It will tweet some article about alcohol consumption or fatty acids or sunlight or whatever that is making broad, sweeping claims about humans. And then all it adds is “IN MICE.”

It’s a pretty big leap to take, from something in mice to something in humans, but people like to take those leaps. It’s a simple message, it’s exciting, and so on and so forth. You obviously talk about it in the book. People just shouldn’t be quoting, “Studies say ….” If you don’t click through and read the study itself, what you don’t find out is that Alex Edmans never said that and, in fact, said the opposite.

Alex Edmans: The first rung of the ladder is just to check what was actually said.

The second rung: A fact is not data

Annie: That brings us to the next part of the ladder, “a fact is not data.” This is something that I think about a lot. The first line of defense that you have against misinformation is to fact check. When I saw that feeding your kids fish makes them smarter, I read the original article so I could fact check it. What’s so important about the second rung is that fact checking only gets you so far. A lot of pieces of information that we see will absolutely survive a fact check. Can you talk a little bit about that as a problem?

Alex Edmans: Certainly. When people think about misinformation, they think the solution is to check the facts. Many people might say, “I wouldn’t fall for the first rung of the ladder,” but this is where the other rungs come into play. Even if something is a hundred percent accurate, we may make misleading inferences from it.

This goes back to the start of our conversation about why I don’t think regulation can deal with this issue. Regulation can make sure that you say the truth, but even if something is true, there could be misleading inferences. Let me illustrate this with an example. Simon Sinek’s famous TED talk, “Start with Why,” has over 60 million views. He says Apple is successful because it started with why. Let’s say Apple did start with why, but how do we know that starting with why in general leads to success? There could have been hundreds of other companies that started with why and didn’t become successful, but Simon Sinek is never going to quote those examples because they’re not in accord with the message he wants to give. Similarly, I’m sure you could find examples of people who became multimillionaires without going to university. But saying that not going to college makes you successful is a dangerous message. Those are just hand-picked cases.

Annie: So, the first problem here would be reasoning by anecdote, by a single narrative. We are narrative thinkers, and there’s survivorship bias. If we only look at the people or the companies that succeeded and try to draw inferences from that, we’re not looking at the gazillion people who didn’t succeed. And if we don’t have both sides of that coin – in other words, whatever the control group would be – then we can’t generally draw the inferences that we want to from the data. I imagine that you must want to pluck your eyes out at all the case studies, because they’re one of the favorite teaching tools in business. A case study of a company gets published, and then people try to follow along with those practices.

Alex Edmans: I do. I don’t often teach with case studies. Why? Because if you want to illustrate a concept, you find the case study, the single example, that illustrates it in the most striking way. Let’s say you wanted to claim that democratic leadership, asking your employees what to do, succeeds. You want to find a case in which that does work. But there could be other cases where dictatorial leadership is better. Maybe if you need to move fast, you don’t want to build consensus because there’s no time. If you do, you’re going to fall completely behind.

I still use real-life examples when I teach, but I’ll always make sure that they’re backed up by large-scale data. There’s a scientific study showing, say, that employee satisfaction leads to higher company performance. And here’s an example of a company, say Costco, which put this into practice.
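To illustrate the survivorship-bias point with a sketch (the numbers are entirely made up), suppose “starting with why” has no effect on success at all. A storyteller who looks only at the winners can still fill a book with successful “why” companies; only the comparison with the base rate, and with the failures, reveals that the trait predicts nothing.

```python
# Made-up numbers: success is pure chance, unrelated to whether a company "starts with why".
import random

random.seed(1)
companies = [{"why": random.random() < 0.3, "success": random.random() < 0.05}
             for _ in range(100_000)]

winners = [c for c in companies if c["success"]]
why_among_winners = sum(c["why"] for c in winners) / len(winners)
why_overall = sum(c["why"] for c in companies) / len(companies)

print(f"Successful companies:             {len(winners):,}")         # ~5,000
print(f"'Start with why' among winners:   {why_among_winners:.0%}")  # ~30%
print(f"'Start with why' among all firms: {why_overall:.0%}")        # ~30%
# Thousands of successful "why" companies exist to write case studies about, but the rate
# among winners matches the base rate: without the failures, the anecdotes prove nothing.
```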

The third rung: Data is not evidence

Annie: We know that the narrative itself grounds data so that it’s more memorable. But when someone’s only reasoning by the narrative, then that becomes problematic. The next rung is, “Data is not evidence.” Explain the difference between “data” and “evidence.”

Alex Edmans: When we talked about the second rung, “a fact is not data,” you might think that the solution is to get large amounts of data, not just one anecdote. However, that data could just be a correlation. There could be multiple alternative explanations for it. Think about what evidence is in a criminal trial. Evidence is information that points to one particular suspect but not to other suspects. If a piece of evidence is consistent with multiple suspects, it doesn’t get you any closer to the truth.

Annie: Using the criminal trial analogy, let’s imagine a world where DNA evidence was perfect. If the DNA evidence says that Alex did it, then that DNA is going to match no other person on the planet. If DNA evidence were perfect, or a lie detector were a hundred percent accurate, that would be the gold standard. We know that’s not true, but that would be the gold standard. Instead, we see that a piece of information confirms what we believe is true, or points to a particular suspect, and we stop there. We forget to ask whether it also supports other things that could be true. You gave such a great example of where this really matters, about the amount of time spent behind bars by people who were later exonerated.

Alex Edmans: Yes, there’s a National Registry of Exonerations. They calculated that, in the US between 1989 and the present, 3,500 people later exonerated spent a total of 31,900 years in prison. It’s really saddening.

Annie: This really matters. We end up incarcerating people for the wrong reason.

We can also think about it from a (more low-stakes) business standpoint. Maybe we have evidence that a particular strategy is working, but we don’t think about other explanations. We don’t understand whether it’s causal. We have tons of data, but we don’t actually know if the strategy is working. An example might be that we have some strategy to increase sales of a product of ours that contains vitamin D. Over the course of 2020, sales massively increase. We’ll use that as evidence that our strategy is working, without looking at sales of vitamin D in general in 2020, when covid happened and people thought that vitamin D might help. We don’t ask whether the data supports something else: that it’s not our strategy causing this, that there’s some other factor we’re not considering.

It could be something as simple as that, but it could also be literally putting somebody to death.

Alex Edmans: What matters is not just whether the information is consistent with the suspect. If it’s also consistent with other suspects, it is not evidence. Or, in a non-criminal setting, if it’s consistent with alternative explanations, it is not evidence. It doesn’t help me reach that particular conclusion.
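As a rough sketch of the check the vitamin D example calls for (the figures below are invented, not from the conversation), compare the product’s growth with the growth of the whole category over the same period before crediting the strategy.

```python
# Invented figures: our vitamin D product's sales vs. the whole vitamin D category, 2019-2020.
our_sales = {"2019": 1.0, "2020": 1.8}             # our product, $M (hypothetical)
category_sales = {"2019": 500.0, "2020": 910.0}    # entire category, $M (hypothetical)

our_growth = our_sales["2020"] / our_sales["2019"] - 1
category_growth = category_sales["2020"] / category_sales["2019"] - 1

print(f"Our product's growth: {our_growth:.0%}")       # 80%
print(f"Category growth:      {category_growth:.0%}")  # 82%
# The 80% jump looks like evidence the strategy worked, but it merely tracks the category:
# the data is equally consistent with a covid-driven surge in vitamin D demand.
```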

The fourth rung: Evidence is not proof

Annie: We’ve gone through “a statement is not fact,” “a fact is not data,” and “data is not evidence.” The last rung on the misinference ladder is “evidence is not proof.” What is the distinction between “data is not evidence” and “evidence is not proof”?

Alex Edmans: “Data is not evidence” is a question of what people call “internal validity.” Is this showing something rock solid in the sample that you’ve chosen to investigate? It could be that, within this particular sample, I have ruled out the alternative explanations and shown that eating fish leads to your kids being smarter. However, what was the sample that I studied? Maybe it was just American kids from middle-class families. It could well be that in other countries, or for families of different wealth, there isn’t that link. Maybe it’s not that different types of food matter: if, for example, you’re a poor family, maybe just getting any calories at all matters more than fish versus meat. But if we take one particular study and we want our article to go viral, we want it to be as broadly applicable as possible, so we don’t talk about the context. We present it as a universal truth.

While I call this the difference between evidence and proof, a proof is universal. If I prove the formula for the area of a square, that formula applies to every single square. In contrast, evidence is true only in the context specified. If a woman was murdered and they found that her husband did it, that doesn’t mean that in every case where a woman is killed, it’s the husband who did it. All it means is that in that particular case, the culprit was the husband. We don’t want to overgeneralize.

Annie: Just to ground this with something that people broadly believe to be true, you’ve used the example of the Marshmallow Test studies. In the Marshmallow Test, you put a marshmallow in front of a kid and tell them, “If you wait 15 minutes to eat it, you get a second marshmallow.” They were looking at what’s called delay of gratification or how willing you are to wait for a reward later.

Some kids were good at waiting and some kids were not so good at waiting. Then, they did some longitudinal work and found that the kids who waited longer did better in life: SAT scores, grades, career stability, their families, and so on. There were some very broad claims about willpower and how you do later in life. Is this a good example you could go through to tell us the difference between evidence and proof?

Alex Edmans: Yes. This study has been extremely influential. There was even a Sesame Street episode on the importance of delaying gratification, built on the idea that patience leads you to be more successful later. This immediately plays into our confirmation bias. Why? Because we learn about the tortoise and the hare. We think about patience and restraint. But that study was specifically on children at Stanford University. These were relatively affluent children. That might be a situation where, yes, you do want to be patient and you want to trust the experimenter that if you don’t eat the marshmallow now, you’re going to get two later. But if you are a kid from a less affluent background, the idea of trusting other people and waiting and delaying gratification might not lead to success. If you are in a home with little food and your sibling tells you, “Hey, give me this slice of bread and I’ll give you two later,” and you trusted him all the time, there might not be any food the next day. You would be naive to do that. In situations where you can trust that delaying gratification will be rewarded, yes, it leads to success. But to say that in every setting the kids who are trusting and not impulsive will always be more successful, that’s not going to be the case.

How do we fix it?

Annie: On a societal level, how do we fix this? To your point, regulation isn’t going to help because of the ladder of misinference. You can say that everything in the article has to be true. I don’t think that regulation would stop somebody from saying that feeding your kid fish makes them smarter. Those correlations were correct, within the body of the article.

So let’s focus on the consumer of the information. What can I, as an individual, do to make it so that I’m not as susceptible to this stuff?

Alex Edmans: I think it’s to recognize our biases and question the information. One useful tip is to imagine the study had found the opposite result. If you heard a result that conflicted with your viewpoint, how would you try to attack it? Once I’ve alerted myself that there are other potential explanations, I can ask whether those explanations apply here too, even though the result is in the direction that I want.

Annie: That seems connected to the idea that when you see information, you should stop and ask, what are the other things that could be true given what I’m reading?

Alex Edmans: Correct, try to find out what other things could be true. And if you’re unable to come up with them, imagining the opposite result is a way to get at them. Imagining the opposite triggers that scrutiny. It’s a useful cognitive device to encourage me to be discerning and not to accept things at face value.

Annie: Would you also recommend that, in general, you should be very skeptical of broad, sweeping claims, especially when the producer is telling you, “Studies say …”?

Alex Edmans: I think the bolder the claim in terms of magnitudes or in terms of universality, the more skeptical you should be.

What do I mean by universality? If they say, “Red wine always causes you to live longer,” rather than, “Red wine is good for 40-year-old men who also exercise,” that is universality. Because most studies in scientific journals are very careful about where they apply and where they don’t, if a claim is sweeping, it’s probably not published in a top journal to begin with, or the journalist has over-extrapolated from it. Also, in terms of magnitude, there are certain studies which claim that if you wake up before 5:00 AM, your productivity goes up tenfold. But the larger the claimed effect, the more skeptical we should be. If there were an easy way to increase your productivity by 10 times, everybody would already be doing it. If instead it said waking up at 5:00 AM increases your productivity by 5%, I would be more likely to believe that, because there would be people who say, “Even though I know I would increase my productivity by 5%, I’m still not going to do it because I don’t want to get up that early.” Something like that, where there is a reasonable trade-off, is more plausible than something which is going to increase your productivity 10 times.

Annie: Yeah, I like both of those. The broader the claim, the more universal the claim as well. I also love what you just added, which is the greater the magnitude, the more you should be skeptical of it. And what you said is so important to repeat: If it were that easy, people would already be doing it.

I was just so deep into your book. I was like, this is amazing. It was wonderful talking with you.
