
When mistaken beliefs lead to bad bets on data

To win a poker game, you have to work with whatever data you have, however limited it may be, and then make the best choices possible. Annie Duke, decision strategist and former championship poker player, explains how poker has useful lessons for any IT decision-maker.

Not long ago, Annie Duke was considered one of the world’s top poker players. In 2004, she vanquished a field of 234 players to nab her first World Series of Poker (WSOP) bracelet. A few months later, the now-retired Duke won $2 million in a winner-take-all WSOP Tournament of Champions. Known as much for her wit and style as her strong strategy, she was easily among the most recognizable faces on the poker circuit.

But many fans didn’t know that Duke came into the sport of poker with an unusual competitive edge: a background in cognitive psychology from the University of Pennsylvania.

Duke had planned to become a professor exploring the mysteries of human behavior and decision-making, but a hospital stay derailed those plans. While seeking work, she fell into playing poker. Duke started winning, largely because of an ability to quickly size up opponents and make rapid-fire bets, bluffs, and folds based on her observations of behavioral patterns. Now, Duke spends her time writing, coaching, and speaking on a range of topics such as decision fitness, emotional control, productive decision groups, and embracing uncertainty. She is a regularly sought-after public speaker, having recently keynoted at the Gartner Data and Analytics Summit in Orlando, Florida, as well as many other business, university, and public events.

We asked Duke how business and IT leaders use—and abuse—data in decision-making and about the mistakes they make over and over again. In fact, that’s the subject of her latest book, “Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts.”

You must have observed all kinds of behaviors sitting across tables from the world’s top poker players. What stood out the most as it relates to your current decision-fitness platform?

Any freshman in college taking an introduction to psychology class can tell you that learning occurs when there’s lots and lots of feedback tied closely in time to decisions and actions. That’s how it is in poker.

There are few human activities in which feedback comes more quickly than in poker. Often, within 30 seconds, you bet, somebody does something, you do something else, and so on. Within two minutes, you find out whether you’ve won or lost the hand, and it’s this tight and closed feedback loop.

And yet, what I saw was that many players were actually not really learning. Despite the fact that there was plenty of experience at their fingertips and lots of data to work with in one of the most data-rich environments you could imagine, they were making the same mistakes over and over again. This was a really big puzzle to me, and one that I started to dive deeper into when I first started consulting on decision-making in 2002.

What have you learned about how business and IT leaders use or disregard data in decision-making?

I’ve found people tend to think in extremes.

Some say to themselves, “I have all this experience and it’s particular to me, and I make these decisions because I can somehow see things that others can’t see.” I think there’s a particular leadership style that can go in that direction where there’s a real rejection and distrust of data.

And then it can really go in the other direction where business and IT leaders think data is absolute proof. And the goal would be to completely take humans out of the decision-making process and just have the data tell you what is true.

Both of these approaches are equally mistaken. These extremes tend to become the goal that companies work toward. But in reality, what you want to do is marry the two. You want a really good interaction between human beings and data and technology.

Why do you say that?

Well, first of all, the problem with just believing that data is truth is that it takes human beings to collect the data. You have to figure out what datasets you’re going to pull relevant information from. What questions you should be asking about it. How you are going to frame those questions. What kind of analysis you need to run on it.

Then there are certain things that human beings bring to the table that machines aren’t particularly good at, such as common sense. We’re also really good at resolving ambiguity. If you say, “The elephant is on the dog,” every human thinks the same thing: There is either something weird about the dog (it’s squished!) or something weird about the elephant (it’s a toy). Humans can resolve what that means right away, but algorithms aren’t so good at that.


The third thing to think about in the interaction between human beings and data is that data can be a way to essentially manage career risk. We tell ourselves, “If I believe that the data is true, then I’m going to make a decision based on that data. And if my decision doesn’t work out, then I can kind of throw my hands up and say, ‘Well, this wasn’t my fault because it was in the data.'”

We have to watch out for those kinds of traps and think about how we can create really good interactions between human beings, data, and technology, interactions that pull out actual truth and accuracy rather than the things that can cause distortions.

With your cognitive science background, you’ve given thought to how belief systems sometimes cloud data interpretation. How do false beliefs take us down the wrong decision-making paths?

Here’s the issue: We think we approach data in an objective way, completely open-minded to whatever it’s going to tell us. We believe we use that data to form new beliefs or to calibrate beliefs we already have. Some new piece of information comes in and we say, “Oh, that’s very interesting, and I’m going to parse that.”

But actually, that’s not at all how we handle data. What really happens is that we tend to already have very strongly held beliefs about the world. Those beliefs get woven into our identity. Our cognition is very driven to protect our identity and thus our beliefs. And we cling to our beliefs as if anything else would undermine who we are as individuals or leaders in our fields. Even when presented with an objective truth, it’s hard for us to change our beliefs. Many of us think one human year equals seven years for a dog. Not true. We’ve all heard, and most of us believe, that immigrant names were changed at Ellis Island. Not true. People think that if you want to know whether a man will go bald, you need only look at the maternal grandfather. Not true.

Once we’ve got these beliefs in place, we often notice and seek out evidence to confirm the beliefs we have. This is known as confirmation bias. As inaccurate as our beliefs might be, they are in the driver’s seat, steering how we process information and guiding many of our decisions.

You sometimes talk about how data-driven decisions that don’t work out may not be criticized if you’re transparent about why you made them and how opaque decisions—made in a vacuum—have the opposite effect. Can you elaborate on that?

Yes, absolutely. Sometimes we equate the quality of a decision with the quality of an outcome, which is a fallacy called resulting. Was a result good or bad? Did we win or lose? We use that as if it’s a perfect signal for decision quality. If we had a bad outcome, it must have been a bad decision. If we had a good outcome, it must have been a good decision.

The reason we do that is because it helps us with complexity. It reduces complexity for us cognitively. It can be difficult to look back and analyze decisions that led to success or failure. Resulting is a simplifier. It reduces the cognitive load.

But sometimes we don’t use the quality of the outcome to derive the quality of the decision. Sometimes we allow that it might have been bad luck. We do that when there is a lot of consensus around the decision, when it feels transparently good.

There is a big difference in the way we treat outcomes when the decision feels agreed upon versus when it doesn’t. If you end up with a good outcome from a consensus decision, people say, “Good job.” If you have a bad outcome from a consensus decision, they might say, “Oh, that was bad luck.” It’s like if I go through a traffic light and the light is green and I get into an accident, no one’s mad at me. It is a status quo decision, and it’s transparently good, because society has worked out the cost and benefit of that.

Conversely, when you’re in the opaque non-consensus category, and maybe you’re innovating based on data and you have a good outcome, you’re called a genius. Everyone says you’re amazing. But if you have a bad outcome, you’re an idiot. You made the worst decision anybody’s ever made. They can’t believe you did that, even when reasonable data supported your decision.

The problem with all of this is that people want to avoid the opaque, bad outcome category because they don’t want to get called an idiot. So they seek false consensus and then use data as a shield as opposed to using data to find the truth.

You’ve got to know that even great decisions don’t always yield the best outcomes. There’s always an element of luck or uncertainty you cannot control. I experienced and witnessed a lot of that in poker. And you see it all the time in business and IT. And that’s OK so long as you account for it.

Given how data-driven decisions can go sideways, what’s your guidance for getting them on track?

One of the first things you want to do is allow people on your team to form their own narratives or viewpoints—separately. Then bring them back together after they’ve had time to form their opinions and let them breathe. You want that diversity of opinion about what the data is really telling you. If you have five colleagues looking at the data and they all independently think it says the same thing, your confidence in that interpretation should be higher than if they all came to different conclusions.
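
A quick way to see why independent agreement matters is the toy Bayesian sketch below. This is not Duke’s model; the numbers (each analyst reads the data correctly 70% of the time, a 50/50 prior on the interpretation being right, and fully independent judgments) are illustrative assumptions only.

```python
# Illustrative sketch (not Duke's model): how confidence in an interpretation
# grows as more analysts independently reach it, under assumed numbers.

def posterior_given_agreement(n_agree: int,
                              p_correct_read: float = 0.7,
                              prior: float = 0.5) -> float:
    """Probability the shared interpretation is right, given that n_agree
    analysts independently arrived at it (toy independence assumption)."""
    # Likelihood of unanimous agreement if the interpretation is right...
    likelihood_right = p_correct_read ** n_agree
    # ...and if it is wrong (each analyst misreads at the complementary rate).
    likelihood_wrong = (1 - p_correct_read) ** n_agree
    evidence = prior * likelihood_right + (1 - prior) * likelihood_wrong
    return prior * likelihood_right / evidence

if __name__ == "__main__":
    for n in (1, 3, 5):
        print(f"{n} independent analyst(s) agree -> "
              f"confidence ~{posterior_given_agreement(n):.2f}")
    # Under these assumptions: 1 analyst ~0.70, 3 analysts ~0.93, 5 analysts ~0.99
```

Under those assumed numbers, a single analyst’s read warrants roughly 70% confidence, while five independent, agreeing reads push it toward 99%. If the opinions are formed together rather than separately, the independence assumption breaks down and most of that boost disappears, which is why Duke stresses forming viewpoints separately first.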

Also, when you do see disagreement, you can start to get them to think from outside of their own beliefs. Challenge them to switch sides of the argument to consider that different perspective.

You can also form a red team (a group that challenges an organization to improve its effectiveness by assuming an adversarial role or point of view) to go off and debate why preliminary conclusions might be wrong. Building these processes into your team makes it much more likely you’ll catch errors and see where you may need to calibrate opinions. Again, what we’re trying to do is not get caught up in our own beliefs so much that they are the only thing driving our decisions.

Your book title leads off with the phrase “Thinking in Bets.” Why that title?

A decision is a bet on a particular future. When we think about a decision as a bet, it gets us to this question of, “Do you want to bet on that belief that you have?” That naturally causes us to go through a series of other questions, such as “Where and when did you form the belief?” “What evidence do you have for the belief?” “How reliable is the source of the evidence?” “What does the person challenging me to bet—maybe my competitor—know that I don’t know?”

Basically, this process forces you to do an internal audit of your own belief system, and it also causes you to move to an outside view—to get outside of your own belief system—and ask, “How would somebody else view this decision?”

Scottish physicist James Clerk Maxwell once said, “Thoroughly conscious ignorance is the prelude to every real advance in science,” implying that sometimes the questions are more important than the outcomes. Evidence suggests that is very true.