
“Accentuate The Positive” — Annie’s Newsletter, October 19, 2018


In this week’s newsletter:

1. “ACCENTUATE THE POSITIVE” – What biases in the reporting of drug studies show us about our own decision-making

2. “DIFFERENT KINDS OF SMART” – We’re fortunate if we’re one kind of smart but that means there is still a lot we don’t know

3. WHAT’S LUCK GOT TO DO WITH IT? – If “it” is wealth distribution, luck has plenty to do with it just like it does in all our outcomes

4. (A SYLLOGISM:) ALL PEOPLE HAVE COGNITIVE BIASES – Some cognitive biases involve motivated reasoning – Some people engage in motivated reasoning

5. WHY EXTREMISTS AND SCAMMERS THRIVE ON SOCIAL MEDIA – And why banning individual accounts won’t accomplish much


“ACCENTUATE THE POSITIVE”
That’s apparently our motto in life … and in drug research
What biases in the reporting of drug studies show us about our own decision-making

Aaron Carroll offers a fabulous read, “Congratulations. Your Study Went Nowhere”, his latest piece for the New York Times.

The piece describes a set of biases in research that lead to an over-reporting of positive results and under-reporting of negative results in drug research.

In a sample of 105 antidepressant studies, half showing positive results and half negative, the biases become clear:

  • Publication bias – Researchers published 98% of the positive results, but just 48% of the negative results.
  • Outcome reporting bias – Researchers reported 10 of the 25 published negative studies as positive by focusing on a secondary outcome.
  • Spin – Of the remaining 15, 11 spun the results to sound positive, reporting results numerically or pointing to trends in the data without mentioning a lack of statistical significance.

The bias toward accentuating positive results has led to a big gap between research results and published results.
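
Put those numbers together and the gap is stark. Here’s the back-of-the-envelope arithmetic (my illustration, not Carroll’s; the 53/52 split is an approximation of “about half”):

```python
# Rough arithmetic from the numbers above (105 studies, roughly half positive)
positive, negative = 53, 52                # approximate split, for illustration
published_pos = round(0.98 * positive)     # 98% of positives published -> 52
published_neg = round(0.48 * negative)     # 48% of negatives published -> 25
reframed = 10                              # published negatives reported as positive
spun = 11                                  # of the remaining 15, spun to sound positive
plainly_negative = published_neg - reframed - spun
presented_positive = published_pos + reframed + spun
print(presented_positive, plainly_negative)  # ~73 presented as positive vs ~4 plainly negative
```

A literature built from roughly 50/50 underlying results ends up reading as overwhelmingly positive.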

Carroll’s analysis is excellent. He raises important concerns about how this is bad for science and bad for all the groups that make decisions based on research that doesn’t accurately represent the results in the field: other researchers, health professionals, patients, policymakers, and the public.

Of course, these biases aren’t confined to drug research. This is fundamentally about how our brains work.

We all amplify our positive results and downplay our negative results.

I found a lot of relatable themes in his analysis, starting with his opening insight about conflicts of interest:

“When we think of biases in research, the one that most often makes the news is a researcher’s financial conflict of interest. But another bias, one possibly even more pernicious, is how research is published and used in supporting future work.”

That’s a great point and, as I mentioned in Thinking in Bets, reminds us of the conflict of interest we all carry with us:

“We tend to think about conflicts of interest in the financial sense … but conflicts of interest come in many flavors. Our brains have built-in conflicts of interest, interpreting the world around us to confirm our beliefs, to avoid having to admit ignorance or error, to take credit for good results following our decisions, to find reasons bad results following our decisions were due to factors outside our control, to compare well with our peers, and to live in a world where the way things turn out makes sense.”

Aaron Carroll didn’t accuse researchers of wrongdoing or bad faith. The researchers are just human, and humans are subject to biases.

We’re subject to the same kind of biases. They affect how we interpret the world, and we need to be on the lookout for how to address them.

Consider how the same kinds of biases manifest in our own lives.

We’re also much more likely to publish our good results, broadcasting our good outcomes and downplaying the not-so-good ones. I can’t tell you how many times in poker someone has come up to me and said, “I play in a local game and I win 95% of the time.”

The best player in the world, against the worst players in the world, wouldn’t win 95% of the time. That’s clearly some crazy mental accounting.

But while that might be an extreme case of broadcasting your wins and keeping your losses to yourself, we all do this to some extent.

Once again, as I said in Thinking in Bets (referring to something Jon Haidt said), “we are all our own best PR agents, spinning a narrative that shines the most flattering light on us.”

Accentuating the positive is the natural way we think. It helps us create a positive self-narrative.

But there’s a lot of truth to be found in negative results. If we bury those results, or present them as insignificant, or spin them as positive, we lose the ability to find that truth.

That truth is how we learn from experience.

When we’re counting on others to help us, their advice is going to be of much higher fidelity if we allow them to see us accurately. That means pulling back the curtain on our mistakes and our failures.

Read Aaron Carroll’s article, not just for the important points he makes about the scientific process, but for how that ties into finding flaws and making improvements in our own decision processes.


“DIFFERENT KINDS OF SMART”
We’re fortunate if we’re one kind of smart …
But that means there is still a lot we don’t know

I learn something from every piece Morgan Housel writes and his recent “Different Kinds of Smart” does not disappoint in this regard.

We tend to think of smart as academic smart: top grades, top universities, an impressive record of scholastic achievements. But that’s not the only kind of smart we need for success.

That’s why it’s important to develop other kinds of smart and surround ourselves with people who are other kinds of smart.

Housel offers examples of the kind of smart we want to be and cultivate around us, including advice on how to form a good decision group.

Here are two of them:

1. Whatever our expertise, we should account for all the things we don’t know. 

He gives the example of expertise in economics, obviously an important kind of smart in investing:

“Being an expert in economics would help you understand the world if the world were governed purely by economics. But it’s not. It’s governed by economics, psychology, sociology, physics, politics, physiology, ecology, and on and on.”

Become as much of a polymath as you can and seek counsel from those with different perspectives and expertise. Access those who see the world differently.

2. Balance confidence and boldness with recognition of our inevitable limitations: uncertainty, risk of ruin, the importance of accessing knowledge and viewpoints of others, and humility in the face of the difficulty and complexity of our task. 

I love Housel’s take on humility:

“Humility not in the idea that you could be wrong, but given how little of the world you’ve experienced you likely are wrong, especially in knowing how other people think and make decisions.”

Don’t confuse confidence or humility with certainty. Being humble in the face of uncertainty is not only realistic, but it also encourages us to seek other opinions and think about the things that we don’t know.

Humility is realizing that life is hard. Confidence is about thinking we can do our personal best, or do this better than others, or learn to improve.

Be confident and acknowledge uncertainty. There are no sure things and we can’t control every variable. We can lose ten coin flips in a row.
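
For scale, here’s the quick arithmetic (mine, not Housel’s):

```python
# Probability of losing ten fair coin flips in a row
p = 0.5 ** 10
print(p)  # 0.0009765625, about 1 in 1024: rare, but it absolutely happens
```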

Embracing uncertainty and incorporating it into our decision process gives us a more accurate view of the future and gives us a chance to protect ourselves from ruin.

Confidence doesn’t mean we possess certainty. If we think it does, that can lead us to overplay our hand and go broke.

It’s good to go for success with intensity, but if we lose – either because we’re wrong or simply due to uncertainty – we shouldn’t put ourselves in a position where we’re out of the game forever.

The article is succinct, and there are three additional pieces of excellent advice. Check it out.


WHAT’S LUCK GOT TO DO WITH IT?
If “it” is wealth distribution, luck has plenty to do with it …
Just like it does in all our outcomes

On Twitter, Peter Wung pointed me to a Nautilus article he thought would interest me, “Investing Is More Luck Than Talent” by finance professor Moshe Levy. He was right.

Levy’s article points out the inevitable role of luck in creating inequality in wealth distribution. Using several examples, including simulations based purely on luck and simulations combining luck and skill, he shows that luck plays a significant role in outcomes.

I don’t think such acknowledgement detracts from the role of talent and ability in success, or the qualities that helped the world’s most successful people achieve their success.

Quite the opposite, it would be unrealistic and uncharacteristic of our world if success were based on skill alone. In nearly every kind of decision, our choice doesn’t guarantee an outcome.

The skill is in improving the chances of a good outcome. But that outcome is never guaranteed.

I’ve written about the issue before, in several newsletters and in an article on Smerconish.com, “How Much Could Luck Have To Do With Income Inequality?”

I shared with Peter a simulation described in DecisionScienceNews.com about the counterintuitive result of an extended give-a-dollar-take-a-dollar game.

(That’s a game where you imagine a room of 45 people, each starting with $45. Every minute, each person with money gives $1 to someone else at random.)

It’s all luck, right? Every minute, each player gives away exactly $1 and receives, on average, exactly $1 back, so everyone’s expected change in wealth is zero.

Counterintuitively, there are gigantic differences in wealth distribution after 5,000 rounds. In one run of the simulation, one of the players ended up with over $200, and the top three ended up with 25% of all the money.
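
If you want to see it for yourself, here’s a minimal sketch of that simulation in Python (my reconstruction from the description above, assuming each of the 45 players starts with $45 and that broke players sit out until they receive a dollar):

```python
import random

def give_a_dollar(n_players=45, rounds=5000, start=45, seed=None):
    """Each round, every player who still has money gives $1
    to another player chosen uniformly at random."""
    rng = random.Random(seed)
    wealth = [start] * n_players
    for _ in range(rounds):
        gifts = [0] * n_players        # tally gifts so play order doesn't matter
        for giver in range(n_players):
            if wealth[giver] > 0:
                receiver = rng.randrange(n_players - 1)
                if receiver >= giver:  # re-map so no one gives to themselves
                    receiver += 1
                wealth[giver] -= 1
                gifts[receiver] += 1
        wealth = [w + g for w, g in zip(wealth, gifts)]
    return sorted(wealth, reverse=True)

final = give_a_dollar(seed=1)
print(final[:3], f"top three hold {sum(final[:3]) / sum(final):.0%}")
```

Run it a few times with different seeds: somebody always pulls way ahead, but it’s a different somebody each run.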

Peter Wung agreed with me (and DecisionScienceNews.com): “That is totally counter intuitive.”

On reflection, though, it’s not that surprising that somebody ran away with it.

What would be surprising is if the SAME player ran away with it every time. Of course, that’s not what happens.

Separating luck from skill is hard. The first step is acknowledging that both influence all results, not just luck in the bad ones and skill in the good ones.


ALL PEOPLE HAVE COGNITIVE BIASES
Some cognitive biases involve motivated reasoning
Some people engage in motivated reasoning

We have a lot of trouble reasoning logically when the conclusion of an argument agrees with something we think is true.

One of the ways we can see that is with syllogisms. Because syllogisms deal only with internal logic, external beliefs about their premises and conclusions are irrelevant.

A syllogism takes the form of two premises followed by a conclusion. It only asks whether, given the premises, the conclusion follows.

Here’s an example:
(a) all mammals are dogs
(b) all cats are mammals
(c) all cats are dogs

That syllogism is valid, meaning it is internally logical, even though we know from the real world that cats are not, in fact, dogs.

The syllogism isn’t supposed to test whether a conclusion is true against the real world, just whether it is true assuming the truth of the premises.
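
To make “valid given the premises” concrete, here’s a small Python sketch (my illustration, not from any study discussed here). It encodes each statement as a constraint on sets and brute-forces small universes looking for a counterexample; for categorical syllogisms like these, universes of up to three elements are enough:

```python
from itertools import product

def interpretations(universe_size, n_predicates=3):
    """Yield every assignment of the predicates to subsets of a small universe."""
    subsets = [
        {e for e, keep in zip(range(universe_size), bits) if keep}
        for bits in product([False, True], repeat=universe_size)
    ]
    yield from product(subsets, repeat=n_predicates)

def all_are(a, b):   # "All A are B"  ->  A is a subset of B
    return a <= b

def some_are(a, b):  # "Some A are B" ->  A and B overlap
    return bool(a & b)

def valid(premises, conclusion, max_universe=3):
    """Valid iff no interpretation makes every premise true and the conclusion false."""
    for size in range(1, max_universe + 1):
        for P in interpretations(size):
            if all(p(P) for p in premises) and not conclusion(P):
                return False  # counterexample found
    return True

# Predicates: P[0] = mammal, P[1] = dog, P[2] = cat
print(valid(
    [lambda P: all_are(P[0], P[1]),   # (a) all mammals are dogs
     lambda P: all_are(P[2], P[0])],  # (b) all cats are mammals
    lambda P: all_are(P[2], P[1]),    # (c) all cats are dogs
))  # True: internally valid, despite the false premise
```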

We know that people have trouble identifying whether a syllogism is valid when the conclusion conflicts with real world knowledge, like our knowledge about whether cats are dogs. What we know to be factual interferes with our ability to evaluate the internal logic.

But what about things that aren’t exactly facts but, instead, are things we have strong beliefs about and would prefer to be true? What if we wanted the conclusion to be true (or untrue)?

Does that make it harder to spot the logical flaws?

Gurwinder, whom I follow on Twitter, brought to my attention a recent study on “my-side bias” showing that strong political beliefs do interfere with our ability to parse an argument.

The study, summarized in the British Psychological Society’s Research Digest, was conducted by Vladimira Cavojova and colleagues at the Slovak Academy of Sciences. They tested subjects’ ability to figure out whether several syllogisms were valid or invalid.

Some of the syllogisms involved neutral items (like qualities of animals), and some involved abortion-related items. They screened the subjects to identify those who were pro-life and pro-choice, to see if those external beliefs affected their ability to evaluate internal logic.

Here are examples from the study of a valid and invalid pro-life syllogism:

All fetuses should be protected.
Some fetuses are human beings.
Some human beings should be protected.
(Internally valid)

All fetuses are human beings.
Some human beings should be protected.
Some of those who should be protected are fetuses.
(Internally invalid)
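
Feeding both into the little checker sketched earlier agrees with those labels (again, my illustration, not the study’s materials):

```python
# Predicates: P[0] = fetus, P[1] = should be protected, P[2] = human being
print(valid(
    [lambda P: all_are(P[0], P[1]),    # all fetuses should be protected
     lambda P: some_are(P[0], P[2])],  # some fetuses are human beings
    lambda P: some_are(P[2], P[1]),    # some human beings should be protected
))  # True: the conclusion follows from the premises

print(valid(
    [lambda P: all_are(P[0], P[2]),    # all fetuses are human beings
     lambda P: some_are(P[2], P[1])],  # some human beings should be protected
    lambda P: some_are(P[1], P[0]),    # some who should be protected are fetuses
))  # False: a counterexample exists, so the logic doesn't follow
```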

They also included valid and invalid pro-choice syllogisms.

What they predicted was that people who had a pro-life view would have trouble identifying the internal invalidity of the syllogism that confirmed their belief. Likewise, they would have difficulty spotting the correct logic in a pro-choice syllogism that included assumptions clashing with their beliefs. (And vice versa for the pro-choice subjects.)

They found exactly what they predicted.

“Mainly, the participants had trouble accepting as logical those valid syllogisms that contradicted their existing beliefs, and similarly they found it difficult to reject as illogical those invalid syllogisms that conformed with their beliefs. … What’s more, this ‘my-side bias’ was actually greater among participants with prior experience or training in logic.”

It’s hard to parse an argument when you already have a conclusion in mind, whether it’s something as neutral as whether cats are dogs or something highly charged and political.

It’s a good demonstration of motivated reasoning. We can’t get to the internal logic when we want the conclusion to be true or false.

This is a real problem. We have access to all this information at our fingertips but our beliefs are in the driver’s seat when it comes to evaluating the information.

As we grapple with fake news, we know it’s going to be effective when it fits the view we already have, when it supports a narrative we want to believe.

And we’re not able to deploy the skills we have to spot flaws when we already agree with the worldview on offer.

And that’s a big deal.


WHY EXTREMISTS AND SCAMMERS THRIVE ON SOCIAL MEDIA
And why banning individual accounts won’t accomplish much

Vox recently posted an interesting video, “Why every social media site is a dumpster fire,” explaining how the recent actions banning individual trolls, scammers, and fake-news creators will likely accomplish very little.

Carlos Maza, who produced and appears in the video and wrote the short piece accompanying it, interviewed Jay Van Bavel on the underlying reasons why social-media sites end up attracting and rewarding extremists and crackpots.

Van Bavel’s work supports the view that the reasons trolls succeed are fundamental to human nature: we’re tribal, and our attention is drawn to emotionally charged content.

In combination, those features make social media a place where extreme, provocative views flourish.

Technically, the whole world could be a place where such views flourish, but social customs, rules, and laws discourage people from saying hateful, ridiculous things in traditional interactions.

While we’re getting our mail, our neighbor isn’t going to start screaming, “The latest mass shooting was a false flag! The victims were paid actors, hired by gun-hating liberals!”

But the internet has the shield of anonymity, and social media sites attract extremists and scammers. And it doesn’t (yet?) have the social customs or rules that keep people from behaving that way.

It’s a good interview and a good video on an important observation: if we’re tribal (which we are) and we’re attracted to emotional appeals (which we are), deleting the accounts of individual extremists and scammers won’t make the problem go away.