
“The Wisdom of McRibs” — Annie’s Newsletter, November 16, 2018

THE WISDOM OF McRIBS - Nick Maggiulli and our tendency to infer causality from coincidence
CROWDS AS DECISION TOOLS - The power of a large, independent sample
BELIEVE RESPONSIBLY - William Kingdon Clifford and the universal moral responsibility for calibrated beliefs
"THEY RUINED POPCORN!" - Cass Sunstein and the value of information and willingness-to-pay
HOW MUCH, THEN, IS IGNORANCE WORTH? - How and why we'll PAY to avoid partisan information

THE WISDOM OF McRIBS

Nick Maggiulli breaks down our tendency to infer causality from coincidence

Nick Maggiulli’s typically outstanding Of Dollars and Data blog recently included a piece titled, “The McRib Effect: Why Understanding Causality is So Difficult.”

When we see that two things have occurred together in the past, we see a pattern and infer a causal link.

Maggiulli offers a beautiful (and funny!) example of our tendency to over-attribute causation by looking at “The McRib Effect.” Historically, the periodic availability of the McRib and S&P 500 gains are positively correlated.

To be fair, most of us probably aren’t going to use this correlation to load up on pork-inspired mutual funds.

But we do plenty of things just as silly, if only a bit less absurd.

Like this piece from CNN.com, which took a study showing a positive correlation between IQ, sleep, and fish consumption and managed to come up with this headline:

“Eating fish improves kids’ IQ scores and sleep, study says”

The beauty of the McRib illustration is that the idea that McRib availability is somehow causing the market to go up is absurd enough on its face that no reasonable person would think the relationship is causal. (And the piece deftly demonstrates that the McRib-Bull market is, indeed, a coincidence.)

But when a correlation makes more sense to us, like the idea that eating fish raises IQ and improves sleep, all hell breaks loose. It becomes much harder to overcome our bias toward overattributing causation.
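If you want to see just how easily these coincidences arise, here's a quick sketch in Python. Everything in it is invented random noise (this isn't Maggiulli's analysis): we generate one fake "market" and a thousand fake indicators, then go hunting for the best correlation.

```python
# All noise, no causation: one fake "market" and 1,000 fake indicators,
# each an independent random walk. (Illustrative numbers only.)
import numpy as np

rng = np.random.default_rng(42)
n_months, n_indicators = 120, 1_000

market = rng.normal(size=n_months).cumsum()
indicators = rng.normal(size=(n_indicators, n_months)).cumsum(axis=1)

# Correlate every nonsense indicator with the market and keep the best.
corrs = np.array([np.corrcoef(ind, market)[0, 1] for ind in indicators])
best = np.abs(corrs).argmax()
print(f"Strongest coincidental correlation: r = {corrs[best]:+.2f}")
# Scan enough unrelated series and a strong correlation turns up by
# chance alone. No McRib required.
```

Run it and some nonsense indicator will look eerily prophetic, which is exactly the trap.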

As Maggiulli says so nicely, “We all want a simple answer though the truth is usually far messier.”


CROWDS AS DECISION TOOLS

Wisdom v. mob rule?
The power of a large, independent sample

James Surowiecki’s The Wisdom of Crowds starts with a famous anecdote about crowdsourcing the weight of an ox.

In 1906, British scientist and eugenicist Francis Galton watched 800 people buy tickets to guess the weight of a fat ox. After the contest, Galton collected the tickets, expecting to prove through this impromptu experiment that a collective guess would be far inferior to asking an expert.

What he found was that while any individual guess was, indeed, likely to be way off, the average of all those not-so-good guesses (1,197 pounds) was within one pound of the ox’s actual weight!
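You can reproduce the flavor of Galton's result in a few lines of Python. The error model below is an assumption (unbiased guesses with a 75-pound standard deviation), not Galton's actual ticket data, but it shows why the averaging works:

```python
# Simulated ox-guessing: 800 independent, individually noisy guesses.
# The 75-lb error spread is assumed for illustration, not Galton's data.
import numpy as np

rng = np.random.default_rng(1906)
true_weight = 1_198                                  # the ox, in pounds
guesses = rng.normal(loc=true_weight, scale=75, size=800)

print(f"Average individual error: {np.abs(guesses - true_weight).mean():.0f} lbs")
print(f"Crowd average: {guesses.mean():.0f} lbs")
# Individuals miss by roughly 60 lbs each, but their independent errors
# cancel out, so the crowd's average lands within a few pounds.
```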

Corporate consultant John D. Cook, in a 2014 blog post I recently read, builds on Galton’s finding about the power of the crowd:

“Suppose there are 10,000 people, each with a 51% chance of answering a question correctly. The probability that more than 5,000 people will be right is about 98%.”

Any one person in the group might be only slightly better than random at giving you the right answer. But if you poll the whole group, asking each person independently, the chance that the majority is right approaches 100%.
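Cook's number is easy to verify with SciPy's binomial distribution, and the same calculation shows how fast the majority becomes reliable as the crowd grows:

```python
# P(majority correct) when n independent people are each right
# with probability 0.51.
from scipy.stats import binom

for n in (101, 1_001, 10_001):
    p_majority = binom.sf(n // 2, n, 0.51)   # P(more than half correct)
    print(f"n = {n:>6}: P(majority right) = {p_majority:.3f}")
# Prints roughly 0.58, 0.74, and 0.98: barely-better-than-coin-flip
# individuals add up to a near-certain majority.
```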

There is a lot of power in a large, independent sample.

That doesn’t mean that asking a lot of people and taking the average guess or looking for the most popular answer in a crowd will work for all types of questions.

Crowdsourcing the weight of an ox works because we are all human beings who have lots of experience with what things weigh.

On the other hand, if you ask about something which most people have no basis for estimating, like the GDP of Johor Bahru, the average of a bunch of guesses isn’t likely to be helpful.

In other words, it’s okay if they’re guessing, but they shouldn’t be randomly guessing.

If the people in that sample have a basis to guess or an above-random chance of answering correctly, the combination of all these guesses can rival or surpass an expert. If not, you’re better off asking a single expert for the answer.

If the circumstances are right, a large, independent sample can help you find the signal in the noise.


BELIEVE RESPONSIBLY

William Kingdon Clifford and the universal moral responsibility for calibrated beliefs

Francisco Mejia Uribe wrote a great article in Aeon about a nearly-forgotten Victorian philosopher, William Kingdon Clifford, whose work is remarkably relevant to the modern-day challenges created by the ease with which technology allows information to spread.

Clifford was a polymath who made valuable contributions in both mathematics and philosophy before his death in 1879 at age 33. His 1877 essay, “The Ethics of Belief,” includes this seemingly outlandish conclusion:

“To sum up: it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.”

Just so we’re clear, when Clifford says wrong, he doesn’t mean “worse than the alternative.” He means “one long sin against mankind.”

And when he says anyone and everywhere, he takes pains to point out that includes “every rustic who delivers in the village alehouse his slow, infrequent sentences”.

Lo and behold, as Uribe walks through Clifford’s three arguments, this extreme, nearly-150-year-old pronouncement turns out to have startling relevance to fake news and the spread of information on social media.

Uribe explains why, especially in today’s world, “we have a moral obligation to believe responsibly, that is, to believe only what we have sufficient evidence for, and what we have diligently investigated.”

First, our beliefs influence our actions, and any belief or action can have consequences for others.

If you believe that vaccines cause autism, you won’t vaccinate your child. That has consequences not just for the health of your own child but for herd immunity.

Time has caught up to Clifford here. Uribe points out that we live “in a world in which just about everyone’s beliefs are instantly shareable, at minimal cost, to a global audience”.

Anyone’s false belief has the potential to go viral, become fake news, or move the needle on the beliefs of others. And those beliefs can result in actions.

Like going into a pizza shop with a gun.

Second, poor practices in belief-formation spread. 

We become careless and credulous. Anyone taking their cue from us – our children, peers, anyone we influence – is more likely to become careless and credulous too.

We’re not in a position to call out others, either to help them learn from their errors or to discourage them from intentional falsehoods, when we ourselves form beliefs carelessly.

In short, we collectively lower the bar.

Third, we have a moral responsibility not to contribute to degrading our collective knowledge.

Again, the literal truth behind that responsibility is a fact of modern life. As I reported in the newsletter several times earlier this year, the latest research shows that fake news items spread faster, further, and more broadly than true items.

Read the Aeon piece, and Clifford’s original essay. As Uribe concludes, “If there was ever a time when critical thinking was a moral imperative, and credulity a calamitous sin, it is now.”


“THEY RUINED POPCORN!”

How much is information worth?
It depends on the person…

There are two sources of uncertainty in decision-making: hidden information and luck. By definition, we can’t control the luck element.

The part we do have some control over is hidden information. Information relevant to a decision has value to us because it will narrow our uncertainty, allowing us to calibrate our beliefs and predictions about the future.
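Decision theory makes that idea concrete: the value of information is the difference between the expected payoff of deciding with it and deciding without it. Here's a minimal sketch with hypothetical numbers (this is the textbook framing, not Sunstein's analysis):

```python
# Hypothetical gamble: +$100 if things break your way (p = 0.5),
# -$100 if they don't; or pass and take $0. (Invented numbers.)
p = 0.5

# Without the information, pick the action with the best expected value.
ev_gamble = p * 100 + (1 - p) * -100        # = $0
ev_without = max(ev_gamble, 0)              # gamble or pass, both $0

# With perfect information, pick the best action in each state.
ev_with = p * max(100, 0) + (1 - p) * max(-100, 0)   # = $50

print(f"Value of the information: ${ev_with - ev_without:.0f}")
# By this yardstick information is never worth less than zero. Sunstein's
# point is that people also count how information makes them feel, which
# is how it can end up with negative value.
```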

Although information can have value, not all information does. A piece of information might be valuable to one person, worthless to another, and of negative value to a third.

Cass Sunstein addresses just this issue of the value of information in an upcoming paper in the Journal of Risk and Uncertainty, “The Welfare Effects of Information.”

We generally get that information isn’t universally positive or without cost. For one thing, gathering information takes time, which is a cost. If we were to insist on having complete information before making a decision, we would never be able to decide anything.

Beyond the time cost, we tend to assume information – reliable information, at least – is always good and more is better. If there were no cost to the information, we would want as much of it as we could get, right?

Not so fast!

Sunstein starts with the reminder that not all information is equal.

“Some information is beneficial; it makes people’s lives go better. Some information is harmful; it makes people’s lives go worse. Some information has no welfare effects at all; people neither gain nor lose from it.”

Further complicating matters, information that is beneficial to one person might have negative value to another person.

Do you want to know if your partner or spouse has ever cheated?

Do you want to know the exact day you are going to die?

Do you want to know the number of calories in that large popcorn you’re about to order at the movies?

For all these types of questions, some people would want to know and some wouldn’t.

For example, most of us know that, nutritionally, movie-theater popcorn is crap.

Some of us want to know how many calories are in the popcorn. Maybe it helps us resist temptation, and that makes us feel good.

For others, that information just makes us sad. All we want is to enjoy some freakin’ popcorn, and the caloric information makes us feel weak for ordering it.

As Sunstein notes, “The point was captured in a reaction of one government official to mandatory calorie labels: ‘They ruined popcorn!’”

One way to figure out how much information is worth to someone is to ask them how much they’d be willing to pay for it.

Sunstein conducted a pair of surveys asking people whether, and how much, they’d pay for various kinds of information.

For example, just 57% would want to know if their spouse has ever cheated. The median price they’d be willing to pay for the information was $74.50. The mean price was $120.67.

The gap between the median price and the mean price, along with the fact that less than 60% of people even want to know, shows that the value of information is highly individual.
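A toy example makes the point. These willingness-to-pay numbers are invented, not Sunstein's data, but they show how a few people who value the information enormously drag the mean far above the median:

```python
# Invented willingness-to-pay answers: most people bid low, a few bid big.
import statistics

wtp = [0, 0, 10, 25, 50, 75, 80, 100, 500, 1_000]
print(f"median = ${statistics.median(wtp):.2f}")   # $62.50
print(f"mean   = ${statistics.mean(wtp):.2f}")     # $184.00
# The mean is dragged up by the two big bids; the median barely notices
# them. A mean well above the median is the signature of a lopsided,
# highly individual valuation.
```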

It’s good for us to consider the value of information in our decision-making but we have to remember that our mileage may vary.

As Sunstein points out, “When consumers state their willingness to pay, they are solving a prediction problem.” And the prediction they are making about the value of goods or information is particular to them and no one else.

Across the population, people don’t put the same value on the same information. That’s obviously because people value different things.

In addition, for any individual, the value of information doesn’t stay constant across situations or time. A twenty-year-old will likely value nutritional information differently than that same person at forty. A person who normally wants to know the calories in that big slice of chocolate cake might not want that information on their birthday.

Making matters worse, we are often bad predictors of what will make us happy.

How much we think we will like something or how useful we will find it at the time we make a decision is often very different from our actual experience of it. We can think we are really going to love a restaurant but then the meal is bad and the place is noisy.

As Kahneman has discussed and Sunstein reminds us, there is a difference between predicted utility and experienced utility.

That difference between what we predict and what we actually experience reveals a fundamental paradox in using willingness-to-pay as a proxy for the value of information: How can we put a value on information we don’t have and haven’t seen?

How reliable could such an uninformed estimate be?

The value of information turns out to be a hidden-information problem.


HOW MUCH, THEN, IS IGNORANCE WORTH?

How and why we’ll PAY to avoid partisan information
A fool’s paradise is, after all, still “paradise”

Cass Sunstein explained how “information avoidance” includes situations in which people “might be willing to pay not to receive information.”

It might initially seem hard to imagine paying to avoid hearing potentially relevant information.

Turns out we’re pretty familiar with a situation where that’s the case.

Right around the same time I read Sunstein’s paper, I came across a study showing that we’ll pay to avoid hearing opposing views on many partisan issues. (H/t Koenfucius for his tweet.)

In five experiments conducted by Jeremy Frimer and colleagues, a majority of liberals and conservatives gave up a chance to win money to avoid hearing opposing opinions on same-sex marriage, an upcoming election, abortion, gun control, and legalization of marijuana.

“Their lack of interest was not due to already being informed about the other side or attributable to election fatigue. Rather, people on both sides indicated that they anticipated that hearing from the other side would induce cognitive dissonance (e.g., require effort, cause frustration) and undermine a sense of shared reality with the person expressing disparate views”.

How are we supposed to reduce polarization if our lives are so negatively impacted by listening to somebody else’s opinion on same-sex marriage or gun control? What are the consequences of a preference for willfully remaining blind to potentially relevant information?

Obviously, people have the right to make those kinds of choices. Still, John Stuart Mill is probably turning over in his grave on that one.

Sorry Mr. Mill. It’s not just that we’re not talking to each other across the aisle. We’re WILLING TO PAY to avoid such conversations.


THIS WEEK’S VISUAL ILLUSION

An illusion floor

Via Akiyoshi Kitaoka