
“The Most Important Thing To Know”— Annie’s Newsletter, December 7, 2018

THE MOST IMPORTANT THING TO KNOW? – Jim O’Shaughnessy: “Some things I don’t know”
IDENTIFYING (AND DEVELOPING?) INTUITION – from a Daniel Kahneman speech
THE ELECTRONIC FLU-SHOT NUDGE – from Penn Medicine’s Nudge Unit
HUMANS VS. MACHINES. AGAIN. AND THIS TIME, IT’S NOT SERIOUS – Algorithms outperform humans at predicting what we think will be funny

THE MOST IMPORTANT THING TO KNOW?

Jim O’Shaughnessy gets it: “Some things I don’t know”

Back in May, Jim O’Shaughnessy shared on Twitter some of the things he learned during thirty years as a professional investor: “some things I think I know and some things I know I don’t know.”

O’Shaughnessy, in under 1,000 words, offers a remarkable manifesto about investing, probabilistic thinking, and humility. (You can also see the thread as a single document.)

There’s so much to learn from his approach. These are just a few of my takeaways.

1. We can’t know what will happen next.

Expertise is worth a lot. It improves your ability to narrow down the range of possible futures.

What it doesn’t get you is the ability to know exactly what will happen next, whether you’re investing in financial markets or ordering in a restaurant:

“2/I don’t know how the market will perform this year. I don’t know how the market will perform next year. I don’t know if stocks will be higher or lower in five years. Indeed, even though the probabilities favor a positive outcome, I don’t know if stocks will be higher in 10 yrs.”

Even if you know for sure that a coin will flip heads 50% of the time and tails 50% of the time, that doesn’t mean you know what it will flip on the next try. If you don’t understand this, you can’t be a good decision maker.
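To make that concrete, here’s a minimal simulation (my illustration, not O’Shaughnessy’s; the numbers are arbitrary): knowing the coin is fair tells you the long-run frequency, but nothing about the next flip.

```python
import random

random.seed(42)

# A fair coin: we know p(heads) = 0.5 with certainty.
flips = [random.random() < 0.5 for _ in range(10_000)]

# Knowing p tells us nothing about any single flip...
print("First 10 flips:", ["H" if f else "T" for f in flips[:10]])

# ...but it does pin down the long-run frequency.
print(f"Heads over 10,000 flips: {sum(flips) / len(flips):.1%}")  # ~50%
```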

That leads to this insight from O’Shaughnessy:

“17/I know that I can not tell you which individual stocks I’m buying today will be responsible for my portfolio’s overall performance. I also know that trying to guess which ones will be the best performers almost always results in guessing the wrong way.”

2. Why humility is important.

Recognizing that we don’t and can’t know everything is the first step in creating strategies for dealing with (inevitable) uncertainty. We can’t make good decisions under that uncertainty if we pretend it doesn’t exist.

“22/I know I don’t know exactly how much of my success is due to luck and how much is due to skill. I do know that luck definitely played, and will continue to play, a fairly substantial role.”

3. What a strategy needs to succeed in an uncertain system.

Any strategy we come up with must be robust in the long run because we can’t predict what’s going to happen on the next trial. We have to be right in the long term, and we also need to survive the likelihood that we’ll be wrong along the way.

Just knowing about our biases doesn’t make them go away, so any strategy must constrain our ability to make biased, emotional decisions.

“18/I know that as a systematic, rules-based quantitative investor, I can negate my entire track record by just once emotionally overriding my investment models, as many sadly did during the financial crisis.

19/I think I know that no matter how many times you ‘prove’ that we are saddled with a host of behavioral biases that make successful long-term investing an odds-against bet, many people will say they understand but continue to exhibit the biases.”

I recommend you read the whole thread. It is a lot of wisdom packed into a small package.


IDENTIFYING (AND DEVELOPING?) INTUITION

From a recent speech by Daniel Kahneman

At a mid-November speech at the World Business Forum, Daniel Kahneman discussed the role of intuition in investing, laying down conditions for trusting intuition. (Emily Zulz reported about the speech on ThinkAdvisor, “Daniel Kahneman: Your Intuition Is Wrong, Unless These 3 Conditions Are Met.”)

Kahneman identified three necessary conditions for developing high-fidelity intuitions (quoting from Zulz’s article):

1. “Regularity in the world that someone can pick up and learn.” (Kahneman gave examples of chess players analyzing a chess position or married couples picking up on their partner’s mood in an instant on the phone.)

2. “A lot of practice.”

3. “Immediate feedback.” (Kahneman: “You have to know almost immediately whether you got it right or got it wrong.”)

One of the problems that we have as decision makers is that we rely on our intuition too much.

We think our intuition is better than it actually is. We don’t hold it accountable to the light of rational explanation as much as we should.

The culprit, I think, is noise, which limits our ability to learn from experience.

1. What does it mean to be “right” or “wrong”?

This is an aspect of poker (regarding its potential as a learning environment) that I described in the introduction to Thinking in Bets:

“The result of each hand provides immediate feedback on how your decisions are faring. But it’s a tricky kind of feedback because winning and losing are only loose signals of decision quality. You can win lucky hands and lose unlucky ones. Consequently, it’s hard to leverage all that feedback for learning.”

Philosophically, what does it mean “whether you got it right or got it wrong”? It can’t just mean winning or losing, or whether you confirm what you previously believed. It can’t just mean a good outcome or bad outcome.

It must mean a good decision or an accurate belief. But in a noisy world, those things are opaque to us.

If there isn’t a high correlation between decision quality and outcome quality, it’s hard to figure out how we would know whether we got it right or wrong from experience alone, at least in the short term.
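A toy simulation makes the point (the win rates below are mine, purely illustrative): give a “good” decision a 60% chance of a good outcome and a “bad” one a 45% chance, and over a handful of trials the outcomes alone will regularly rank them backwards.

```python
import random

random.seed(7)

def win_rate(p, n):
    """Observed success rate over n noisy trials with true probability p."""
    return sum(random.random() < p for _ in range(n)) / n

def misleading_fraction(n, experiments=2000):
    """How often the 'bad' decision (p=0.45) looks at least as good
    as the 'good' decision (p=0.60) after n trials of each."""
    return sum(win_rate(0.45, n) >= win_rate(0.60, n)
               for _ in range(experiments)) / experiments

for n in (5, 20, 1000):
    print(f"{n:>4} trials each: outcomes mislead {misleading_fraction(n):.0%} of the time")
```

With only a few trials, the noise swamps the signal; only in the long run does outcome quality reliably reflect decision quality.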

2. “Regularity” and “practice” are incredibly specific.

You need a lot of tries at something to develop intuition, but they need to be your own practice. This isn’t about accessing data or absorbing the analysis of others.

For most kinds of decisions, you don’t get thousands of tries.

You get lots of practice at activities like driving. Not so much at picking a spouse. And there are a lot of picking-a-spouse-type decisions.

3. A lot of feedback isn’t sufficiently “immediate.”

Even when you don’t have to filter noise from the feedback, feedback can take a long time. You can smoke a cigarette and feel fine. You can smoke a pack and feel fine.

Maybe you’ll even feel smoking helps with stress, boredom, or appetite control. You get all that feedback right away.

But if you smoke a pack a day for thirty years, you’ll get emphysema.

That feedback comes too late for you to do anything about it.

If the feedback isn’t immediate, what good is it in developing intuition?

That of course brings us back to the difficulty of knowing if we are right or wrong. We can’t know in the short run. And the long run is too late for intuition.

Each of the three conditions listed by Kahneman is necessary for developing reliable intuition, but not sufficient.

All are bottlenecks. And all three conditions rarely come together.

How often are you in a situation where you get lots of practice at decisions where outcomes are predictable and come fast?

Tennis? Driving? I mean, it’s not never. But…

This means that we should be intuition skeptics.

We shouldn’t accept (from ourselves or others) the explanation that, “my gut told me so,” without further examination.

And one of the best ways to examine intuition is to try to teach someone how you got to your insight such that they could understand it and, ideally, repeat it.

If you hear someone tell you, “my gut says so,” make them act like they’re on the subreddit ELI5 (“explain like I’m five”).

That will reveal a lot of chinks in the armor.

H/t James Clear, whose tweet led me to the report on the speech (and to Morgan Housel, who led Clear to it!). And any discussion on intuition, especially involving what Daniel Kahneman has to say about it, should include Thinking, Fast and Slow.


THE ELECTRONIC FLU-SHOT NUDGE

Penn Medicine has a Nudge Unit, and this is the kind of thing they do

Penn Medicine has a Nudge Unit, and it’s doing some interesting things.

A nudge, popularized in Richard Thaler and Cass Sunstein’s famous book, Nudge: Improving Decisions About Health, Wealth, and Happiness, is defined by the authors as follows:

“Any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives.”

Although the concept of nudges is increasingly entering public policy discussions – What are good ways to encourage people to do things for their own and/or the common good? – the Nudge Unit may be the first behavioral design team embedded in a health system.

The Unit’s mission is to apply “insights from behavioral economics and psychology to design and test approaches to steer medical decision-making toward higher value and improved patient outcomes.”

A recent University of Pennsylvania study found that the rate of patients getting flu shots falls at the end of the day. (I found out about the study from an article on Philly.com, “Doctor didn’t offer a flu shot? It might depend on time of day,” by Tom Avril.)

A simple electronic nudge increased flu-shot rates by 20%.

It’s important that this nudge was active. It wasn’t just a reminder about offering flu shots (to the medical staff) or about getting one (to the patient).

When a medical professional recorded patient information like blood pressure and other vital signs, they received an automatic, computerized prompt to create a vaccination order. The professional had to click “accept” or “decline” to continue recording information for the examination.
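As a sketch of the structural idea only (the function and field names below are my invention; Penn Medicine’s actual system is not public code), the key design choice is that the prompt blocks the workflow until a decision is logged:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Encounter:
    patient_id: str
    flu_shot_decision: Optional[str] = None  # "accept" or "decline"

def record_vitals(encounter: Encounter, vitals: dict, decide) -> dict:
    """Hypothetical vitals-recording step with an active flu-shot nudge."""
    if encounter.flu_shot_decision is None:
        # Active nudge: unlike a passive reminder, this step cannot be skipped.
        choice = decide(encounter.patient_id)
        if choice not in ("accept", "decline"):
            raise ValueError("clinician must accept or decline the vaccination order")
        encounter.flu_shot_decision = choice
    return {"patient": encounter.patient_id, **vitals}

# Example: the clinician's callback accepts the order, so recording proceeds.
print(record_vitals(Encounter("p-123"), {"bp": "120/80"}, decide=lambda pid: "accept"))
```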

We can think of this kind of nudge as a time-travel exercise. If we recognize the times at which we aren’t following through on our goals (I assume medical professionals agree that encouraging people to get flu shots is a good goal), we can set up systems in advance that help us reach our goals when our in-the-moment follow-through is otherwise lacking.

That’s a great use of a nudge.


HUMANS VS. MACHINES. AGAIN. AND THIS TIME, IT’S NOT SERIOUS

Algorithms outperform humans at predicting what we think will be funny

When we feel like we understand something (data, a process, an outcome), we like it better and trust it more.

If we merely think something is transparent, we’re more likely to believe it’s valid.

When algorithms offer the potential to assist in decision-making, we face the recurring humans-vs.-machines questions. First, is the algorithm superior to a human?

And second, even if it is, will we treat it as superior? By that, I mean whether we’re willing to trust an inscrutable black-box/machine/algorithm/robot/gizmo over our familiar fellow humans.

Jon Kleinberg and colleagues studied how algorithms performed at recommending jokes and how people reacted to the recommendations.

Let’s just say, “you humans are a tough crowd” (insert Rodney-Dangerfield-style necktie yank).

The studies they conducted actually showed that algorithms did a better job at recommending jokes to people than their friends and family members did.

But people didn’t like that.

Not that friends-and-family set the bar particularly high, evidenced by daily emails I receive with the subject line, “Fwd: fwd: fwd: fwd: fwd: FUNNY JOKE!!!!!!! What’s the difference between a ….”

The algorithm had killer material but people preferred jokes recommended by other people:

“Participants overwhelmingly thought that other people would do a better job of recommending jokes than the database – 69% of participants thought another person would be more accurate than an algorithm, and 74% preferred to receive recommendations from people instead of the algorithm.”

Kleinberg and colleagues found something similar to what I’ve suggested about media coverage and attitudes about self-driving cars: “Recommender systems were judged more harshly.”

They attributed this to the opaque nature of the algorithm: “These results show that people are less willing to accept recommender systems whose process they cannot understand.”

When we think a human is behind the process – one of us – we respond better. “An algorithm? Even if it picks better jokes than Uncle Maury, how can a black box know what’s funny?”

To assess whether the opacity of the algorithm was behind the dislike for the recommendations, the researchers tested whether explaining how the algorithm worked, making it more transparent, would increase how much the subjects liked the recommendations.

One group got a “sparse” explanation: “We are going to feed your ratings into a computer algorithm, which will recommend some other jokes you might also like.”

A second group got a “rich” explanation of the algorithm: A poll of “thousands of people” rating different jokes and “which jokes appeal to people with a certain sense of humor,” searching for jokes “similar to the ones you liked.”
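That “rich” explanation is, in essence, a plain-English description of user-based collaborative filtering. Here’s a minimal sketch of the idea (my illustration, not the researchers’ actual system): find the rater whose tastes most resemble yours, then recommend their favorite joke you haven’t seen.

```python
# Ratings: user -> {joke: score from 1 (groan) to 5 (laugh)}
ratings = {
    "you":   {"joke_a": 5, "joke_b": 1, "joke_c": 4},
    "user1": {"joke_a": 5, "joke_b": 2, "joke_c": 5, "joke_d": 5},
    "user2": {"joke_a": 1, "joke_b": 5, "joke_c": 2, "joke_d": 1},
}

def similarity(a, b):
    """Agreement on jokes both users rated (higher = more similar humor)."""
    shared = set(a) & set(b)
    if not shared:
        return float("-inf")
    return -sum(abs(a[j] - b[j]) for j in shared) / len(shared)

def recommend(target):
    me = ratings[target]
    # Find the user whose sense of humor most resembles the target's...
    peer = max((u for u in ratings if u != target),
               key=lambda u: similarity(me, ratings[u]))
    # ...and recommend that user's favorite joke the target hasn't seen.
    unseen = {j: s for j, s in ratings[peer].items() if j not in me}
    return max(unseen, key=unseen.get)

print(recommend("you"))  # -> joke_d, liked by the similarly-humored user1
```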

Not surprisingly, participants liked the algorithm’s recommendations more following the rich explanation.

Once the algorithm is made more transparent, especially here, where it is connected to what other people like you find funny, we’re more likely to accept it.

H/t Cass Sunstein for bringing the paper to my attention.


THIS WEEK’S VISUAL ILLUSION

Michael Bach’s Jastrow Illusion

These two pieces from a toy railroad-track set are the same size.

On Bach’s Jastrow Illusion page, you can learn the history and principles behind it, and confirm visually that the pieces are identical.