
“An Invisible Skill Mastered by Champion Tennis Players” — Annie’s Newsletter, August 10, 2018

It’s Game Theory!

How good are the world’s best tennis players at executing a game-theory-optimal strategy?

Turns out, pretty darn good!

I recently read a great thread on Twitter by economist Lionel Page in which he demonstrates how top tennis players, in addition to being amazing athletes, are executing game-theory-optimal strategies. Citing research results – his own and others’ – he describes numerous examples of optimal tennis decision-making:

  • Variation of serve placement (especially in men’s tennis) is stunningly close to optimal.

  • Risk-taking on first and second serves – balancing the likelihood of getting the second serve in against the advantage a slower, less risky serve gives the opponent.

  • How players allocate their effort over the course of a match – expending more effort on the points that really matter.

While each of these examples is worth a deeper look (and I recommend you explore the thread to do so), I found this last example especially interesting when looking at the research Page cited.

This optimized allocation of effort explains why top seeds struggle against lesser-skilled opponents in the early rounds of major tournaments more than you would expect if you looked only at the skill match-up.

Because top seeds are more likely to make it deep into the draw compared to their first- and second-round opponents, it has positive expected value for them to exert less effort early and save their energy for those later, tougher matches.

As Page explained, “It is optimal for them to save energy for later rounds. The cost is a higher risk of upsets in early rounds.”

When we watch a Grand Slam event, we focus on the more obvious aspects of the game – the execution of unbelievable athletic skills: power, reflexes, coordination, endurance, finesse, speed, and so on.

The element that is harder to see, but just as beautifully executed, is decision strategy. 

Especially with all the physical skill on display, it’s easy to overlook that the most talented players, as Page concludes, “are excellent decision makers who have learned to approximate optimal/equilibrium behaviour in many dimensions of the game.”

It’s a wonderful analysis, contained in just 10 tweets. Definitely worth reading.
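To see what "game-theory-optimal serve placement" means concretely, here is a minimal sketch of the serve as a two-by-two zero-sum game. The point-win probabilities below are hypothetical, not taken from the research Page cites; the idea is that the server randomizes between serving wide and serving up the T so that the returner gains nothing by anticipating either one.

```python
# Hypothetical point-win probabilities for the server (illustrative only):
# rows = serve direction, columns = returner's anticipation.
a, b = 0.55, 0.75   # serve wide vs. (returner guesses wide, guesses T)
c, d = 0.80, 0.50   # serve T    vs. (returner guesses wide, guesses T)

# In a 2x2 zero-sum game with no pure-strategy equilibrium, the server
# mixes so the returner is indifferent between her two guesses:
p_wide = (d - c) / (a - b + d - c)      # equilibrium probability of serving wide
value = p_wide * a + (1 - p_wide) * c   # server's point-win rate at equilibrium

print(f"serve wide {p_wide:.0%} of the time; win {value:.0%} of points")
# → serve wide 60% of the time; win 65% of points
```

With these numbers, any deviation from the 60/40 mix lets the returner shade her anticipation and win more points – which is why the near-optimal variation in real serve placement is so striking.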

A positive example of activism by a fourth grader? 
Or an example of how haphazard belief formation can sway public policy?
(Or both?)

First, I was forced to confront that astronaut ice cream wasn’t a real thing from the space program. (See the July 16 newsletter for the scoop!)

Now we’re finding out that some of the data informing the plastic-straw environmental crisis wasn’t sufficiently vetted.

It’s hard to believe, but it turns out that the statistic at the center of the debate came from undocumented and unchecked research by a nine-year-old.

This is described in a recent New York Times article, “How a 9-Year-Old Boy’s Statistic Shaped a Debate on Straws” by Niraj Chokshi.

Milo Cress (now seventeen) started a personal environmental conservation campaign in 2011 when he was nine.

He couldn’t find comprehensive statistics on how many straws Americans use a day, even after calling some straw manufacturers. Somehow, he came up with his own estimate of 500 million straws a day.

He didn’t document where he got the number, and no one adopting his cause thought to ask or question it.

In fact, the opposite happened: “500 million straws a day!” became a rallying cry for a movement to ban straws.

(The actual number is between 160 million and 390 million straws a day.)

To be clear, this isn’t Cress’s fault.

He showed an interest in the environment and handled the statistics as you might expect a nine-year-old to handle them. It’s the job of all participants in a decision process (particularly the adults!) to evaluate the information.

And therein lies the problem.

For people concerned about the role of plastic in environmental waste, that astronomical number fit their narrative. And we know that when information fits a narrative, it is less likely to be rigorously vetted and more likely to be shared.

This is a stunning example of the haphazard belief formation process – compounded by the ease with which those beliefs spread when they confirm our point of view.

As I mentioned in the March 16 newsletter, Science reported the results of a study by Soroush Vosoughi and colleagues of how information spreads on Twitter. The results showed that novel, provocative information – including false information, because its uniqueness piques interest – spreads faster.

In addition, if a statistic comes without context, it should be up to the participants in the decision process to frame it.

How many straws per day is too many? The number is meaningless without a threshold for how many straws we should be using, or a scale for measuring the seriousness of the problem.

Overall, it’s a good thing that kids are concerned about the environment. They could be involved with straws in worse ways.

Additional places for finding the people and ideas from this article:
Deb Roy: @DKRoy, faculty page
Sinan Aral: @SinanAral, faculty page

Are we “blind”? Or “focused”? Is that good or bad?

A recent article in Aeon by Teppo Felin offers a reinterpretation of one of the most famous and influential findings in social science: the 1999 invisible-gorilla experiment, conducted by Daniel Simons and Christopher Chabris.

The experiment, in case you’re unfamiliar:

  • Subjects are instructed to watch a short video and count the number of basketball passes by players in white shirts. (It takes just a minute to watch the video and take the test.)
  • In the video, players in black shirts are also passing a basketball. Then, a person wearing a gorilla suit walks among, and past, the players.
  • Seventy percent of the subjects never see the gorilla.

As Felin points out (quoting Daniel Kahneman’s interpretation of the experiment), a central tenet of behavioral economics is that “humans are ‘blind to the obvious, and that we are also blind in our blindness.'”

We naturally think that being oblivious to the gorilla is a flaw.

Felin’s position, developed in detail in the article, is that missing the gorilla isn’t blindness at all but a feature of human information processing.

If you watched the video and weren’t counting the passes, you’d probably see the gorilla.

But would you notice the letter “S” painted on the wall? Or the elevators? Or would you be able to recall the number of players (in total or by gender or attire)? And, by the way, you probably would not know the number of passes by the white team.

The results of the experiment may be illustrating focus.

“The alternative interpretation says that what people are looking for – rather than what people are merely looking at – determines what is obvious. Obviousness is not self-evident. Or as Sherlock Holmes said: ‘There is nothing more deceptive than an obvious fact.’ This isn’t an argument against facts or for ‘alternative facts’, or anything of the sort. It’s an argument about what qualifies as obvious, why and how. See, obviousness depends on what is deemed to be relevant for a particular question or task at hand. Rather than passively accounting for or recording everything directly in front of us, humans – and other organisms for that matter – instead actively look for things.”

Directed observation and selective attention could just as easily be viewed through a positive frame.

“Knowing what to observe, what might be relevant and what data to gather in the first place is not a computational task – it’s a human one. The present AI orthodoxy neglects the question- and theory-driven nature of observation and perception. The scientific method illustrates this well. And so does the history of science. After all, many of the most significant scientific discoveries resulted not from reams of data or large amounts of computational power, but from a question or theory.”

Viewed through this frame, the experiment casts doubt on a seemingly compelling argument in favor of AI over humans: AI wouldn’t miss the gorilla.

But that may actually be an argument FOR humans. An article in The Verge by Russell Brandom on self-driving cars includes observations by Gary Marcus likewise suggesting that, for all AI’s power, it still has weaknesses in direction, focus, judgment, generalization, and novelty:

“But deep learning requires massive amounts of training data to work properly, incorporating nearly every scenario the algorithm will encounter. Systems like Google Images, for instance, are great at recognizing animals as long as they have training data to show them what each animal looks like. Marcus describes this kind of task as ‘interpolation,’ taking a survey of all the images labeled ‘ocelot’ and deciding whether the new picture belongs in the group.

“Engineers can get creative in where the data comes from and how it’s structured, but it places a hard limit on how far a given algorithm can reach. The same algorithm can’t recognize an ocelot unless it’s seen thousands of pictures of an ocelot – even if it’s seen pictures of housecats and jaguars, and knows ocelots are somewhere in between. That process, called ‘generalization,’ requires a different set of skills.”

The Aeon article (as well as the Verge article) offers valuable contrarian viewpoints.

Some of the things we consider as cognitive flaws may actually be virtues – virtues we are far from being able to program into machines.

Additional places for finding the people and ideas from this article:
The Invisible Gorilla on

An example of the hidden role of luck in what we observe in the world

Several years ago, Don Cheadle described to the Hollywood Reporter the five stages of an actor’s career:

  1. “Who’s Don Cheadle?”
  2. “Get me Don Cheadle.”
  3. “Get me a Don Cheadle type.”
  4. “Get me a young Don Cheadle.”
  5. “Who the hell is Don Cheadle?”

This observation has been around almost as long as Hollywood, a commentary on the ebbs and flows of a creative career.

We all want to be in the “Get me Don Cheadle” phase. Whether we get there (and how long we stay) involves a lot of skill, but also depends on a lot of luck – and we tend to overlook the luck element.

A recent study sheds some light on the role of luck in our lives, and how hot streaks can play an outsized role in our assessments of someone’s professional merit.

Specifically, Lu Liu and colleagues looked at hot streaks (“a specific period during which an individual’s performance is substantially better than his or her typical performance”) in three creative careers: artists, film directors, and scientists.

They concluded that, in all three fields, success was concentrated in hot streaks.

The interesting part is that the hot streaks were associated not with a change in productivity but with a substantial increase in the impact of the work.

“The hot streak emerges randomly within an individual’s sequence of works, is temporally localized, and is not associated with any detectable change in productivity. We show that, because works produced during hot streaks garner substantially more impact, the uncovered hot streaks fundamentally drive the collective impact of an individual, and ignoring this leads us to systematically overestimate or underestimate the future impact of a career.”

This fits with numerous items I’ve brought up in the newsletters and tweets about how success has an element of chance that we minimize or ignore:

The Matthew effect in early-career success in getting grants – A study earlier this year by Thijs Bol and colleagues found that early-career grant winners just above the funding threshold do better than those just below, and that gap grows throughout their careers.

The combination of luck and skill in a lifetime of outcomes – In a forty-year simulation of life outcomes described in an Amie Gordon article in Psychology Today, an early lucky break makes a big difference in overall outcomes.

In a purely random game, the distribution of outcomes can be mistaken for skill – consider a simulation of the give-a-dollar-take-a-dollar game (45 people each start with $45, and each randomly gives $1 to someone else, once per minute, for 5,000 minutes). There’s zero skill in the game, but the richest player ended up with 10% of all the money and the top three players ended up with 25%.
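The give-a-dollar game is easy to try for yourself. This is a minimal sketch (players give sequentially within each minute rather than all at once, which doesn’t change the character of the result, and the exact shares vary run to run):

```python
import random

def simulate(players=45, stake=45, rounds=5000, seed=0):
    """Give-a-dollar game: each round, every player holding at least $1
    gives $1 to another player chosen uniformly at random."""
    rng = random.Random(seed)
    wealth = [stake] * players
    for _ in range(rounds):
        for i in range(players):
            if wealth[i] > 0:
                j = rng.randrange(players - 1)
                if j >= i:
                    j += 1  # shift index to skip giving to yourself
                wealth[i] -= 1
                wealth[j] += 1
    return sorted(wealth, reverse=True)

wealth = simulate()
total = sum(wealth)  # money is only moved, never created: always $2,025
print(f"richest player: {wealth[0] / total:.0%} of all money")
print(f"top three:      {sum(wealth[:3]) / total:.0%}")
```

Every player follows an identical, purely random rule, yet the final wealth distribution is sharply unequal – exactly the trap of reading skill into outcomes.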

Additional places for finding the people and ideas from this article:
Bol’s colleagues in the study were Mathijs de Vaan and Arnout van de Rijt. It was published in the Proceedings of the National Academy of Sciences (@PNASNews)
Amie Gordon: Psychology Today page

Another excellent example of the significant influence of luck in our lives 

Nick Maggiulli’s blog regularly contains insights on applying behavioral-science concepts to investing and the operation of financial markets.

In a recent blog, “The Right Place, The Right Time,” he points to several good examples of the importance of clearly-outside-our-control timing.

One that struck me: you can have entire “lucky decades” in investing – an area in which countless professionals devote their lives to developing their skill.

It’s no secret that markets fluctuate, but he points out that the decade in which you start investing gives you a giant (and permanent) advantage or disadvantage:

As Nick points out, “For example, if you had invested from 1960-1980 and beaten the market by 5% each year, you would have made less money than if you had invested from 1980-2000 and underperformed the market by 5% a year.”
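The arithmetic behind this claim is plain compounding. A minimal sketch with hypothetical era returns (6% and 17% annualized are assumptions for illustration; the actual historical figures differ):

```python
# Hypothetical annualized market returns for two 20-year eras;
# the exact historical numbers differ, but the arithmetic is the point.
lean_era, boom_era = 0.06, 0.17    # assumed market return per year
edge = 0.05                        # skill edge (or deficit) vs. the market

skilled_in_lean = (1 + lean_era + edge) ** 20    # beat a weak market by 5%
unskilled_in_boom = (1 + boom_era - edge) ** 20  # lag a strong market by 5%

print(f"beat the market in a lean era: {skilled_in_lean:.1f}x")   # → 8.1x
print(f"lag the market in a boom era:  {unskilled_in_boom:.1f}x") # → 9.6x
```

Under these assumed numbers, the unskilled investor who happened to start in the boom era ends up richer than the skilled investor who started in the lean one – the era, not the edge, dominates.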

(Full disclosure: Maggiulli’s application of this nicely incorporates several concepts from Thinking in Bets.)

Expanding flowers, from Akiyoshi Kitaoka