“Henry Kissinger Warns That A.I. Could End Human Enlightenment” – Annie’s Newsletter, May 25th, 2018


What’s the likelihood he’s correct?

Henry Kissinger recently wrote a thoughtful piece about artificial intelligence in @TheAtlantic. The article offers a dark view, suggesting that we are completely unprepared for the impact AI will have on humanity.

Given my recent obsession with the way self-driving cars are being covered in the press and how perception of the safety of these vehicles is affecting policy decisions, I found this passage to be of particular interest:

Is this really a valid “indictment” of machine reasoning versus human reasoning? Many of the pieces I’ve read that are skeptical of the safety of self-driving cars fail to compare that safety against the safety of our current technology: human-driven cars. (Rather than repeat my reasoning, you can review it in the April 27 and May 11 editions of the newsletter, and in an article I wrote for

In Kissinger’s hypothetical, if an autonomous vehicle must choose between killing a grandparent and killing a child, wouldn’t a human driver face the exact same decision in that situation? Whether the vehicle is driven by AI or by a human, one of the two will die.

So why does it feel worse when the autonomous vehicle kills the grandparent or the child than when a human commits the same act? Isn’t Kissinger saying that it’s worse when the driverless car does it?

Why is it so scary to us that a truthful answer for the machine would be that it followed mathematical and not human principles?

Kissinger reasons that because you can understand the human’s choice, somehow that’s better.

(Never mind that believing we really understand why humans make the choices that they do is illusory.)

It seems easy enough to make the opposite assessment, that we should prefer a vehicle guided by mathematical principles rather than human choice. I mean, wouldn’t you rather drive over a bridge built on mathematical principles than the world’s greatest architect’s non-math-aided judgment?

I think the issue may be that a human driver has the option of saying, “I didn’t mean to do it. It was out of my hands.” And we accept that as an answer. In fact, we may even feel empathy for the driver for being forced into such a choice.

It feels better to us that we can view the human’s action as unintentional.

The autonomous vehicle doesn’t get that benefit of the doubt, even when faced with the exact same choice that will result in a death.

A final point: the self-driving car likely has a third option available that a human driver would be unlikely to execute in the moment: to save both the grandparent and the child and kill the person in the vehicle instead.

Imagine the car can swerve right and hit a kid, or swerve left and hit grandpa … or go straight and ram into the tree. In some instances, the rational choice might be to ram into the tree, but the mathematical logic makes it a more likely choice for an autonomous vehicle than for a human with self-preservation instincts at work.

Scary, but interesting.

A hair salon?

Just as Kissinger offered an apocalyptic view of AI, Google caused an optimistic stir with its demonstration of Google Duplex, a cheery AI assistant that can call and make appointments.

Leave it to AI skeptic @GaryMarcus and Ernest Davis to offer a counterpoint to the hype in a piece called, “Why A.I. Is Harder Than You Think,” in @NYTOpinion.

If you’re gloomy about Kissinger’s vision, this competing information about the future suggests that it may be quite a long time before our AI overlords make us as relevant as the Incas are today.

The bottom line: We may want to hold off on declaring the triumph of the machines, regardless of whether your view of that triumph is utopian or dystopian.

Watch the video. In it you’ll get to listen to Google Duplex making a phone call and scheduling a hair salon appointment. The human at the other end apparently didn’t suspect she was talking with a computer, and everybody went nuts.

Marcus and Davis argue that this limited capability isn’t Step #1 in the march of machine dominance. It’s actually just the best we can do right now.

Google Duplex can sound human in a conversation only after deep training in “closed domains,” where the language program is limited to certain subjects and the choices are tightly constrained. This is because AI has yet to tackle the infinite complexities of language.

As Marcus and Davis point out, “…mastering a Berlitz phrase book doesn’t make you a fluent speaker of a foreign language. Sooner or later the non sequiturs start flowing.”

Marcus and Davis paint as bleak a view of AI as Kissinger, but for different reasons. Kissinger imagines the rise of the machines to usurp the human race. Marcus and Davis declare that the machines are very far from even getting beyond making dinner reservations:

After the article appeared, I tweeted it and a discussion broke out (which included @GaryMarcus) in my replies. I found it super interesting, so I am sharing it here.

Curating a combination of similar items from good friends, good correspondents, and good analyses

On May 18, @Gautam__Baid, a portfolio manager in global equities, wrote me a kind tweet after reading Thinking in Bets:

I appreciate his kind words and especially thank him for reminding me of the great Charlie Munger quote, “Life is a whole series of opportunity costs.”

Every choice we make (even the choice to stay the course) means that we are foregoing all other choices. Each decision costs us the opportunity to make any other decision. We often don’t view sticking with the status quo as “a decision,” so we are likely to overlook the opportunity costs associated with not making a change.

In quick order, I got two more reminders on related aspects of opportunity costs. First, I read “Judge the value of what you have by what you had to give up to get it” by @TimHarford.

The piece is a good reminder of how we tend to neglect opportunity costs in decision making.

Every decision is a bet on a particular set of futures. When we make a decision, we are betting that the future resulting from it will, on average, be better than the future resulting from any other decision we could have made (taking into account limited resources). That can involve anything from whether we stay at our job or move to a different city for a different one, to whether we order the chicken or the fish at a restaurant.

Have you ever ordered the chicken and regretted the choice when the chicken was dry, wishing you had ordered the fish instead? Haven’t we all?

That is opportunity cost.

When we think about decisions this way, the importance of examining what you might be giving up when you decide becomes clearer.
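For readers who like to see the betting frame made concrete, here is a minimal sketch (my own toy illustration, not from the newsletter) of a menu choice as a bet: each option is a set of possible futures with probabilities and payoffs (the numbers are invented), and the opportunity cost of a choice is the best expected value we gave up.

```python
# Toy model: each option maps to (probability, payoff) pairs for possible futures.
# The payoffs and probabilities here are invented purely for illustration.
options = {
    "chicken": [(0.7, 8), (0.3, 2)],   # 30% chance the chicken is dry
    "fish":    [(0.9, 7), (0.1, 3)],
}

def expected_value(outcomes):
    """Average payoff, weighted by how likely each future is."""
    return sum(p * payoff for p, payoff in outcomes)

evs = {name: expected_value(o) for name, o in options.items()}
choice = max(evs, key=evs.get)

# Opportunity cost of the choice: the best expected value among the forgone options.
opportunity_cost = max(ev for name, ev in evs.items() if name != choice)

print(f"choice={choice}, EVs={evs}, opportunity cost={opportunity_cost}")
```

On these made-up numbers the fish has the higher expected value (6.6 vs. 6.2), so ordering it is the better bet before the fact, and the chicken’s 6.2 is what choosing the fish “costs” us. Regret after a dry chicken doesn’t mean the bet was wrong; it just makes the forgone option vivid.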

At almost the same time, a friend sent me an article from @FastCompany by @Vivian_Giang. We had been talking about risk-aversion and how it causes people to over-value the status quo and under-value what they may be giving up by staying the course.

The article, apropos of the conversation we were having, offers a pretty extreme position in favor of NOT staying the course by job jumping: “You Should Plan On Switching Jobs Every Three Years For The Rest Of Your Life.”

The article goes against conventional wisdom and says you should be driven by opportunity costs to change jobs frequently.

Many good things come out of job jumping that you wouldn’t necessarily think about:

  • Better overall pay;
  • A higher learning curve;
  • Greater motivation to make a good impression in a short time;
  • More diverse skills; and
  • (Ironically) more career stability (because you develop job-finding skills and aren’t beholden to a particular employer for your livelihood).

When you’re deciding whether to stay in a job, treat it like a new decision. Ask yourself, “What am I giving up by sticking with the job I have?”

Shane Snow tackles reconciling dissent and differences with unity and harmony

@ShaneSnow, founder of Contently and author of Dream Teams: Working Together Without Falling Apart (coming out June 5), recently appeared on the @MentorBoxOnline podcast with Tyler Lay.

The discussion centered on the paradoxes and pitfalls of teams in the workplace (and elsewhere).

In the first part of the interview, Snow sets the foundation for the inherent difficulty in balancing the benefits of unity and commonality with the benefits of diversity and dissent. A group can be smarter than its smartest member, but “work culture” often means putting diverse people together and squashing out their differences.

Beyond the innovation pitfalls of a homogeneous workplace, justice and moral concerns have encouraged or required organizations to diversify by gender, race, and age. But when organizations make all these different people “fit,” we’re left with all of the discomfort and none of the benefits of diverse viewpoints.

He points to a pair of research findings I considered especially interesting. First, when you get to the board-of-directors level, the best boards are those with gender diversity. But when you look at the lower levels in companies, diverse companies are less productive than their homogeneous counterparts.

At the board level, members are encouraged to speak their minds and diverse opinions are valued.

At lower levels in the workplace, friction is created because the team-player dynamic is at odds with expressing disagreement. People with dissenting viewpoints feel shut out, or their input creates conflict.

Getting viewpoint diversity in the group is only the first step toward making it greater than the sum of its parts. We also have to create an environment where it is safe to disagree.

And we can’t assume that will happen simply by getting different views in the same room.

In fact, exposure to diverse viewpoints can backfire when not paired with an environment of open-mindedness about sharing them.

For example, exposing voters to opposing-party views on Twitter does not moderate their political views and, in some instances, the voters’ views become more extreme.

Take a look at my recent newsletter items on the election study, on studies of failed diversity training programs, and on epistemic bubbles vs. echo chambers for more examples and detail about this backfire effect.

Productive teams don’t naturally happen. As Shane Snow points out, traditionally winning on a team has meant unity and finding common ground. When unity is defined as “being on the same page,” the goal of unity is at odds with viewpoint diversity.

When the team redefines what winning means by rewarding open-mindedness, expressing diverse views, admitting mistakes, and changing one’s mind, the team redefines what it means to be “part of the team.”

From @AkiyoshiKitaoka