GROUP DECISION DISASTER (SQUARED):
A slow process (meeting rarely) + poor transmission of information (majority-rule decisions)
@QuartzAtWork published a great piece last week, “How to design teams to improve almost any decision-making process” by Oliver Staley (@OStaley). The piece focuses on three elements that contribute to the effectiveness of group decision making: team size, team rules, and team pressure.
Here are my takeaways:
Team size:
Bigger is not necessarily better. It is true that, for some types of decisions, soliciting a lot of opinions is helpful, but at a certain point, the combination of a difficult decision and a lot of opinions backfires.
Team rules:
The main options are deciding by majority rule or deferring to an expert. How often the group meets also matters.
This is why crowdsourcing (taking an average of guesses) works well for estimating the weight of an ox at a county fair but not so well for guessing the population of Johor Bahru: averaging cancels the independent errors of people who each know a little, but it can’t conjure knowledge the crowd doesn’t have.
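If it helps to see the statistics behind that, here’s a minimal Python sketch (every number in it is invented for illustration): averaging cancels the independent errors of an informed crowd, but it can’t cancel a bias the whole crowd shares.

```python
import random
from statistics import mean

random.seed(0)

# All numbers here are made up for illustration.
TRUE_OX_WEIGHT = 1198        # lbs: fair-goers have real cues (size, breed) to go on
TRUE_POPULATION = 1_000_000  # stand-in figure for a city the crowd knows nothing about

# Informed crowd: individually noisy guesses, but roughly unbiased around the truth.
ox_guesses = [random.gauss(TRUE_OX_WEIGHT, 200) for _ in range(800)]

# Uninformed crowd: guesses anchored far from the truth, a bias everyone shares.
population_guesses = [random.gauss(250_000, 150_000) for _ in range(800)]

print(f"Ox weight:  truth {TRUE_OX_WEIGHT:,}, crowd average {mean(ox_guesses):,.0f}")
print(f"Population: truth {TRUE_POPULATION:,}, crowd average {mean(population_guesses):,.0f}")
```

The same failure mode applies to majority rule on complex decisions: when the group shares a blind spot, aggregating opinions just averages the blind spot.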
Team pressure:
Fear motivates on simple tasks but can be counterproductive on difficult ones. If you want to get people to pound more nails into a board, pressure is a good motivator. But if you want to get people to solve complex math problems more quickly, pressure will work against you.
This all points us to the worst group-decision process: a group that meets infrequently, tackles complex decisions, and decides by majority rule.
HAPPY BIRTHDAY BERTRAND RUSSELL
A celebration of decision-making philosophy and a remarkable (and controversial) life
Bertrand Russell was born on May 18, 1872. This week on Twitter, I featured five of his most famous quotes.
- “Man is a credulous animal, and must believe something; in the absence of good grounds for belief, he will be satisfied with bad ones.” (From Unpopular Essays.)
- “Fear is the main source of superstition, and one of the main sources of cruelty. To conquer fear is the beginning of wisdom, in the pursuit of truth as in the endeavour after a worthy manner of life.” (Also from Unpopular Essays.)
- “To teach how to live without certainty, and yet without being paralyzed by hesitation, is perhaps the chief thing that philosophy, in our age, can still do for those who study it.” (From A History of Western Philosophy.)
- “Mathematics, rightly viewed, possesses not only truth, but supreme beauty – a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show.” (From “The Study of Mathematics,” collected in Mysticism and Logic.)
- “The fact that an opinion has been widely held is no evidence whatever that it is not utterly absurd; indeed in view of the silliness of the majority of mankind, a widely spread belief is more likely to be foolish than sensible.” (From Marriage and Morals.)
WILL HUMANS FALL IN LOVE WITH SEX-BOTS IN THE FUTURE?
This is (mostly) about betting and predictions
Who would have thought that an article about the future of sex robots would offer a great example of scenario planning in action?
If you’re looking for examples to stretch your understanding of premortems and backcasting, I think you’ll especially enjoy “Sex robots are coming. We might even fall in love with them” from @VoxDotCom, an interview with philosophy professor Lily Eva Frank by @SeanIlling.
In the discussion, Frank essentially assesses the proposition of human-robot love by thinking in bets, evaluating the possible futures through a backcast and premortem.
- The backcast: If we project a future of human-robot love, what had to have happened for true love between robots and humans to occur?
- The premortem: If we project a future where human-robot love doesn’t take hold, what reasons would have prevented it?
She defines some aspects of romantic love, considers what the technology would have to be capable of to reproduce that, and mentions a few elements that are more likely than you’d think (the technology will get there; we have no trouble anthropomorphizing), and some that are possible sticking points.
The problem seems to be a paradox about programming “mutual love.” It’s not hard to believe that we’ll be able to program a robot to love us, but are WE capable of long-term love with something that loves us only because it has to?
To be fair, we could even get to a level of engineering where we could program the robot with an element of free will, but who wants to buy a sex robot that can dump you?
Figure that out, and your future in the sex-robot biz could be unlimited.
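If you want to try the same exercise on a decision of your own, here’s a minimal Python sketch of holding the two question sets side by side (the class and the listed reasons are my own hypothetical distillation of the interview’s themes, not anything Frank said verbatim):

```python
from dataclasses import dataclass, field

# A minimal sketch: the class name and the listed reasons are hypothetical,
# distilled from the themes discussed above.

@dataclass
class ScenarioPlan:
    proposition: str
    backcast: list = field(default_factory=list)   # it happened: what had to occur?
    premortem: list = field(default_factory=list)  # it failed: what prevented it?

plan = ScenarioPlan("Humans form lasting romantic love with robots")

# Backcast: project the future where it happened, then work backward.
plan.backcast += [
    "Technology reproduced the behaviors we read as love",
    "Our readiness to anthropomorphize did the rest",
]

# Premortem: project the future where it didn't, then list the reasons why.
plan.premortem += [
    "We couldn't sustain love for something that loves us only because it has to",
    "Robots with enough free will to choose us could also dump us",
]

for label, reasons in [("Backcast", plan.backcast), ("Premortem", plan.premortem)]:
    print(f"{label}: {plan.proposition}")
    for reason in reasons:
        print(f"  - {reason}")
```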
“BUT THE DATA….”
Three magic words signaling that we’re using data to shield us from responsibility for a bad outcome
I recently had a conversation with the fabulous Michelle Dennedy (@MDennedy), Cisco’s Chief Privacy Officer, on her podcast #PrivacySigmaRiders, from @CiscoTrust. You can listen to the podcast here or read the transcript.
Michelle pushed me to apply concepts from my book to new areas.
Her focus is on data, an issue of obvious relevance to decision-making, but not one laid out in any detail in Thinking in Bets.
So here is the question that we worked together to answer:
How can a data-driven approach to decisions – generally considered helpful in making rational, informed decisions – become another way to amplify bias, a shield that helps us deflect blame for bad outcomes, rather than a decision tool?
I hope you’ll take a listen to the podcast or read pages 7-9 of the transcript. I’m just starting to develop my thoughts on how we use data as a shield to protect us from resulting. I’d love to hear readers’ thoughts and experiences on the topic – so, please send me a tweet.
In this era of more data, more computing power, advancing analytics, and more possibilities for data-driven decision making, how we use data will become a correspondingly bigger and more crucial conversation.
Data doesn’t exist in a vacuum. It is not objective. It takes humans to collect it, aggregate it, analyze it, and interpret it.
When are we using data to find the truth?
And when are we using it as a hack to support pre-existing beliefs and conclusions?
Based on our well-documented tendency toward resulting, when are we using data to deflect the blame for a bad outcome?
“But the data supported the decision. I just did what the data told me was correct” would be a valuable addition to any decision swear jar.
ANOTHER PODCAST TO RECOMMEND
Julia Galef’s Rationally Speaking
@JuliaGalef is a passionate voice for the cause of improving rationality in decision making. She’s co-founder of the Center for Applied Rationality and has hosted the Rationally Speaking Podcast since 2010. She also runs the Update Project, helping decision-makers develop more accurate models of the world.
So, you might guess that I had a fangirl moment when she asked me to be a guest on her podcast.
You can check out the interview by listening or by reading the transcript.
The conversation did not disappoint. In particular, she focused on whether someone can simultaneously project confidence and express uncertainty, which led us into a wonderful exploration of the tension between people’s desire for simple, definitive explanations (which pundits tend to satisfy) and the ability to communicate confidence while acknowledging uncertainty.
Aside from checking out her podcast, I also highly recommend taking a listen to her as a guest on The Knowledge Project, a @FarnamStreet podcast. The episode is “The Art of Changing Minds.”
FESTINGER’S WHEN PROPHECY FAILS DOES NOT FAIL TO STAY RELEVANT
h/t Jay Van Bavel (and whoever recognizes the movie or episodic-series potential here)
In 1956, Leon Festinger, Henry Riecken, and Stanley Schachter published When Prophecy Fails. (If you read books on Kindle, I just noticed you can purchase it in that format for $1.12.)
It was a landmark psychological study of belief maintenance, group behavior, and cognitive dissonance. It came about when researchers in those areas stumbled upon a doomsday cult, infiltrated it, and watched members use the failure of the predicted doomsday to arrive (several times) to strengthen their belief that they were in close contact with powerful space aliens.
@JayVanBavel recently posted several tweets about the book, calling it “the most relevant psychology book for the modern political era.”
Its application is obvious if you think our modern politics and tribal instincts have combined to create echo chambers where each group pursues an increasingly divergent “reality.”
Van Bavel included screenshots of the first two pages, which are enough to convince you (if you’re unfamiliar with the work) that you’re about to read something important. Those pages define what we now call cognitive dissonance and list five conditions under which disconfirmation of a belief can lead us to increase our fervor in its truth:
- The belief is deeply held and relevant to behavior.
- The belief holder has taken actions based on the belief that are difficult to undo.
- The belief is sufficiently specific that real-world events can, at some point, unequivocally refute it.
- The belief holder recognizes this undeniable disconfirming evidence.
- The belief holder has social support to continue in the belief despite the disconfirming evidence.
Van Bavel focused on the book’s political relevance. It is also one of the most influential books ever written on group psychology, belief maintenance, and cognitive dissonance.
Finally, it documents a darkly comic adventure that most researchers never experience: going undercover inside a doomsday cult to watch it from within.
THIS WEEK’S ILLUSION: THE IMPOSSIBLE TRIDENT
Also called the Devil’s Tuning Fork
From IllusionsIndex.org (@UofGCSPE)