Just look at the data!
What do a Hungarian-American mathematician, a German-Austrian economist, and an Israeli-American psychologist have to do with the Corona crisis?
(No, this is no setup for a joke, but a punchline is waiting for you nevertheless!)
John von Neumann (the mathematician, among a lot of other things) and Oskar Morgenstern (the economist) gifted humanity the mathematically provable rules for ideal decision-making under risky conditions in 1947.
We know how a perfectly rational decision-maker has to behave: weigh the outcomes by how likely they are to happen and pick the route that promises the greatest expected utility.
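To make that rule concrete, here is a minimal sketch in Python. The options and their numbers are invented purely for illustration; the point is the mechanics of the rule: weight each outcome's utility by its probability, sum, and take the option with the largest total.

```python
# A minimal sketch of expected-utility maximization.
# An option is a list of (probability, utility) pairs; the
# probabilities within each option must sum to 1. All numbers
# below are invented for illustration.

def expected_utility(outcomes):
    """Probability-weighted sum of utilities."""
    return sum(p * u for p, u in outcomes)

options = {
    "safe bet": [(1.0, 50)],              # 50 utils, guaranteed
    "gamble":   [(0.5, 120), (0.5, 0)],   # coin flip: 120 utils or nothing
}

# The perfectly rational decision-maker simply takes the maximum.
best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # -> "gamble" (expected utility 60 beats the certain 50)
```

A real human, as we are about to see, often takes the safe bet anyway. That gap between the rule and our behavior is what the rest of this post is about.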
The problem? We have since learned that human beings don’t act like that! Not at all! There are entire books written on how we fail to be rational.
Daniel Kahneman (the psychologist, at least according to me) has done just that. "Thinking, Fast and Slow", depending on how you look at it, is either a scathing rebuke of the idea of human rationality or a wonderful journey of introspection, of getting to know yourself and your flaws.
If you have ever complained about people "not being rational", I invite you to read that book, reflect on your own flaws, and contemplate this idea: we are irrational automatons most of the time, moist robots if you will. That includes you as well!
We are really good at rationalizing our behavior, but really bad at behaving rationally.
I will probably spend a few future posts on the most pronounced human oddities of "reasoning": confirmation bias, cognitive dissonance, affection bias, probability neglect, certainty bias, and our multiple selves. Scott Adams's "How to Fail at Almost Everything and Still Win Big" has multiple pages full of nothing but the names of known biases of human reasoning. There is ample material for self-exploration and for realizing how other people use your flaws in reasoning to make you do things they want you to do. Or, put alternatively: how to make them do things you want them to do. SPOILER: that almost never entails presenting an iron-clad logical argument.
Today, though, I'd like to showcase a specific flaw in our reasoning that is especially relevant for the current discussion about our strategy to beat COVID-19: framing.
“Why do you risk the lives of people?”
You have all heard similar accusations thrown around in the discussion about how to react to the virus. This accusation aims directly at the moral foundation of the accused. What person could be so rotten as to "risk" human lives?
Kahneman's long-time collaborator, Amos Tversky, tried to find out a long time ago:
This is known, I kid you not, as the "Asian disease problem". Please join me in this little experiment. I will ask you to make a difficult choice!
Imagine that the United States is preparing for the outbreak of an unusual Asian disease, which is expected to kill [600k] people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:
If program A is adopted, [200k] people will be saved.
If program B is adopted, there is a 1/3 probability that [600k] people will be saved and a 2/3 probability that no people will be saved. [KAHNEMAN, p. 368]
Now, STOP! Make your decision. Which route do you choose? Take your time!
Thanks. Take a deep breath! Now please consider the following pair of options:
If program A’ is adopted, [400k] people will die.
If program B' is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that [600k] people will die. [KAHNEMAN, p. 368]
Take your time to make the decision again. Diligently.
Thank you!
If we look closely at the options, it becomes clear that A and A' are identical.
In both cases 200,000 people live and 400,000 people die.
B and B’ are also identical. There is a 1/3 chance that all people live and a 2/3 chance that all people die. You didn’t notice, did you?
If you did not notice, you have fallen victim to the framing effect.
So did the public-health professionals to whom these pairs of choices were presented.
By a wide margin, they preferred A to B. But they also preferred B’ to A’.
We like to take the "certain bet" on saving lives in the first example, and we try to cheat death by taking the gamble in the second.
What would a perfectly reasonable decision-maker do?
Well, she would probably flip a coin.
A, A', B, and B' all have the same expected utility. Whatever you choose, you can expect 200,000 people to be saved and 400,000 people to die.
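You can verify that claim with a few lines of arithmetic. Here is a quick sketch, reusing the numbers from the problem above and converting the "die" framings of A' and B' into lives saved:

```python
# Expected lives saved under each program, out of [600k] people.
# A and B are framed as "saved", A' and B' as "die"; expressing
# the death framings as lives saved makes the equivalence visible.

TOTAL = 600_000

programs = {
    "A":  [(1.0, 200_000)],                 # 200k saved for sure
    "B":  [(1/3, 600_000), (2/3, 0)],       # 1/3: everyone saved
    "A'": [(1.0, TOTAL - 400_000)],         # 400k die for sure
    "B'": [(1/3, TOTAL - 0), (2/3, TOTAL - 600_000)],  # 1/3: nobody dies
}

for name, outcomes in programs.items():
    expected_saved = sum(p * saved for p, saved in outcomes)
    print(f"{name}: {expected_saved:,.0f} expected to be saved")

# All four print 200,000. The framing differs; the expectation does not.
```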
How does it feel to know that decisions, even important ones, are governed not by your rational faculties but by your moral intuition? How does it feel to know that this is true even when experts make decisions? Just as a reminder: you run into this problem even with exact estimates about the consequences of your actions. Do you think we have anything like that available to plot out our reaction to COVID-19? Seriously?
More importantly, how would you decide now that you know all of this?
It's not that easy, is it?
If you are having trouble answering this, you may have just learned something important: your moral intuitions are not nearly as flawless as you believed them to be. Or anywhere near as useful.
Another lesson: people who decide differently are probably not the monsters you imagine them to be.
Turns out you don't have to be particularly rotten to take a seemingly "risky" (i.e., uncertain) route. Joke's on us.
One last thing…
I’d like to leave you with this insight about human beings:
An unbiased appreciation of uncertainty is a cornerstone of rationality—but it is not what people and organizations want.
Extreme uncertainty is paralyzing under dangerous circumstances, and the admission that one is merely guessing is especially unacceptable when the stakes are high. Acting on pretended knowledge is often the preferred solution. [KAHNEMAN, p. 263]
Whom does that remind you of?
NOTES:
I only multiplied the numbers from [KAHNEMAN] by 1000 to get them into the range commonly discussed with respect to the COVID-19 pandemic. The structure of the problem is exactly the same.
SOURCES:
[KAHNEMAN] KAHNEMAN, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011. Kindle edition.