
(13)
Neuroscience is starting to make inroads into that question, too.

(14)
The Ultimatum game was first developed in 1982, and has been replicated repeatedly by different researchers using different variants in different cultures; there are hundreds of academic papers about the Ultimatum game.

Here's how the game works. Two strangers are put in separate rooms and told they will divide a pot of money between them. They can't meet each other, and they can't communicate in any way. Instead, one of the subjects gets to divide the money any way he wants. That division is shown to the second subject, who gets to either accept or reject the division. If he accepts it, both subjects get their shares. If he rejects the division, neither subject gets a share. After this single division task, the experiment ends, and the two subjects leave via separate doors, never to meet.

Game theory predicts, and a rational economic analysis agrees, that the first player will make the most unfair division possible, and that the second player will accept that unfair division. Here's the logic. The second player is smart to accept any division, even the most lopsided one, because some money is better than no money. And the first player, knowing that the second player will accept any division, is smart to offer him the most lopsided division possible. So if there's $20 to divide, the first player will propose a $19/$1 split and the second player will accept it.

That makes sense on paper, but people aren't like that. Different experiments with this game found that first players generally offer between a third and a half of the money, and that the most frequent offer is a 50–50 split. That's right: they give money to strangers out of their own pocket, even though they are penalizing themselves economically for doing so, in an effort to be fair. Second players tend to reject divisions that are not at least reasonably fair; about half of the players turn down offers of less than 30%.
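Here's the arithmetic of both outcomes as a minimal Python sketch. The $20 pot and the $19/$1 split come from the text; the function and its names are my own illustration, not the researchers' protocol.

    # Ultimatum game payoffs: the proposer offers a split; the responder
    # either accepts (both get paid) or rejects (neither does).
    def ultimatum(pot, offer, accepts):
        if accepts:
            return pot - offer, offer
        return 0, 0  # rejection leaves both players with nothing

    # The "rational" prediction: offer the minimum, accept anything,
    # because $1 is better than $0.
    print(ultimatum(20, 1, accepts=True))   # (19, 1)

    # What people actually do: offers cluster near an even split, and
    # responders reject offers they consider unfair, at a cost to themselves.
    print(ultimatum(20, 10, accepts=True))  # (10, 10)
    print(ultimatum(20, 4, accepts=False))  # (0, 0)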

This experiment has been conducted with subjects from a wide variety of cultural backgrounds. It has been conducted with large amounts of money, and in places where small amounts of money make a big difference. Results are consistent.

(15)
The Dictator game is like the Ultimatum game, but with one critical difference: the second player is completely passive. The first player gets to divide the money, and both players receive their share. If the first player wants to keep all of it, he does. The second player has no say in the division or whether or not it is accepted.

In the Ultimatum game, the first player had to worry if the second player would penalize him. The Dictator game removes all of that second-guessing. The first player gets a pile of money, hands the second player some, then keeps the rest. He is in complete control. Even in this game, people aren't as selfish as rational economic theory predicts. In one experiment, first players split the money evenly three-quarters of the time. Other experimental results are more lopsided than that, and the first player's division tends to be less fair than in the Ultimatum game, but not as unfair as it could be.
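Stripped to code, the Dictator game is even simpler: the first player's division is final, so any transfer at all is pure generosity. A sketch, with a $20 pot of my own choosing:

    # Dictator game: player 1 keeps what he wants; player 2 has no veto.
    def dictator(pot, kept):
        return kept, pot - kept

    print(dictator(20, 20))  # (20, 0): the fully selfish division
    print(dictator(20, 10))  # (10, 10): the even split seen in experiments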

(16)
In the Trust game, the first player gets a pile of money. He can either keep it all or give a portion to the second player. Any money he gives to the second player is increased by some amount (generally 60%) by the researchers, then the second player can divide the increased result between the two players.

Assume $10 is at stake here. If the first player is entirely selfish, he keeps his $10. If he is entirely trusting, he gives it all to the second player, who ends up with $16. If the second player is entirely selfish, he keeps the $16. If he is completely fair, he gives the first player $8 and keeps $8.
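Here's that arithmetic as a small Python sketch, using the text's $10 stake and 60% top-up; the function and parameter names are mine, not the experimental protocol's.

    # Trust game payoffs: whatever player 1 sends is grown by 60% before
    # player 2 decides how much to return.
    def trust_game(stake, sent, returned, growth=0.6):
        grown = sent * (1 + growth)   # researchers top up the transfer
        assert 0 <= returned <= grown
        return stake - sent + returned, grown - returned

    print(trust_game(10, sent=0,  returned=0))  # (10, 0.0): entirely selfish
    print(trust_game(10, sent=10, returned=0))  # (0, 16.0): trusting, betrayed
    print(trust_game(10, sent=10, returned=8))  # (8, 8.0): trusting, repaid fairly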

Rational economic behavior predicts a very lopsided result. As in the Dictator game, the second player would be smart to give no money to the first player. And the first player, knowing this would be the second player's rational decision, would be smart to not give any money to the second player. Of course, that's not what happens. First players give, on average, 40% of the money to the second player. And second players, on average, give the first player back a third of the multiplied amount.

(17)
In a Public Goods game, each player gets a small pile of money. It's his to keep, but he can choose to pool some portion of it together with everyone else's. The researchers multiply this pool by a predetermined amount, then evenly divide it among all players.

A rational economic analysis of the game—that is, an analysis that assumes all players will be solely motivated by selfish interest or the bottom line—predicts that no one will contribute anything to the common pool; it's a smarter strategy to keep everything you have and get a portion of what everyone else contributes than it is to contribute to the common pool. But that's not what people do: they generally contribute 40–60% into the common pool. That is, people are generally not prepared to cooperate 100% and put themselves at the mercy of those who defect. But they're also generally not willing to be entirely selfish and not contribute anything. Stuck between those opposing poles, they more or less split the difference and contribute half.
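A sketch of those incentives, with an endowment, group size, and multiplier of my own choosing (only the 40–60% contribution figure comes from the text):

    # Public Goods payoffs: each player keeps whatever he didn't contribute,
    # plus an equal share of the multiplied common pool.
    def public_goods(contributions, endowment=10, multiplier=2.0):
        share = sum(contributions) * multiplier / len(contributions)
        return [endowment - c + share for c in contributions]

    # A lone free rider among cooperators comes out ahead, which is why
    # pure self-interest predicts zero contributions...
    print(public_goods([10, 10, 10, 0]))    # [15.0, 15.0, 15.0, 25.0]
    # ...even though universal cooperation beats universal defection.
    print(public_goods([0, 0, 0, 0]))       # [10.0, 10.0, 10.0, 10.0]
    print(public_goods([10, 10, 10, 10]))   # [20.0, 20.0, 20.0, 20.0]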

(18)
One of the theories originally advanced to explain the first player's behavior in the Ultimatum game was fear of rejection. According to that theory, he is motivated to offer the second player a decent percentage of the total because he doesn't want the second player to penalize him by rejecting the offer. There's no rational reason for the second player to do that, but we—and presumably the first player—know he will. That explanation was proven wrong by the Dictator game.

Some researchers claim these experiments show that humans are naturally altruistic: they seek not only to maximize their own personal benefit but also the benefit of others, even strangers. Others claim that the human tendency at work in the different games is an aversion to being seen as greedy, which implies that reputation is the primary motivator.

Still other researchers try to explain results in terms of evolutionary psychology: individuals who cooperate with each other have a better chance of survival than those who don't. Today, we regularly interact with people we will never see again: fellow passengers on an airplane, members of the audience at public events, everyone we meet on our vacations, almost everyone we interact with if we live in a large city. But that didn't hold true in our evolutionary history. So while the Ultimatum, Dictator, and Trust games are one-time-only, our brains function as if we have a social network of not much more than 150 people, whom we are certain to meet again and again, often enough that the quality of our interactions matters in the long run.

(19)
We naturally gravitate toward fair solutions, and we naturally implement them: even when dealing with strangers, and even when being fair penalizes us financially. As one paper put it, “concerns for a fair distribution originate from personal and social rules that effectively constrain self-interested behavior.”

Joseph Henrich interviewed his subjects after Ultimatum game experiments and found that they thought a lot about fairness. First players wanted to do what was fair. Second players accepted offers they thought were fair, and rejected offers they thought were unfair. They would rather receive no money at all than reward unfairness.

In variants of the Ultimatum and Dictator games where the first player won his position by an act of skill—doing better on a quiz, for example—he tended to offer less to the second player. It worked the other way, too; if the second player won his position by an act of skill, the first player tended to give him more.

(20)
There's a variant of the Public Goods game where subjects are allowed to spend their own money to punish other players; typically, it's something like $3 deducted from the punished for every $1 spent by the punisher. In one experiment, two-thirds of the subjects punished someone at least once, with the severity of the punishment rising with the severity of the non-cooperation. They did this even if they would never interact with the punished player again.
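In code, the text's 3-to-1 punishment ratio looks something like this (the function and the dollar amounts beyond that ratio are my own illustration):

    # Costly punishment: the punisher pays `spend`; the target loses
    # three times that amount.
    def punish(payoffs, punisher, target, spend, ratio=3):
        payoffs = list(payoffs)
        payoffs[punisher] -= spend
        payoffs[target] -= ratio * spend
        return payoffs

    # Punishing the free rider from the sketch above costs the punisher
    # $2 and costs the defector $6.
    print(punish([15.0, 15.0, 15.0, 25.0], punisher=0, target=3, spend=2))
    # [13.0, 15.0, 15.0, 19.0]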

What's interesting is that the punishment works. Stingy players who have been punished are less stingy in future rounds of a Public Goods game—even if the punishers themselves aren't involved in those future rounds—and that behavior cascades to other players as well.

There's other research, with rewards as well as punishment, but the results are mixed; rewards seem to be less effective than punishment in modifying players’ behavior.

(21)
A variant of the Dictator game illustrates this. Instead of giving, the first player can take money from the second player. And in many cases, he does. The rationalization goes along the following lines. In the standard version of the Dictator game, first players understand that the game is about giving, so they figure out how much to give. In this variant, the game is about taking, so they think about how much to take. A variant of the Trust game, called the Distrust game, illustrates a similar result.

(22)
Lots of fraud is based on feigning group identity.

(23)
About three-quarters of people give half of the money away in the Ultimatum game, but a few keep as much as possible for themselves. The majority of us might be altruistic and cooperative, but the minority is definitely selfish and uncooperative.

(24)
To be fair, there is a minority of researchers who are skeptical that mirror neurons are all that big a deal.

(25)
This is called the prototype effect, and has ramifications far greater than this one example.

(26)
In many societies, sharing when you have plenty obligates others to share with you when you're in need.

(27)
Notice that the four work best at progressively larger group sizes. Direct reciprocity works best in very small groups. Indirect reciprocity works well in slightly larger groups. Network reciprocity works well in larger groups still. Group reciprocity works well in the largest: groups of groups. I don't know of any research that has tried to establish the different human group sizes in which these operate, and how those sizes compare to Dunbar's numbers.

(28)
The majority belief is that it was primarily kin selection that sparked the evolution of altruistic behavior in humans, although Martin Nowak and Edward O. Wilson have recently caused quite a stir in the evolutionary biology community by proposing group selection as the driving mechanism. One rebuttal to this hypothesis was signed by 137 scientists. I have no idea how this debate will turn out, but it is likely that all mechanisms have operated throughout human evolutionary history, and reinforced each other.

(29)
There's a lot here, and there have been many books published in the last few years on this general topic of neuropsychology: Michael Shermer's The Science of Good and Evil, Nigel Barber's Kindness in a Cruel World, Donald Pfaff's The Neuroscience of Fair Play, Martin Nowak's SuperCooperators, and Patricia Churchland's Braintrust. The last two are the best. There's also an older book on the topic by Matt Ridley.

Chapter 4

(1)
Very often, understanding how societal pressures work involves understanding human—and other animal—psychology in evolutionary terms, just as you might understand the function of the pelvis, the spleen, or male pattern baldness. This is evolutionary psychology, first proposed by Edward O. Wilson in 1975, which has really taken off in the last couple of decades. This is a new way of looking at psychology: not as a collection of behaviors, but as a manifestation of our species’ development. It has the very real potential to revolutionize psychology by providing a meta-theoretical framework by which to integrate the entire field, just as evolution did for biology over 150 years ago.

To be fair, the validity of evolutionary psychology research is not universally accepted. Geneticist Anne Innis Dagg argues both that the genetic science is flawed, and that the inability to perform experiments or collect prehistoric data renders the conclusions nothing more than Gould's “Just So Stories.”

However, evolutionary psychology is not only about genetic determinism. An evolutionary explanation for behavior does not equate to or imply the existence of a genetic explanation. Behaviors, especially human behaviors, are much more multifaceted than that. Certainly genes are involved in many of our psychological processes, especially those as deep-rooted as making security and trust trade-offs, but natural selection is possible with any characteristic that can be passed from parent to child. Learned characteristics, cultural characteristics, stories that illustrate model behavior, technical knowledge—all can be passed on. Evolutionary psychology is a mix of genetic and non-genetic inheritance.

(2)
It's called ecological validity. We are built for conditions of the past, when—for example—humans were worried about attack from large predators, not from small lead slugs from a gun 100 yards away in the dark. So the forehead protects us against blows from blunt objects, but is much less effective against bullets. Similarly, the skull is great protection for falls, but less effective against IEDs. The loss of ecological validity has meant the end of many species that could no longer adapt to changing conditions.

(3)
Of course, the cost of not paying that tax would be even more expensive. To take just one example, Douglass North wrote: “The inability of societies to develop effective, low-cost enforcement of contracts is the most important source of both historical stagnation and contemporary underdevelopment in the Third World.”
