An Introduction to Evolutionary Ethics
Scott M. James

3.4 Conclusion

My aim in this chapter has been to describe, first, the important features of moral judgment and, second, an evolutionary explanation for those features. Making a moral judgment involves, first and foremost, an appeal to prohibitions. And these prohibitions seem to transcend merely legal or cultural norms. Moreover, our recognition of these norms involves being moved to act in accordance with them. The evolutionary account purports to explain these (and other) features by highlighting the advantages of cooperating in social interactions. But the value of cooperating, so the story goes, cannot be secured merely by having creatures like us desire cooperation. Instead, a system of moral judgment, with all of its attendant features, evolved as a powerful mechanism to keep us in line. This explanation receives further support from analyses of the structure of punishment. In the next chapter, we look at how punishment may have figured in moral thought and biological evolution.

Further Reading

Frank, Robert (1988) Passions within Reason: The Strategic Role of the Emotions (Norton).

Joyce, Richard (2006) The Evolution of Morality (MIT Press).

Ruse, Michael (1995) Evolutionary Ethics: A Phoenix Arisen. In P. Thompson (ed.), Issues in Evolutionary Ethics (SUNY Press).

Wilson, E.O. (1978) On Human Nature (Harvard University Press).

Wright, Robert (1995) The Moral Animal: Why We Are the Way We Are: The New Science of Evolutionary Psychology (Vintage).

Chapter 4
Just Deserts

One sole desire, one passion now remains
To keep life's fever still within his veins,
Vengeance!

(Thomas Moore, Poetical Works)

As we noted in the previous chapter, one of the defining features of moral thought is its connection to punishment: judging that someone has acted wrongly involves judging that he deserves to be punished. Any account of the development of moral thought must make sense of this feature. In this chapter we take up the issue of punishment, as well as some associated issues: reputation and moral emotion. This chapter, like the next, draws on a range of empirical work in economics and psychology. Among the questions researchers are focusing on are the following: When do people punish? Why do they punish? How might punishment benefit an individual or a group? And how is punishment related to one's reputation and feelings of guilt?

The evolutionary account of morality outlined in the previous chapter provides a rough explanation for this phenomenon. First, if individuals regarded violations of prohibitions as punishable offenses, then this would keep both oneself and others in line. If I know that my community is likely to deprive me of something I value if I act in ways that are prohibited, then this just reinforces my commitment to do the right thing. And likewise for every other member of the community. In this way, a common framework is established – or, if you like, a balance. The threat of punishment acts as leverage against the temptation to defect.

But this exposes a limitation in pure Prisoner's Dilemma games: in the single-play version, defecting delivers either a big pay-off to you (if the other player cooperates) or a paltry pay-off to both of you (if you both defect). But social exchanges in real life, if they resemble the Prisoner's Dilemma at all, resemble a game played over and over. To see how this alters the outcomes, put yourself in the following kind of situation.

Suppose you and I are among a group of individuals playing Prisoner's Dilemma games over the course of a year, where payoffs are made in cash (and if you like, to up the stakes, imagine that this is your only source of income). Assume there are no restrictions on who plays whom or how many times a game is played. Before play begins, however, we have a week to interact with our fellow participants. What would you look for? What kind of person would strike you as an attractive counterpart? What kind of person would you avoid? Would you try to make explicit arrangements? Suppose that you and I decided to play our first round together. We both promise to cooperate. But when play begins, I break my promise: you cooperate, but I defect. I receive a nice chunk of change and you receive nothing. How would you feel? What would be your first response? Well, you might begin by slinging a few choice words my way. But how would you play the next round? One option would be to play me again. But why? Surely you would be doing it out of spite; you'd be looking to give me a taste of my own medicine. And since you know that I'm not stupid, you know that I would expect you to defect. So you could expect me to defect in anticipation. This is beginning to look like a loser's bargain.
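
To make the arithmetic concrete, here is a minimal sketch in Python of the situation just described. The dollar values are illustrative assumptions, not figures from the text; only the structure matters – exploiting a cooperator pays best, and mutual defection beats being exploited.

    # One-shot Prisoner's Dilemma pay-offs: (my pay-off, your pay-off).
    # The dollar amounts are assumptions chosen for illustration.
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),  # mutual cooperation pays decently
        ("defect",    "cooperate"): (5, 0),  # I exploit you: my windfall, your nothing
        ("cooperate", "defect"):    (0, 5),  # you exploit me
        ("defect",    "defect"):    (1, 1),  # mutual defection: the paltry pay-off
    }

    # A grudge rematch in which each of us expects the other to defect
    # lands us both on the paltry mutual-defection pay-off: $1 apiece.
    print(PAYOFFS[("defect", "defect")])  # (1, 1)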

Instead of throwing good money after bad, the smartest thing to do after I go back on my promise is to dump me, move on, find someone new. But why stop there? Since it costs you next to nothing, you probably shouldn't hesitate to point out – to anyone who'll listen – that I'm not to be trusted. “He double-crossed me,” you would say with a sneer. And it wouldn't be long before this information got around. Now this might sound like idle gossip, but remember: with so little information to go on, participants have every reason to use that gossip in deciding how to act. What people say matters because it affects what people do.
It's difficult to overstate the critical role that punishment plays in a social group. It doesn't take much to trigger the drive to punish. The following experiments highlight when people punish, and some surprising benefits of doing so.

4.1 The Ultimatum Game

Recent psychological studies reveal how powerful the retributive urge is. Imagine being invited to play what psychologists call the Ultimatum Game. You are given twenty one-dollar bills. You are told that you may divide those twenty dollars any way you like with a stranger in another room, someone you'll never meet – but who is aware of the amount of money you have to divide. You can offer him whatever you like – $1, $5, $7, $13, whatever. But once you make your offer, the stranger has this choice: he can accept the offer or he can refuse it. If he refuses the offer, no one gets any money. The game is over and you go home. So what would you offer? Think for a moment before reading on.

Here's what I bet (and the data suggest) you'll do. If you believe that the stranger in the other room is purely rational – that is, seeks his own economic advantage above all else – you will offer him only $1. Why? Because a purely rational actor, driven solely by his desire to maximize profits, will prefer $1 over nothing, since nothing is what he'll receive if he rejects the offer. But this is not the offer I bet you would make. If you're like most people, your offer would come in at around $7. But isn't this irrational on your part? Why are you offering a perfect stranger money that could be yours? The answer is simple: you believe (correctly) that others are driven by more than immediate economic gain: people are also driven by a sense of fairness. And this sense of fairness can drive people to punish others – even if it costs them personally. The reason you probably would not make a $3 offer is that you would expect the stranger to reject it. You know implicitly that he would rather give up $3 to show his disapproval, his righteous indignation, than take the money and be treated unfairly. Study after study has shown just this: people reject most offers under $7. This sense of fairness is so powerful that people are willing to pay to punish people who treat other strangers unfairly.
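
A toy calculation shows why the lowball offer is self-defeating once responders reject unfair splits. A minimal sketch, assuming a hard $7 rejection threshold (a simplifying assumption loosely based on the finding that most offers under $7 are refused):

    # Toy Ultimatum Game: proposer splits $20; responder rejects "unfair" offers.
    # The hard $7 threshold is an assumption based on the studies described above.
    REJECTION_THRESHOLD = 7

    def proposer_payoff(offer, total=20):
        """Proposer's take-home pay, given the responder's fairness rule."""
        if offer < REJECTION_THRESHOLD:
            return 0          # responder rejects out of indignation: no one gets anything
        return total - offer  # responder accepts: proposer keeps the remainder

    print(proposer_payoff(1))  # 0  -- the "purely rational" $1 offer earns nothing
    print(proposer_payoff(7))  # 13 -- the fair-minded offer actually pays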

In a variation on the Ultimatum Game, a third-party “observer” is given $50. The observer is told that she will be observing a game between two strangers. In this game, one player, “the allocator,” has $100 that he can divide with another player, “the recipient,” any way he chooses. Unlike in the Ultimatum Game, however, the recipient has no choice but to accept what the allocator offers (economists call this the Dictator Game, for obvious reasons). So if the allocator gives the recipient one dollar, that's what the recipient receives. Here's the wrinkle, though. The observer has the option of stepping in before any money is allocated to the recipient. If the observer chooses, she can give up some of her own money to reduce the allocator's take-home pay: for every dollar the observer gives up, the allocator has to give up three. In effect, the observer has the option of fining the allocator – except the fine comes from her own pocket.

The results of the game are striking: the number of dollars the observer gives up (i.e. the fine) is directly proportional to the inequity. In other words, the more unequal the split, the higher the fine imposed on the allocator. In fact, observers give up money for just about any offer lower than $50. Against the assumption that people always seek their own best interests, these results are remarkable: here is someone giving up her own money to punish a complete stranger who has treated another complete stranger unfairly. All the observer has to do to walk away with $50 is to sit idly by while two strangers interact. But people can't sit idly by.
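
The fining mechanics reduce to a single transfer rule. A sketch, where only the 1-to-3 ratio comes from the text and the sample figures are mine:

    # Third-party punishment in the Dictator Game: for every $1 the observer
    # gives up, the allocator loses $3 (the 1:3 ratio described above).
    def after_punishment(allocator_keeps, observer_fine, observer_start=50):
        allocator_final = allocator_keeps - 3 * observer_fine
        observer_final = observer_start - observer_fine
        return allocator_final, observer_final

    # A greedy allocator keeps $99 of $100; the observer spends $10 to fine him $30.
    print(after_punishment(99, 10))  # (69, 40) -- both end up poorer, yet observers punish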

4.2 The Public Goods Game

Behavioral economists have derived similar results from what are called “public goods” experiments. For example, Ernst Fehr and Simon Gächter (2002) performed a set of “public goods” experiments that allowed for punishment. Here's how the experiment works: each member of a group of four receives 20 monetary units (let's just say, dollars) and is given the opportunity to “invest” all, some, or none of that money in a group project. Students are allowed to keep any money that is not invested. Notably, however, the group is guaranteed a 40 percent return on its total investment, and group earnings are always divided evenly among members (regardless of individual investment, if any). So if every student invested $10, they would, as a group, earn $16 to add to the $40 they invested – a total of $56. Each person would then walk away with $24: a $4 share of the earnings plus the returned $10 investment, added to the $10 they did not invest. If every student invested every dollar, each member would walk away with $28.

Here's the thing, however: investments are anonymous. Thus I don't know what (if anything) you're investing and you don't know what (if anything) I'm investing. Suppose I invest $5 but everyone else invests $20 each. The pot grows to $91, so each of us receives a $22.75 share; adding the $15 I kept back, I walk away with $37.75 – nearly ten dollars more than the $28 I would have made by investing everything. Hence, there's an incentive for each person to invest less than his neighbor (indeed, I can lose money outright when I invest much more than others). Of course, when no one invests his money, there's no chance to increase one's earnings.
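
Under the payoff rule as described – the pot grows by 40 percent and is split evenly, while uninvested money is simply kept – the arithmetic looks like this. A sketch of the rule as stated above, not necessarily the exact protocol of Fehr and Gächter's paper:

    # Public goods payoff as described above: the pot grows by 40% and is
    # split evenly among the four members; uninvested money is simply kept.
    def payoffs(investments, endowment=20, growth=1.4):
        pot = sum(investments) * growth
        share = pot / len(investments)
        return [endowment - inv + share for inv in investments]

    print(payoffs([10, 10, 10, 10]))  # [24.0, 24.0, 24.0, 24.0]
    print(payoffs([20, 20, 20, 20]))  # [28.0, 28.0, 28.0, 28.0]
    print(payoffs([5, 20, 20, 20]))   # [37.75, 22.75, 22.75, 22.75] -- the free-rider wins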

Now, Fehr and Gächter (2002) ran two series of experiments. In one series, subjects played six rounds of the game just as it is described above, where the group makeup changes after each round. Thus, no one ever interacts with the same person twice. In the second series, the game remains the same as above except subjects have an additional option: to punish other specific members after each round (though punishers remain anonymous). And punishment works like this: if you decide to punish player A – because, for example, you learn that A only invested $1 whereas everyone else invested $10 – you assign points to A. For every point assigned to A, $3 is deducted from A's earnings. At the same time, $1 is deducted from your earnings. Economists refer to this kind of punishment as altruistic punishment, since punishment in this case not only reduces your earnings, but also means that you can never recoup anything from A, since you never interact with A again. So what did the experimenters observe?
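
In code, the punishment stage is just a transfer at a 1-to-3 ratio. A sketch of the rule as described; the sample earnings figures are mine:

    # Altruistic punishment: each point assigned costs the punisher $1
    # and deducts $3 from the target's earnings (the rule described above).
    def punish(punisher_earnings, target_earnings, points):
        return punisher_earnings - points, target_earnings - 3 * points

    # You pay $3 to strip $9 from a free-rider you will never meet again.
    print(punish(24, 35, 3))  # (21, 26)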

Bluntly put, punishment pays – at least in this setting. In the final round of the no-punishment series, three-fourths of the subjects invested $5 or less. In the final round of the punishment series, more than three-fourths of the subjects invested $15 or more. Moreover, punishment and the threat of punishment promoted a trend: investments increased from round to round. In the no-punishment series, investments decreased from round to round.

To be sure, the threat of punishment was not empty. More than eight out of ten subjects punished at least once; 35 percent of subjects punished in at least five of the six rounds of the game. The punishment also followed a pattern, one that parallels the Ultimatum Game findings: Fehr and Gächter found that the further a player's investments fell below the average investment of other members, the more she was punished. So, for example, when a player's investment fell between $8 and $14 below the mean cooperation level of other group members, these members paid on average $4 to punish her. When her investment fell $14 to $20 below the mean, these members paid on average $9 to punish her.

Fehr and Gächter also hypothesized that the decision to punish was mediated, at least in part, by subjects' emotions. Punishment, they suspected, resulted not so much from calculation as from contempt. Subjects were asked to imagine a situation in which they, along with two other members, invested around $16 while a fourth subject invested only $2. How would they feel about this free-rider? Half the subjects reported feeling an anger intensity of 6 or 7 (out of 7); nearly 40 percent of subjects reported an anger intensity of 5. And, not surprisingly, the intensity of anger was directly correlated with the deviation from others' average investment: the more an individual's investment fell below the average, the more intense the anger directed at her.

Of equal significance were the expectations of anger. Subjects were asked to imagine that they were the free-rider: how would others feel if they accidentally met them? Three-fourths of the subjects predicted that others would feel an anger intensity of 6 or 7, and a fifth of the subjects expected an anger intensity of 5. As it turns out, these expectations exceeded reality: reported anger did not reach the levels people expected. This is significant, since it suggests that we err on the side of caution when it comes to others' anger. We are keenly aware, that is, of how others may perceive our behavior.

4.3 Winners Don't Punish

The experimental results on punishment, however, are more nuanced than my discussion has so far suggested. For example, a leading group of economists and biologists has shown that, as the title of their paper indicates, “winners don't punish” (Dreber et al. 2008). In a variation on the Prisoner's Dilemma, subjects had three choices instead of two: cooperate, defect, or punish. Whereas defection meant gaining $1 (say) at a cost of $1 to the other person, punishment meant paying $1 for the other person to lose $4. Subjects played repeated games with the same person, though they were unaware of how long games would continue. What Dreber et al. discovered was that “the five top-ranked players, who earned the highest total payoff, have never used costly punishment” (2008: 349). Winners, it turned out, tended to play a “tit-for-tat” strategy, like the one we discussed in the previous chapter: their response to defection was defection. Losers, on the other hand, responded to defection with costly punishment. To be clear, both winners and losers expressed their disapproval of defection. It's just that the winning strategy consisted of moderate punishment (i.e., defection) instead of costly punishment.
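
The difference between the two responses to a defection can be put in a few lines. A minimal sketch, where only the stakes – defection gains $1 at a $1 cost to the other, punishment pays $1 to impose a $4 loss – come from the text:

    # Two ways to answer a defection, from the responder's point of view.
    def respond_with_defection(me, you):
        # Tit-for-tat: I gain $1, you lose $1 -- moderate punishment.
        return me + 1, you - 1

    def respond_with_costly_punishment(me, you):
        # Costly punishment: I pay $1 so that you lose $4.
        return me - 1, you - 4

    print(respond_with_defection(10, 10))          # (11, 9) -- the winners' response
    print(respond_with_costly_punishment(10, 10))  # (9, 6)  -- the losers' response: both pay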
