An Introduction to Evolutionary Ethics
Scott M. James
Consider: Farmer A needs to harvest his field in order to have enough food for winter, but Farmer A cannot do it alone. Farmer A's neighbor Farmer B (no relation) has an interest in Farmer A's crops, but being a rather disagreeable fellow, Farmer B has no interest in helping Farmer A. Farmer B, however, has problems of his own. Farmer B needs to harvest his field in order to have enough food for winter, but can't do it alone. Farmer A has an interest in Farmer B's crops, but Farmer A, being a disagreeable fellow himself, has no interest in helping Farmer B.
Now it should be glaringly obvious what Farmer A and Farmer B ought to do: agree to help each other out!
If Farmer B agrees to help Farmer A harvest his crops this week, then Farmer A should agree to help Farmer B harvest his crops next week. Thus, at the end of two weeks, both have enough food for winter. It's not as if they have to be friends or even like each other. As they say, it's just business. But a business that delivers real payoffs. For together, Farmer A and Farmer B do substantially better than, for example, Farmer C and Farmer D, who cannot agree to help each other out. (In fact, in this case the failure to agree might cost Farmer C and Farmer D their lives!) The biologist Robert Trivers (1971) described this phenomenon as “reciprocal altruism.” Paralleling what Hamilton had done for inclusive fitness, Trivers argued that reciprocal altruism would evolve across the biological realm provided that certain conditions were met. First and foremost, the cost of providing a benefit to a non-relative now must be reliably outweighed by the reciprocation of some future benefit. (The other conditions largely concern an organism's ability to keep track of the relevant facts, for example who gave what to whom and when. Alas, this is going to exclude a rather wide swath of the biological population.) According to field biologists, where these conditions have been met, we observe instances of reciprocal altruism.
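Trivers' first condition can be put in simple expected-value terms: helping pays only when the likely future return exceeds the cost paid now. The sketch below is a minimal illustration; the numerical values for cost, benefit, and probability of reciprocation are hypothetical, chosen purely for illustration.

```python
# Illustrative sketch of Trivers' first condition: the cost of helping a
# non-relative now must be reliably outweighed by a reciprocated future
# benefit. All numbers below are hypothetical.

def helping_pays(cost_now: float, future_benefit: float,
                 p_reciprocation: float) -> bool:
    """True if the expected future return exceeds the immediate cost."""
    return p_reciprocation * future_benefit > cost_now

# A reliable partner: small cost now, a larger benefit likely returned later.
print(helping_pays(cost_now=1.0, future_benefit=3.0, p_reciprocation=0.8))  # True
# An unreliable partner: same benefit on offer, but reciprocation is unlikely.
print(helping_pays(cost_now=1.0, future_benefit=3.0, p_reciprocation=0.2))  # False
```

This is why the "bookkeeping" conditions matter: an organism that cannot track who reciprocates cannot tell the first case from the second.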
For example, the pattern of food-sharing in vampire bats indicates that while the majority of food-sharing occurs between mother and pup (roughly 70 percent), a substantial percentage of food-sharing (some 30 percent) occurs between non-relatives. Evidently, inclusive fitness is not the only force at work here. Closer study reveals that food-sharing among non-relatives is a direct function of past associations (Wilkinson 1990). The more likely it is that a given vampire bat (let's call her X) has shared food with another bat (let's call her Y) in the past, the more likely Y will be to assist X in the future. The vampire bats have, in effect, a buddy system. And preserving this buddy system is a matter of life or death: two nights without food is pretty much fatal for vampire bats.
Perhaps the most vivid example of reciprocal altruism in non-human animals is the grooming behavior observed in primates and monkeys. For us, the maxim “You scratch my back, and I'll scratch yours” is a figure of speech; for some primates and monkeys it's a serious request. A vervet monkey, for example, has to deal constantly with external parasites, some of which can cost him his life. But he can't reach all the parts of his body that might be vulnerable (ever tried putting sunscreen on the middle of your back?). So he needs a groomer, another monkey who will spend thirty minutes or so carefully picking out parasites from his head and back. Thirty minutes might not seem like a lot of time, but it's time that could be spent hunting or foraging, attracting potential mates or caring for young – in other words, advancing his own reproductive fitness. If grooming behavior occurred strictly within the family, then one need only appeal to the processes of inclusive fitness. But biologists routinely observe monkeys grooming non-relatives. Why? As in the case of vampire bats, something else is going on here. What's going on, according to biologists, is reciprocal altruism. In study after study (most recently, Schino 2007), primatologists observe that whether one monkey (P) grooms another monkey (Q) now is directly related to whether Q has groomed P in the past. Moreover, the length of time spent grooming is proportional to the time spent in past exchanges.
In another study, anthropologist Craig Packer (1977) showed that whether or not a vervet monkey (R) was disposed to assist an unrelated monkey (S) calling out for help is directly related to whether S had groomed R in the recent past. If S had recently groomed R, R was far more likely to look around and move in the direction of S's calls of distress. By contrast, if S had not recently groomed R, R simply ignored the calls. (Interestingly, this discrepancy does not appear among kin; there, calls for help are responded to whether or not grooming has taken place.) So it appears that vervet monkeys are “keeping score.” And for good reason: doing favors for one's neighbors pays. Equally important are the costs. With the exception of a few dominant individuals at the top of the social hierarchy, vervet monkeys that do not return the “grooming favor” significantly increase their chances of contracting a disease.
As a general rule, then, mutual cooperation is better for everyone involved than mutual defection. Mutual cooperation, we'll say, consists of individuals benefiting others in return for some future benefit. Mutual defection consists of individuals refusing to benefit others in return for some future benefit. If Farmer A and Farmer B aren't willing to assist each other, then Farmer A and Farmer B face desperate futures. Clearly, mutual cooperation is a far better alternative. But this is not the end of the matter.
Although mutual cooperation yields higher returns for everyone than mutual defection, any individual stands to gain even more under a different arrangement: she defects while others cooperate. This is the fabled “free-rider.” If Farmer B helps Farmer A harvest the latter's crops but Farmer A does not return the favor, then Farmer A has received a sizable benefit without having to pay the cost (of returning the favor). If, that is, this were a one-time affair (because, let's say, Farmer A immediately packed up his harvest and moved to the other side of the continent), then we'd have to say that Farmer A did better under this arrangement than under mutual cooperation. As far as Farmer B is concerned, this arrangement is even worse than mutual defection, since he made a sizable sacrifice on Farmer A's behalf but received nothing in return. We can thus add two more general rules to our list. First, the best arrangement for any individual is one in which she defects (i.e., receives help, but doesn't help others) while others cooperate. Second, the worst arrangement for any individual is one in which she cooperates while others defect.
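The ranking of outcomes just described can be summarized with hypothetical payoff numbers. The specific values below are assumptions for illustration only; what the text fixes is their ordering.

```python
# The four outcomes of a two-party exchange, with hypothetical payoffs
# that respect the ranking in the text: free-riding beats mutual
# cooperation, which beats mutual defection, which beats being exploited.

TEMPTATION = 5   # best for the individual: defect while the other cooperates
REWARD = 3       # mutual cooperation: both benefit
PUNISHMENT = 1   # mutual defection: neither benefits
SUCKER = 0       # worst for the individual: cooperate while the other defects

# The two "general rules": free-riding is best, being exploited is worst.
assert TEMPTATION > REWARD > PUNISHMENT > SUCKER
```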
Perhaps the best way to appreciate the intricacies of reciprocal exchanges is by considering the game “Prisoner's Dilemma,” first developed in the 1950s by Merrill Flood and Melvin Dresher of the RAND Corporation. The game itself can be played with money, M&Ms, mating partners, whatever – so long as there is some benefit each participant desires. In the original example, Jack and Jill are arrested (for looting a store, let's say) and placed in separate holding cells. Although the two jointly participated in the looting, they do not know each other. The police make the following offer to Jack:
If you identify Jill as the perpetrator of the crime and Jill refuses to talk, I will release you right now and, with your eye-witness testimony, charge Jill with the maximum penalty (ten years behind bars). If you refuse to talk and Jill identifies you as the perpetrator, I will charge you with the maximum penalty and I'll release Jill right now. If you identify Jill as the perpetrator and Jill identifies you as the perpetrator, I'll see to it that each of you receives five years behind bars. If both of you refuse to talk, I can only charge each of you with the minimum penalty (two years behind bars). You think about what you want to do while I go down the hall and make the same offer to Jill.
Figure 2.1 illustrates the various “payoffs” for Jack and Jill.

Figure 2.1 A Prisoner's Dilemma payoff schedule for Jack and Jill

                           Jill stays silent       Jill identifies Jack
Jack stays silent          Jack: 2, Jill: 2        Jack: 10, Jill: 0
Jack identifies Jill       Jack: 0, Jill: 10       Jack: 5, Jill: 5

(Payoffs are years in jail; each prisoner wants the lowest possible number.)
If we assume that Jack wants to avoid as much jail time as possible and Jill wants to avoid as much jail time as possible, what should Jack do? Well, let's think about this. If (unbeknownst to Jack) Jill decides to STAY SILENT, then Jack would do better to IDENTIFY JILL, since going free beats two years in jail. If Jill IDENTIFIES JACK, then – again – Jack would do better to IDENTIFY JILL, since five years in jail beats ten years in jail. In other words, whatever Jill decides to do, Jack does better DEFECTING. According to game theorists, DEFECTING is said to “strictly dominate” under these conditions; that is, under all conditions, DEFECTING maximizes an individual's interests. So what makes the Prisoner's Dilemma a dilemma? This comes into focus when we turn our attention to Jill.
We're assuming that Jill is just like Jack in that she wants to avoid as much jail time as possible. And, by hypothesis, Jill is offered the same deal as Jack. If Jill goes through the same deliberative processes as Jack, then she, too, will recognize that DEFECTING strictly dominates as a strategy: whatever Jack does, she does better DEFECTING. But if Jill acts on this strategy and Jack acts on this strategy, then both end up worse than if they had both STAYED SILENT. For surely Jack and Jill would each rank two years in jail ahead of five years in jail. The dilemma that the Prisoner's Dilemma so elegantly raises is this: rational calculation recommends DEFECTING, but when everyone calculates in this way, when everyone DEFECTS, everyone does worse than he or she could have done. When everyone goes for the top, everyone ends up near the bottom.
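The reasoning above can be checked mechanically. The sketch below encodes the jail terms from the story and confirms both halves of the dilemma: defecting leaves Jack better off whatever Jill does, yet joint defection leaves each of them worse off than joint silence.

```python
# Jail terms from the story, as (Jack's years, Jill's years) for each
# pair of choices. "S" = stay silent, "D" = identify the other (defect).
payoffs = {
    ("S", "S"): (2, 2),    # both stay silent: minimum penalty
    ("S", "D"): (10, 0),   # Jack silent, Jill talks: Jack gets the maximum
    ("D", "S"): (0, 10),   # Jack talks, Jill silent: Jack goes free
    ("D", "D"): (5, 5),    # both talk: five years each
}

# Whatever Jill does, Jack serves less time by defecting (strict dominance):
for jill in ("S", "D"):
    jack_silent = payoffs[("S", jill)][0]
    jack_defect = payoffs[("D", jill)][0]
    print(f"Jill plays {jill}: Jack serves {jack_silent} silent, "
          f"{jack_defect} defecting")

# Yet when both follow the dominant strategy, each serves five years
# instead of the two years joint silence would have cost them.
print(payoffs[("D", "D")], "vs", payoffs[("S", "S")])
```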
Putting the point more generally, we can see that from the perspective of any thoughtful individual, defection will always be the most tempting option. By defecting, you at least have the chance of exploiting your neighbors' help; by cooperating, you give up that chance. Moreover, by defecting, you protect yourself from being exploited by others (I mean, who can you trust?). Cooperation, by contrast, almost always comes with the risk of giving without getting in return. And in an unforgiving environment, where resources are scarce and time is limited, giving without getting in return can exact a heavy price. But this way of thinking, when adopted by all, drives everyone down: a group of strictly rational individuals who all appreciate the payoffs of defecting and act accordingly are going to be considerably worse off than a group of individuals who are, by some means, committed to cooperating. In other words, such social environments appear open to invasion by individuals capable of engaging in ongoing cooperative exchanges.
Trivers' hypothesis was that natural selection seized on mutations disposing individuals to cooperate, if only occasionally. If we assume that, in a particular environment, the cost–benefit ratios are relatively stable and opportunities for cooperation are recurrent, the adaptive pressure is there for a kind of reciprocal altruism to evolve. A genetic mutation that disposes an organism to enter into cooperative exchanges with others will evolve if such exchanges can regularly be preserved. But this is much easier said than (biologically) done. As biologists are quick to point out, the Prisoner's Dilemma (despite its elegance – or perhaps because of it) can distract us from all the intricacies and complexities of real-world exchanges, in both the human and the non-human realms. Perhaps the most apparent point is the fact that single exchanges between strangers with little chance of future interaction are surely the exception and not the rule. Even among nomadic animals, in-group interactions will be frequent and participants familiar. This puts new constraints on how a Prisoner's Dilemma-type game is played. Furthermore, it potentially changes the payoffs for each player. For example, in iterated games there may be future costs associated with defecting when another cooperates that do not arise in single exchanges. (Think of the difference between a situation in which Farmer A “flees the scene” of defection, as in our original example, and a situation in which Farmer A defects but remains in proximity to Farmer B. The latter situation is, you might say, combustible.) In the next chapter we'll explore these details more fully. More specifically, we'll look at the ways in which evolution may have engineered a solution to the problem of preserving cooperative exchanges – at least in humans. This will move us decidedly into the terrain of the moral.
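The difference iteration makes can be seen in a toy simulation. The payoff values below (5, 3, 1, 0) are the conventional ones from the game-theory literature rather than figures from the text, and “tit-for-tat” (cooperate first, then copy the partner's last move) is one standard way of modeling an ongoing cooperative exchange.

```python
# A minimal iterated Prisoner's Dilemma, assuming the conventional
# payoffs T=5, R=3, P=1, S=0. "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    """Total payoffs over repeated rounds; each strategy sees only the
    opponent's previous move (None on the first round)."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        last_a, last_b = move_a, move_b
    return score_a, score_b

always_defect = lambda last: "D"
tit_for_tat = lambda last: "C" if last is None else last

print(play(always_defect, always_defect))  # (10, 10): mutual defection
print(play(tit_for_tat, tit_for_tat))      # (30, 30): sustained cooperation
```

Over repeated encounters, a pair of committed reciprocators accumulates three times the payoff of a pair of rational defectors, which is the sense in which defector populations "appear open to invasion" by cooperators.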
2.7 Conclusion
My aim in this chapter has been to clarify and support the following idea: the theory of natural selection has the potential to explain at least some of the helping behavior we observe in the world. To the extent that human instances of such behavior amount to moral behavior, evolution can (to that extent) explain a piece of human morality. For example, you might insist that we have strict moral obligations to our family members; this may be evidenced by our strong emotional bond to their well-being. The theory of inclusive fitness, by redirecting our focus to the gene's-eye level, may provide an explanation for why we tend to think that we have these strong moral obligations to our family members: such thoughts, ignited by strong emotions, reliably disposed our ancestors to care for and protect relatives. And by caring for and protecting our relatives we were, in a sense, caring for and protecting copies of our genes. A strong moral commitment to one's family, after all, has high biological payoff.