[Figure omitted: graph adapted from Newman, 2005, p. 324.]
 
Which brings us back to wars. Since wars fall into a power-law distribution, some of the mathematical properties of these distributions may help us understand the nature of wars and the mechanisms that give rise to them. For starters, power-law distributions with the exponent we see for wars do not even have a finite mean. There is no such thing as a “typical war.” We should not expect, even on average, that a war will proceed until the casualties pile up to an expected level and then will naturally wind itself down.
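To make that concrete, here is the standard calculation (a sketch over a generic continuous power law; the only assumption is an exponent of 2 or less, which is the regime the text implies for wars):

```latex
% Power-law density over sizes x >= x_min, with exponent alpha:
%   p(x) = C x^{-alpha}
\[
\mathbb{E}[X] \;=\; \int_{x_{\min}}^{\infty} x \cdot C x^{-\alpha}\, dx
\;=\; C \int_{x_{\min}}^{\infty} x^{\,1-\alpha}\, dx ,
\]
% which converges only when alpha > 2. For alpha <= 2 the integral
% diverges, so the distribution has no finite mean -- no "typical war."
```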
Also, power-law distributions are *scale-free*. As you slide up or down the line in the log-log graph, it always looks the same, namely, like a line. The mathematical implication is that as you magnify or shrink the units you are looking at, the distribution looks the same. Suppose that computer files of 2 kilobytes are a quarter as common as files of 1 kilobyte. Then if we stand back and look at files in higher ranges, we find the same thing: files of 2 megabytes are a quarter as common as files of 1 megabyte, and files of 2 terabytes are a quarter as common as files of 1 terabyte. In the case of wars, you can think of it this way. What are the odds of going from a small war, say, with 1,000 deaths, to a medium-size war, with 10,000 deaths? It’s the same as the odds of going from a medium-size war of 10,000 deaths to a large war of 100,000 deaths, or from a large war of 100,000 deaths to a historically large war of 1 million deaths, or from a historic war to a world war.
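In symbols, scale-freedom means that relative frequencies depend only on the ratio of two sizes, never on the sizes themselves (a sketch using the same generic power law as above; the alpha = 2 at the end is merely what the file example implies, not the exponent of wars):

```latex
\[
\frac{p(kx)}{p(x)} \;=\; \frac{C\,(kx)^{-\alpha}}{C\,x^{-\alpha}} \;=\; k^{-\alpha},
\qquad \text{independent of } x .
\]
% In the file example, doubling the size always quarters the frequency:
%   2^{-alpha} = 1/4, hence alpha = 2 for that hypothetical distribution.
```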
Finally, power-law distributions have “thick tails,” meaning that they have a nonnegligible number of extreme values. You will never meet a 20-foot man, or see a car driving down the freeway at 500 miles per hour. But you could conceivably come across a city of 14 million, or a book that was on the bestseller list for 10 years, or a moon crater big enough to see from the earth with the naked eye—or a war that killed 55 million people.
The thick tail of a power-law distribution, which declines gradually rather than precipitously as you rocket up the magnitude scale, means that extreme values are *extremely unlikely* but not *astronomically unlikely*. It’s an important difference. The chances of meeting a 20-foot-tall man are astronomically unlikely; you can bet your life it will never happen. But the chance that a city will grow to 20 million, or that a book will stay on the bestseller list for 20 years, is merely extremely unlikely—it probably won’t happen, but you could well imagine it happening. I hardly need to point out the implications for war. It is extremely unlikely that the world will see a war that kills 100 million people, and less likely still that it will see one that kills a billion. But in an age of nuclear weapons, our terrified imaginations and the mathematics of power-law distributions agree: it is not astronomically unlikely.
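The contrast can be put in numbers with a quick sketch. The parameters below (a smallest event of 1,000 deaths, a power-law exponent of 1.5, and a comparison exponential distribution) are illustrative choices, not values fitted to real war data:

```python
# Compare tail probabilities of a thick-tailed power law with those of a
# thin-tailed exponential: "extremely" versus "astronomically" unlikely.
import math

x_min = 1_000        # smallest event counted, in deaths (hypothetical)
alpha = 1.5          # power-law exponent (illustrative)
lam   = 1 / 25_000   # rate of the comparison exponential (illustrative)

def powerlaw_tail(x):
    # P(X > x) for a Pareto with density ~ x**(-alpha): tail ~ x**(1 - alpha)
    return (x_min / x) ** (alpha - 1)

def exponential_tail(x):
    # P(X > x) for an exponential distribution with rate lam
    return math.exp(-lam * x)

for deaths in (10**5, 10**6, 10**7, 10**8):
    print(f"{deaths:>12,} deaths: "
          f"power law {powerlaw_tail(deaths):.3g}, "
          f"exponential {exponential_tail(deaths):.3g}")
```

The power-law column shrinks gradually as the death toll grows a thousandfold; the exponential column collapses to numbers so small they underflow to zero.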
So far I’ve been discussing the causes of war as Platonic abstractions, as if armies were sent into war by equations. What we really need to understand is *why* wars distribute themselves as power laws; that is, what combination of psychology and politics and technology could generate this pattern. At present we can’t be sure of the answer. Too many kinds of mechanisms can give rise to power-law distributions, and the data on wars are not precise enough to tell us which is at work.
Still, the scale-free nature of the distribution of deadly quarrels gives us an insight about the drivers of war.60 Intuitively, it suggests that *size doesn’t matter*. The same psychological or game-theoretic dynamics that govern whether quarreling coalitions will threaten, back down, bluff, engage, escalate, fight on, or surrender apply whether the coalitions are street gangs, militias, or armies of great powers. Presumably this is because humans are social animals who aggregate into coalitions, which amalgamate into larger coalitions, and so on. Yet at any scale these coalitions may be sent into battle by a single clique or individual, be it a gang leader, capo, warlord, king, or emperor.
How can the intuition that size doesn’t matter be implemented in models of armed conflict that actually generate power-law distributions?61 The simplest is to assume that the coalitions themselves are power-law-distributed in size, that they fight each other in proportion to their numbers, and that they suffer losses in proportion to their sizes. We know that some human aggregations, namely municipalities, are power-law-distributed, and we know the reason. One of the commonest generators of a power-law distribution is preferential attachment: the bigger something is, the more new members it attracts. Preferential attachment is also known as accumulated advantage, the-rich-get-richer, and the Matthew Effect, after the passage in Matthew 25:29 that Billie Holiday summarized as “Them that’s got shall get, them that’s not shall lose.” Web sites that are popular attract more visitors, making them even more popular; bestselling books are put on bestseller lists, which lure more people into buying them; and cities with lots of people offer more professional and cultural opportunities, so more people flock to them. (How are you going to keep them down on the farm after they’ve seen Paree?)
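A minimal simulation shows how preferential attachment alone produces a heavy-tailed size distribution (a sketch of the classic rich-get-richer process; the population size and founding probability are arbitrary choices):

```python
# Preferential attachment: each arrival founds a new city with small
# probability, otherwise joins an existing city with probability
# proportional to its current size. City sizes come out heavy-tailed.
import random

random.seed(1)
P_NEW = 0.05       # chance an arrival founds a new city (arbitrary)
cities = [1]       # city sizes; start with one city of one person
residents = [0]    # one entry (a city index) per person, so that
                   # random.choice() picks cities in proportion to size

for _ in range(200_000):
    if random.random() < P_NEW:
        cities.append(1)                 # found a new city of one person
        residents.append(len(cities) - 1)
    else:
        i = random.choice(residents)     # bigger cities get picked more often
        cities[i] += 1
        residents.append(i)

cities.sort(reverse=True)
print("ten largest cities:", cities[:10])
print("total cities:", len(cities))
```

A handful of giant cities towers over a long tail of hamlets, which is the qualitative signature of a power law.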
Richardson considered this simple explanation but found that the numbers didn’t add up.62 If deadly quarrels reflected city sizes, then for every tenfold reduction in the size of a quarrel, there should be ten times as many of them, but in fact there are fewer than four times as many. Also, in recent centuries wars have been fought by states, not cities, and states follow a log-normal distribution (a warped bell curve) rather than a power law.
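As a back-of-envelope version of the mismatch (my arithmetic on the figures in the text, not Richardson's own calculation): if the count of quarrels grows by a factor b for every tenfold drop in size, then

```latex
\[
\frac{N(s/10)}{N(s)} \;=\; b
\quad\Longrightarrow\quad
N(s) \;\propto\; s^{-\log_{10} b} .
\]
% Zipf-like city sizes would predict b = 10, an exponent of 1; the
% observed b < 4 implies an exponent below log10(4), about 0.6.
```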
Another kind of mechanism has been suggested by the science of complex systems, which looks for laws that govern structures that are organized into similar patterns despite being made of different stuff. Many complexity theorists are intrigued by systems that display a pattern called self-organized criticality. You can think of “criticality” as the straw that broke the camel’s back: a small input causes a sudden large output. “Self-organized” criticality would be a camel whose back healed right back to the exact strength at which straws of various sizes could break it again. A good example is a trickle of sand falling onto a sandpile, which periodically causes landslides of different sizes; the landslides are distributed according to a power law. An avalanche of sand stops at a point where the slope is just shallow enough to be stable, but the new sand trickling onto it steepens the slope and sets off a new avalanche. Earthquakes and forest fires are other examples. A fire burns a forest, which allows trees to grow back at random, forming clusters that can grow into each other and fuel another fire. Several political scientists have developed computer simulations that model wars on an analogy to forest fires.63 In these models, countries conquer their neighbors and create larger countries in the same way that patches of trees grow into each other and create larger patches. Just as a cigarette tossed in a forest can set off either a brushfire or a conflagration, a destabilizing event in the simulation of states can set off either a skirmish or a world war.
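A toy forest-fire model conveys the flavor of those simulations (a sketch in the spirit of the standard model from the complexity literature, not any researcher's actual code; grid size and probabilities are arbitrary):

```python
# Trees sprout at random; occasional lightning burns the entire connected
# cluster it strikes. Fire ("war") sizes come out heavy-tailed: most are
# small, a few consume a large fraction of the grid.
import random

random.seed(0)
N = 100                                   # the forest is an N x N grid
P_TREE, P_LIGHTNING = 0.05, 0.0002        # growth and lightning rates
grid = [[False] * N for _ in range(N)]
fire_sizes = []

def burn(r, c):
    """Burn the connected cluster containing (r, c); return its size."""
    stack, size = [(r, c)], 0
    while stack:
        i, j = stack.pop()
        if 0 <= i < N and 0 <= j < N and grid[i][j]:
            grid[i][j] = False
            size += 1
            stack.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
    return size

for _ in range(2_000_000):
    r, c = random.randrange(N), random.randrange(N)
    if grid[r][c]:
        if random.random() < P_LIGHTNING:     # lightning strikes a tree
            fire_sizes.append(burn(r, c))
    elif random.random() < P_TREE:            # an empty site sprouts a tree
        grid[r][c] = True

fire_sizes.sort(reverse=True)
print("number of fires:", len(fire_sizes))
print("largest fires:", fire_sizes[:10])
```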
In these simulations, the destructiveness of a war depends mainly on the territorial size of the combatants and their alliances. But in the real world, variations in destructiveness also depend on the resolve of the two parties to keep a war going, with each hoping that the other will collapse first. Some of the bloodiest conflicts in modern history, such as the American Civil War, World War I, the Vietnam War, and the Iran-Iraq War, were wars of attrition, where both sides kept shoveling men and matériel into the maw of the war machine hoping that the other side would exhaust itself first.
John Maynard Smith, the biologist who first applied game theory to evolution, modeled this kind of standoff as a War of Attrition game.64 Each of two contestants competes for a valuable resource by trying to outlast the other, steadily accumulating costs as he waits. In the original scenario, they might be heavily armored animals competing for a territory, staring at each other until one of them leaves; the costs are the time and energy the animals waste in the standoff, which they could otherwise use in catching food or pursuing mates. A game of attrition is mathematically equivalent to an auction in which the highest bidder wins the prize and *both* sides have to pay the loser’s low bid. And of course it can be analogized to a war in which the expenditure is reckoned in the lives of soldiers.
The War of Attrition is one of those paradoxical scenarios in game theory (like the Prisoner’s Dilemma, the Tragedy of the Commons, and the Dollar Auction) in which a set of rational actors pursuing their interests end up worse off than if they had put their heads together and come to a collective and binding agreement. One might think that in an attrition game each side should do what bidders on eBay are advised to do: decide how much the contested resource is worth and bid only up to that limit. The problem is that this strategy can be gamed by another bidder. All he has to do is bid one more dollar (or wait just a bit longer, or commit another surge of soldiers), and he wins. He gets the prize for close to the amount you think it is worth, while you have to forfeit that amount too, without getting anything in return. You would be crazy to let that happen, so you are tempted to use the strategy “Always outbid him by a dollar,” which he is tempted to adopt as well. You can see where this leads. Thanks to the perverse logic of an attrition game, in which the loser pays too, the bidders may keep bidding after the point at which the expenditure exceeds the value of the prize. They can no longer win, but each side hopes not to lose as much. The technical term for this outcome in game theory is “a ruinous situation.” It is also called a “Pyrrhic victory”; the military analogy is profound.
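A toy run of the escalation trap makes the arithmetic vivid (a dollar-auction-style sketch; the prize and the two private limits are made-up numbers):

```python
# Two bidders follow "outbid the other by a dollar," each with a private
# stopping point past which they refuse to go. Because both limits exceed
# the prize, the loser forfeits more than the prize is worth and the
# winner can end up in the red as well.
PRIZE = 100
stop_a, stop_b = 130, 120    # private limits, both above the prize (hypothetical)

bid_a = bid_b = 0
while True:
    if bid_b + 1 <= stop_a:
        bid_a = bid_b + 1    # A outbids by a dollar
    else:
        break                # A drops out
    if bid_a + 1 <= stop_b:
        bid_b = bid_a + 1    # B outbids by a dollar
    else:
        break                # B drops out

winner_net = PRIZE - max(bid_a, bid_b)
loser_net = -min(bid_a, bid_b)
print(f"final bids: A = {bid_a}, B = {bid_b}")
print(f"winner nets {winner_net}, loser nets {loser_net}")
```

Here the winner pays 121 dollars for a 100-dollar prize, and the loser pays 120 dollars for nothing: the ruinous situation in miniature.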
One strategy that can evolve in a War of Attrition game (where the expenditure, recall, is in time) is for each player to wait a *random* amount of time, with an average wait time that is equivalent in value to what the resource is worth to them. In the long run, each player gets good value for his expenditure, but because the waiting times are random, neither is able to predict the surrender time of the other and reliably outlast him. In other words, they follow the rule: At every instant throw a pair of dice, and if they come up (say) 4, concede; if not, throw them again. This is, of course, like a Poisson process, and by now you know that it leads to an exponential distribution of wait times (since a longer and longer wait depends on a less and less probable run of tosses). Since the contest ends when the first side throws in the towel, the contest durations will also be exponentially distributed. Returning to our model where the expenditures are in soldiers rather than seconds, if real wars of attrition were like the “War of Attrition” modeled in game theory, and if all else were equal, then wars of attrition would fall into an exponential distribution of magnitudes.
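A quick simulation confirms the reasoning (a sketch in discrete ticks; the concession probability of 1/12 matches the chance of rolling a total of 4 with a pair of dice, as in the example above):

```python
# Each side concedes, on every tick, with a fixed small probability.
# The contest ends at the first concession, so its length is the minimum
# of two geometric waits, which is itself geometric: the discrete
# analogue of an exponential distribution.
import random

random.seed(2)
P_CONCEDE = 1 / 12    # per-tick chance a given side gives in (~ rolling a 4)

def contest_length():
    t = 0
    while True:
        t += 1
        if random.random() < P_CONCEDE or random.random() < P_CONCEDE:
            return t                     # first concession ends the contest

lengths = [contest_length() for _ in range(100_000)]

# Exponential decay means P(length > t) falls by a constant factor per tick.
for t in (5, 10, 20, 40):
    frac = sum(length > t for length in lengths) / len(lengths)
    print(f"P(length > {t:>2}) = {frac:.5f}")
```

Each doubling of the threshold squares the survival probability, which is the fingerprint of a memoryless, exponential distribution.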
Of course, real wars fall into a power-law distribution, which has a thicker tail than an exponential (in this case, a greater number of severe wars). But an exponential can be transformed into a power law if the values are modulated by a second exponential process pushing in the opposite direction. And attrition games have a twist that might do just that. If one side in an attrition game were to leak its intention to concede in the next instant by, say, twitching or blanching or showing some other sign of nervousness, its opponent could capitalize on the “tell” by waiting just a bit longer, and it would win the prize every time. As Richard Dawkins has put it, in a species that often takes part in wars of attrition, one expects the evolution of a poker face.
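One standard way to see how that transformation works (a generic derivation with made-up symbols, not a claim about which mechanism operates in real wars): suppose a war's toll grows exponentially at rate g for as long as it lasts, while its duration T is exponentially distributed with rate mu. Then

```latex
\[
T \sim \mathrm{Exponential}(\mu), \qquad X = x_0\, e^{gT},
\]
\[
P(X > x) \;=\; P\!\left(T > \tfrac{1}{g}\ln\tfrac{x}{x_0}\right)
\;=\; e^{-(\mu/g)\ln(x/x_0)}
\;=\; \left(\tfrac{x}{x_0}\right)^{-\mu/g} ,
\]
% a power-law tail with exponent mu/g. One exponential process (growth)
% pushing against another (stopping) yields the thick tail.
```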
Now, one also might have guessed that organisms would capitalize on the opposite kind of signal, a sign of continuing resolve rather than impending surrender. If a contestant could adopt some defiant posture that means “I’ll stand my ground; I won’t back down,” that would make it rational for his opposite number to give up and cut its losses rather than escalate to mutual ruin. But there’s a reason we call it “posturing.” Any coward can cross his arms and glower, but the other side can simply call his bluff. Only if a signal is *costly*—if the defiant party holds his hand over a candle, or cuts his arm with a knife—can he show that he means business. (Of course, paying a self-imposed cost would be worthwhile only if the prize is especially valuable to him, or if he had reason to believe that he could prevail over his opponent if the contest escalated.)
In the case of a war of attrition, one can imagine a leader who has a *changing* willingness to suffer a cost over time, increasing as the conflict proceeds and his resolve toughens. His motto would be: “We fight on so that our boys shall not have died in vain.” This mindset, known as loss aversion, the sunk-cost fallacy, and throwing good money after bad, is patently irrational, but it is surprisingly pervasive in human decision-making.65 People stay in an abusive marriage because of the years they have already put into it, or sit through a bad movie because they have already paid for the ticket, or try to reverse a gambling loss by doubling their next bet, or pour money into a boondoggle because they’ve already poured so much money into it. Though psychologists don’t fully understand why people are suckers for sunk costs, a common explanation is that it signals a public commitment. The person is announcing: “When I make a decision, I’m not so weak, stupid, or indecisive that I can be easily talked out of it.” In a contest of resolve like an attrition game, loss aversion could serve as a costly and hence credible signal that the contestant is not about to concede, preempting his opponent’s strategy of outlasting him just one more round.
