In real life, however, we have only one world—the one that we are living in—thus it’s impossible to make the sort of “between world” comparisons that the models say we should. It may not surprise you, therefore, that when someone uses the output of a simulation model to argue that Harry Potter may not be as special as everyone thinks it is, Harry Potter fans tend not to be persuaded. Common sense tells us that Harry Potter must be special—even if the half dozen or so children’s book publishers who passed on the original manuscript didn’t know it at the time—because more than 350 million people bought it. And because any model necessarily makes all manner of simplifying assumptions, whenever we have to choose between questioning common sense and questioning a model, our tendency is to do the latter.
For exactly this reason, several years ago my collaborators Matthew Salganik and Peter Dodds and I decided to try a different approach. Instead of using computer models, we would run a controlled, laboratory-style experiment in which real people made more or less the same kinds of choices that they make in the real world—in this case, between a selection of songs. By randomly assigning different people to different experimental conditions, we would effectively create the “many worlds” situation imagined in the computer models. In some conditions, people would be exposed to information about what other people were doing, but it would be up to them to decide whether or not to be influenced by the information and how. In other conditions, meanwhile, participants would be faced with exactly the same set of choices, but without any information about other participants’ decisions; thus they would be forced to behave independently. By comparing the outcomes in the “social influence” conditions with those in the “independent” condition, we would be able to observe the effects of social influence on collective outcomes directly. In particular, by running many such worlds in parallel, we would be able to measure how much of a song’s success depended on its intrinsic attributes, and how much on cumulative advantage.
Unfortunately, running such experiments is easier said than done. In psychology experiments of the kind I discussed in the previous chapter, each “run” of the experiment involves at most a few individuals; thus conducting the entire experiment requires at most a few hundred subjects, typically undergraduate students who participate in exchange for money or course credit. The kind of experiment we had in mind, however, required us to observe how all these individual-level “nudges” added up to create differences at the collective level. In effect, we wanted to study the micro-macro problem in a lab. But to observe effects like these we would need to recruit hundreds of people for each run of the experiment, and we would need to repeat the experiment over many independent runs. Even for a single experiment, therefore, we would need thousands of subjects, and if we wanted to run multiple experiments under different conditions, we’d need tens of thousands.
In 1969, the sociologist Morris Zelditch described exactly this problem in a paper with the provocative title “Can You Really Study an Army in a Laboratory?” At the time, his conclusion was that you couldn’t—at least not literally. Therefore he advocated that sociologists should instead study how small groups worked, and rely on theory to generalize their findings to large groups. Macrosociology, in other words, like macroeconomics, couldn’t ever be an experimental discipline, simply because it would be impossible to run the relevant experiments. Coincidentally, however, the year 1969 also marked the genesis of the Internet, and in the years since, the world had changed in ways that would have been hard for Zelditch to imagine. With the social and economic activity of hundreds of millions of people migrating online, we wondered if it might be time to revisit Zelditch’s question. Perhaps, we thought, one could study an army in the laboratory—only this lab would be a virtual one.
So that’s what we did. With the help of our resident computer programmer, a young Hungarian named Peter Hausel, and some friends at Bolt media, an early social networking site for teenagers, we set up a Web-based experiment designed to emulate a “market” for music. Bolt agreed to advertise our experiment, called Music Lab, on their site, and over the course of several weeks about fourteen thousand of its members clicked through on the banner ads and agreed to participate. Once they got to our site they were asked to listen to, rate, and if they chose to, download songs by unknown bands. Some of the participants saw only the names of the songs while others also saw how many times the songs had been downloaded by previous participants. People in the latter “social influence” category were further split into eight parallel “worlds” such that they could only see the prior downloads of people in their own world. Thus if a new arrival were to be allocated (randomly) to World #1, she might see the song “She Said” by the band Parker Theory in first place. But if she were allocated instead to World #4, Parker Theory might be in tenth place and “Lockdown” by 52 Metro might be first instead.
We didn’t manipulate any of the rankings—all the worlds started out identically, with zero downloads. But because the different worlds were carefully kept separate, they could subsequently evolve independently of one another. This setup therefore enabled us to test the effects of social influence directly. If people know what they like regardless of what other people think, there ought not to be any difference between the social influence and independent conditions. In all cases, the same songs should win by roughly the same amount. But if people do not make decisions independently, and if cumulative advantage applies, the different worlds within the social influence condition should look very different from one another, and they should all look different from the independent condition.
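To make the logic of this design concrete, here is a minimal simulation sketch in Python. The functional forms, parameter values, and names (appeal, INFLUENCE, run_world, and so on) are illustrative assumptions, not the actual Music Lab code or data: each song gets a fixed “appeal,” and in the social-influence worlds a listener’s choice weights blend that appeal with the song’s running download count, which is one simple way to express cumulative advantage.

```python
import random
import statistics

N_SONGS = 48          # as in Music Lab
N_LISTENERS = 700     # hypothetical listeners per world
N_WORLDS = 8          # parallel social-influence worlds
INFLUENCE = 0.8       # assumed weight on prior downloads (0 = independent)

random.seed(0)
appeal = [random.random() for _ in range(N_SONGS)]  # latent "quality"

def run_world(influence):
    """Simulate one world; return final download counts per song."""
    downloads = [0] * N_SONGS
    for _ in range(N_LISTENERS):
        # Choice weight blends intrinsic appeal with cumulative downloads.
        weights = [
            (1 - influence) * appeal[s] + influence * (downloads[s] + 1)
            for s in range(N_SONGS)
        ]
        choice = random.choices(range(N_SONGS), weights=weights)[0]
        downloads[choice] += 1
    return downloads

def gini(counts):
    """Gini coefficient of the download distribution (inequality)."""
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

independent = run_world(influence=0.0)
social_worlds = [run_world(INFLUENCE) for _ in range(N_WORLDS)]

print("Gini, independent world: %.2f" % gini(independent))
print("Gini, social worlds:     %.2f" %
      statistics.mean(gini(w) for w in social_worlds))

def rank_of(song, downloads):
    """Finishing position of a song in one world (1 = most downloaded)."""
    order = sorted(range(N_SONGS), key=lambda s: -downloads[s])
    return order.index(song) + 1

best_song = max(range(N_SONGS), key=lambda s: appeal[s])
print("Rank of the highest-appeal song in each social world:",
      [rank_of(best_song, w) for w in social_worlds])
```

Under these assumptions the social-influence worlds should show both a more unequal download distribution and less agreement about which songs win, which is the signature that the experiment was designed to detect.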
What we found was that when people had information about what other people downloaded, they were indeed influenced by it in the way that cumulative advantage theory would predict. In all the “social influence” worlds, that is, popular songs were more popular (and unpopular songs were less popular) than in the independent condition. At the same time, however, which particular songs turned out to be the most popular—the “hits”—were different in different worlds. Introducing social influence into human decision making, in other words, increased not just inequality but unpredictability as well. Nor could this unpredictability be eliminated by accumulating more information about the songs any more than studying the surfaces of a pair of dice could help you predict the outcome of a roll. Rather, unpredictability was inherent to the dynamics of the market itself.
Social influence, it should be noted, didn’t eliminate quality altogether: It was still the case that, on average, “good” songs (as measured by their popularity in the independent condition) did better than “bad” ones. It was also true that the very best songs never did terribly, while the very worst songs never actually won. That said, even the best songs could fail to win sometimes, while the worst songs could do pretty well. And for everything in the middle—the majority of songs that were neither the best nor the worst—virtually any outcome was possible. The song “Lockdown” by 52 Metro, for example, ranked twenty-sixth out of forty-eight in quality; yet it was the no. 1 song in one social-influence world, and fortieth in another. The “average” performance of a particular song, in other words, is only meaningful if the variability that it exhibits from world to world is small. But it was precisely this random variability that turned out to be large. For example, by changing the format of the website from a randomly arranged grid of songs to a ranked list we found we could increase the effective strength of the social signal, thereby increasing both the inequality and unpredictability. In this “strong influence” experiment, the random fluctuations played a bigger role in determining a song’s ranking than even the largest differences in quality. Overall, a song in the Top 5 in terms of quality had only a 50 percent chance of finishing in the Top 5 of success.
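For readers who want to see how a figure like that could be tallied, here is a short sketch, continuing from the simulation above; it reuses the hypothetical independent and social_worlds variables and proxies “quality” by popularity in the independent world, as the text describes, but it is not the authors’ actual analysis.

```python
def top_k(scores, k=5):
    """Indices of the k highest-scoring songs."""
    return set(sorted(range(len(scores)), key=lambda s: -scores[s])[:k])

quality_top5 = top_k(independent)  # "quality" = independent-world popularity

# Count how often a Top-5-quality song also lands in a world's Top 5 by downloads.
hits = sum(len(quality_top5 & top_k(world)) for world in social_worlds)
share = hits / (len(social_worlds) * len(quality_top5))
print("Chance a Top-5-quality song finishes in a world's Top 5: %.2f" % share)
```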
Many observers interpreted our findings as a commentary on the arbitrariness of teenage music tastes or the vacuousness of contemporary pop music. But in principle the experiment could have been about any choice that people make in a social setting: whom we vote for, what we think about gay marriage, which phone we buy or social networking service we join, what clothes we wear to work, or how we deal with our credit card debt. In many cases designing these experiments is easier said than done, and that’s why we chose to study music. People like to listen to music and they’re used to downloading it from the Web, so by setting up what looked like a site for music downloads we could conduct an experiment that was not only cheap to run (we didn’t have to pay our subjects) but was also reasonably close to a “natural” environment. But in the end all that really mattered was that our subjects were making choices among competing options, and that their choices were being influenced by what they thought other people had chosen. Teenagers also were an expedient choice, because that’s mostly who was hanging around on social networking sites in 2004. But once again, there was nothing special about teenagers—as we showed in a subsequent version of the experiment for which we recruited mostly adult professionals. As you might expect, this population had different preferences than the teenagers, and so the average performance of the songs changed slightly. Nevertheless, they were just as influenced by one another’s behavior as the teenagers were, and so generated the same kind of inequality and unpredictability.
What the Music Lab experiment really showed, therefore, was remarkably similar to the basic insight from Granovetter’s riot model—that when individuals are influenced by what other people are doing, similar groups of people can end up behaving in very different ways. This may not sound like a big deal, but it fundamentally undermines the kind of commonsense explanations that we offer for why some things succeed and others fail, why social norms dictate that we do some things and not others, or even why we believe what we believe. Commonsense explanations sidestep the whole problem of how individual choices aggregate to collective behavior simply by replacing the collective with a representative individual. And because we think we know why individual people do what they do, as soon as we know what happened, we can always claim that it was what this fictitious individual—“the people,” “the market,” whatever—wanted.
By pulling apart the micro-macro problem, experiments like Music Lab expose the fallacy that arises from this form of circular reasoning. Just as you can know everything about the behavior of individual neurons and still be mystified by the emergence of consciousness in the human brain, so too you could know everything about individuals in a given population—their likes, dislikes, experiences, attitudes, beliefs, hopes, and dreams—and still not be able to predict much about their collective behavior. To explain the outcome of some social process in terms of the preferences of some fictitious representative individual therefore greatly exaggerates our ability to isolate cause and effect.
For example, if you’d asked the 500 million people who currently belong to Facebook back in 2004 whether or not they wanted to post profiles of themselves online and share updates with hundreds of friends and acquaintances about their everyday goings-on, many of them would have likely said no, and they probably would have meant it. The world, in other words, wasn’t sitting around waiting for someone to invent Facebook so that we could all join it. Rather, a few people joined it for whatever reasons and began to play around with it. Only then, because of what those people experienced through using the service as it existed back then—and even more so because of the experiences they created for one another in the course of using it—did other people begin to join. And then other people joined because those people joined, and so on, until here we are today. Yet now that Facebook is tremendously popular, it just seems obvious that it must have been what people wanted—otherwise, why would they be using it?
This is not to say that Facebook, the company, hasn’t made a lot of smart moves over the years, or doesn’t deserve to be as successful as it is. Rather, the point is just that the explanations we give for its success are less relevant than they seem. Facebook, that is, has a particular set of features, just as Harry Potter and the Mona Lisa have their own particular sets of features, and all of them have experienced their own particular outcomes. But it does not follow that those features caused the outcomes in any meaningful way. Ultimately, in fact, it may simply not be possible to say why the Mona Lisa is the most famous painting in the world, or why the Harry Potter books sold more than 350 million copies within ten years, or why Facebook has attracted more than 500 million users. In the end, the only honest explanation may be the one given by the publisher of Lynne Truss’s surprise bestseller, Eats, Shoots and Leaves, who, when asked to explain its success, replied that “it sold well because lots of people bought it.”
It may not surprise you to learn that many people do not particularly like this conclusion. Most of us would be prepared to admit that our decisions are influenced by what other people think—sometimes, anyway. But it’s one thing to acknowledge that once in a while our behavior gets nudged this way or that by what other people are doing, and it’s quite another to concede that as a consequence, true explanations for the success of an author or a company, unexpected changes in social norms, or the sudden collapse of a seemingly impregnable political regime may simply lie beyond our reach. When faced with the prospect that some outcome of interest cannot be explained in terms of special attributes or conditions, therefore, a common fallback is to assume that it was instead determined by a small number of important or influential people. So it is to this topic that we turn next.