You Are Not a Gadget: A Manifesto


Another way to put this is that there might be some form of creativity other than selection. I certainly don’t know, but it seems pointless to insist that what we already understand must suffice to explain what we don’t understand.

What I’m struck by is the lack of intellectual modesty in the computer science community. We are happy to enshrine into engineering designs mere hypotheses—and vague ones at that—about the hardest and most profound questions faced by science, as if we already possess perfect knowledge.

If it eventually turns out that there is something about an individual human mind that is different from what can be achieved by a noosphere, that “special element” might turn out to have any number of qualities. It is possible that we will have to await scientific advances that will only come in fifty, five hundred, or five thousand years before we can sufficiently appreciate our own brains.

Or it might turn out that a distinction will forever be based on principles we cannot manipulate. This might involve types of computation that are unique to the physical brain, maybe relying on forms of causation that depend on remarkable and nonreplicable physical conditions. Or it might involve software that could only be created by the long-term work of evolution, which cannot be reverse-engineered or mucked with in any accessible way. Or it might even involve the prospect, dreaded by some, of dualism, a reality for consciousness as apart from mechanism.

The point is that we don’t know. I love speculating about the workings of the brain. Later in the book, I’ll present some thoughts on how to use computational metaphors to at least vaguely imagine how a process like meaning might work in the brain. But I would abhor anyone using my speculations as the basis of a design for a tool to be used by real people. An aeronautical engineer would never put passengers in a plane based on an untested, speculative theory, but computer scientists commit analogous sins all the time.

An underlying problem is that technical people overreact to religious extremists. If a computer scientist says that we don’t understand how the brain works, will that empower an ideologue to then claim that some particular religion has been endorsed? This is a real danger, but over-claiming by technical people is the greater danger, since we end up confusing ourselves.

It Is Still Possible to Get Rid of Crowd Ideology in Online Designs

From an engineering point of view, the difference between a social networking site and the web as it existed before such sites were introduced is a matter of small detail. You could always create a list of links to your friends on your website, and you could always send e-mails to a circle of friends announcing whatever you cared to. All that the social networking services offer is a prod to use the web in a particular way, according to a particular philosophy.

If anyone wanted to reconsider social network designs, it would be easy enough to take a standoffish approach to describing what goes on between people. It could be left to people to communicate what they want to say about their relationships in their own way.

If someone wants to use words like “single” or “looking” in a self-description, no one is going to prevent that. Search engines will easily find instances of those words. There’s no need for an imposed, official category.
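
To make the design difference concrete, here is a minimal sketch in Python (the profiles, names, and search helper are invented for illustration, not a description of any real service): a plain keyword search over freely written self-descriptions surfaces the people who call themselves “single,” with no official relationship-status field imposed on anyone.

```python
# A toy sketch: finding self-described "single" people in free-form profile
# text, with no imposed "relationship status" category. The profiles and
# names below are invented for illustration.

profiles = {
    "alia": "Painter, insomniac, newly single and enjoying it.",
    "ben":  "Happily married, collects typewriters.",
    "cory": "Single parent, amateur astronomer, looking for hiking partners.",
}

def search(profiles, term):
    """Return the names whose free-text self-description mentions the term."""
    term = term.lower()
    return [name for name, text in profiles.items() if term in text.lower()]

print(search(profiles, "single"))   # ['alia', 'cory']
print(search(profiles, "looking"))  # ['cory']
```

The point of the sketch is only that nothing technical is lost by leaving the description in the person’s own words; the search still works.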

If you read something written by someone who used the term “single” in a custom-composed, unique sentence, you will inevitably get a first whiff of the subtle experience of the author, something you would not get from a multiple-choice database. Yes, it would be a tiny bit more work for everyone, but the benefits of semiautomated self-presentation are illusory. If you start out by being fake, you’ll eventually have to put in twice the effort to undo the illusion if anything good is to come of it.

This is an example of a simple way in which digital designers could choose to be modest about their claims to understand the nature of human beings. Enlightened designers leave open the possibility of either metaphysical specialness in humans or the potential for unforeseen creative processes that aren’t explained by ideas, like evolution, that we already believe we can capture in software systems. That kind of modesty is the signature quality of being human-centered.

There would be trade-offs. Adopting a metaphysically modest approach would make it harder to use database techniques to create instant lists of people who are, say, emo, single, and affluent. But I don’t think that would be such a great loss. A stream of misleading information is no asset.

It depends on how you define yourself. An individual who is receiving a flow of reports about the romantic status of a group of friends must learn to think in the terms of the flow if it is to be perceived as worth reading at all. So here is another example of how people are able to lessen themselves so as to make a computer seem accurate. Am I accusing all those hundreds of millions of users of social networking sites of reducing themselves in order to be able to use the services? Well, yes, I am.

I know quite a few people, mostly young adults but not all, who are proud to say that they have accumulated thousands of friends on Facebook. Obviously, this statement can only be true if the idea of friendship is reduced. A real friendship ought to introduce each person to unexpected weirdness in the other. Each acquaintance is an alien, a well of unexplored difference in the experience of life that cannot be imagined or accessed in any way but through genuine interaction. The idea of friendship in database-filtered social networks is certainly reduced from that.

It is also important to notice the similarity between the lords and peasants of the cloud. A hedge fund manager might make money by using the computational power of the cloud to calculate fantastical financial instruments that make bets on derivatives in such a way as to invent out of thin air the phony virtual collateral for stupendous risks. This is a subtle form of counterfeiting, and is precisely the same maneuver a socially competitive teenager makes in accumulating fantastical numbers of “friends” on a service like Facebook.

Ritually Faked Relationships Beckon to Messiahs Who May Never Arrive

But let’s suppose you disagree that the idea of friendship is being reduced, and are confident that we can keep straight the two uses of the
word, the old use and the new use. Even then one must remember that the customers of social networks are not the members of those networks.

The real customer is the advertiser of the future, but this creature has yet to appear in any significant way as this is being written. The whole artifice, the whole idea of fake friendship, is just bait laid by the lords of the clouds to lure hypothetical advertisers—we might call them messianic advertisers—who could someday show up.

The hope of a thousand Silicon Valley start-ups is that firms like Facebook are capturing extremely valuable information called the “social graph.” Using this information, an advertiser might hypothetically be able to target all the members of a peer group just as they are forming their opinions about brands, habits, and so on.

Peer pressure is the great power behind adolescent behavior, goes the reasoning, and adolescent choices become life choices. So if someone could crack the mystery of how to make perfect ads using the social graph, an advertiser would be able to design peer pressure biases in a population of real people who would then be primed to buy whatever the advertiser is selling for their whole lives.

The situation with social networks is layered with multiple absurdities. The advertising idea hasn’t made any money so far, because ad dollars appear to be better spent on searches and in web pages. If the revenue never appears, then a weird imposition of a database-as-reality ideology will have colored generations of teen peer group and romantic experiences for no business or other purpose.

If, on the other hand, the revenue does appear, evidence suggests that its impact will be truly negative. When Facebook has attempted to turn the social graph into a profit center in the past, it has created ethical disasters.

A famous example was 2007’s Beacon. This was a suddenly imposed feature that was hard to opt out of. When a Facebook user made a purchase anywhere on the internet, the event was broadcast to all the so-called friends in that person’s network. The motivation was to find a way to package peer pressure as a service that could be sold to advertisers. But it meant that, for example, there was no longer a way to buy a surprise birthday present. The commercial lives of Facebook users were no longer their own.

The idea was instantly disastrous, and inspired a revolt. The MoveOn network, for instance, which is usually involved in electoral politics, activated its huge membership to complain loudly. Facebook made a quick retreat.

The Beacon episode cheered me, and strengthened my sense that people are still able to steer the evolution of the net. It was one good piece of evidence against metahuman technological determinism. The net doesn’t design itself. We design it.

But even after the Beacon debacle, the rush to pour money into social networking sites continued without letup. The only hope for social networking sites from a business point of view is for a magic formula to appear in which some method of violating privacy and dignity becomes acceptable. The Beacon episode proved that this cannot happen too quickly, so the question now is whether the empire of Facebook users can be lulled into accepting it gradually.

The Truth About Crowds

The term “wisdom of crowds” comes from the title of a book by James Surowiecki and is often introduced with the story of an ox in a marketplace. In the story, a bunch of people all guess the animal’s weight, and the average of the guesses turns out to be generally more reliable than any one person’s estimate.

A common idea about why this works is that the mistakes various people make cancel one another out; an additional, more important idea is that there’s at least a little bit of correctness in the logic and assumptions underlying many of the guesses, so they center around the right answer. (This latter formulation emphasizes that individual intelligence is still at the core of the collective phenomenon.) At any rate, the effect is repeatable and is widely held to be one of the foundations of both market economies and democracies.
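
As a toy illustration of the averaging argument (the numbers below are invented and only stand in for the ox story, not for any real data): if each guess is the true weight plus independent noise, the average of many guesses ends up much closer to the truth than a typical single guess, because the individual errors largely cancel.

```python
# A toy illustration of the ox-weighing story (all numbers invented):
# many noisy individual guesses, whose average lands close to the true
# weight because the independent errors largely cancel out.
import random

random.seed(0)
TRUE_WEIGHT = 1198  # pounds; an assumed value for the sketch

# Each guesser is roughly right on average but individually off by quite a bit.
guesses = [random.gauss(TRUE_WEIGHT, 120) for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)
typical_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"crowd estimate:           {crowd_estimate:.0f}")
print(f"crowd error:              {abs(crowd_estimate - TRUE_WEIGHT):.1f}")
print(f"typical individual error: {typical_individual_error:.1f}")
```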

People have tried to use computing clouds to tap into this collective wisdom effect with fanatic fervor in recent years. There are, for instance, well-funded—and prematurely well-trusted—schemes to apply stock market-like systems to programs in which people bet on the viability of answers to seemingly unanswerable questions, such as when terrorist
events will occur or when stem cell therapy will allow a person to grow new teeth. There is also an enormous amount of energy being put into aggregating the judgments of internet users to create “content,” as in the collectively generated link website Digg.

How to Use a Crowd Well

The reason the collective can be valuable is precisely that its peaks of intelligence and stupidity are not the same as the ones usually displayed by individuals.

What makes a market work, for instance, is the marriage of collective and individual intelligence. A marketplace can’t exist only on the basis of having prices determined by competition. It also needs entrepreneurs to come up with the products that are competing in the first place.

In other words, clever individuals, the heroes of the marketplace, ask the questions that are answered by collective behavior. They bring the ox to the market.

There are certain types of answers that ought not be provided by an individual. When a government bureaucrat sets a price, for instance, the result is often inferior to the answer that would come from a reasonably informed collective that is reasonably free of manipulation or runaway internal resonances. But when a collective designs a product, you get design by committee, which is a derogatory expression for a reason.

Collectives can be just as stupid as any individual—and, in important cases, stupider. The interesting question is whether it’s possible to map out where the one is smarter than the many.

There is a substantial history to this topic, and varied disciplines have accumulated instructive results. Every authentic example of collective intelligence that I am aware of also shows how that collective was guided or inspired by well-meaning individuals. These people focused the collective and in some cases also corrected for some of the common hive mind failure modes. The balancing of influence between people and collectives is the heart of the design of democracies, scientific communities, and many other long-standing success stories.

The preinternet world provides some great examples of how individual human-driven quality control can improve collective intelligence. For
example, an independent press provides tasty news about politicians by journalists with strong voices and reputations, like the Watergate reporting of Bob Woodward and Carl Bernstein. Without an independent press, composed of heroic voices, the collective becomes stupid and unreliable, as has been demonstrated in many historical instances—most recently, as many have suggested, during the administration of George W. Bush.

Scientific communities likewise achieve quality through a cooperative process that includes checks and balances, and ultimately rests on a foundation of goodwill and “blind” elitism (blind in the sense that ideally anyone can gain entry, but only on the basis of a meritocracy). The tenure system and many other aspects of the academy are designed to support the idea that individual scholars matter, not just the process or the collective.

Yes, there have been plenty of scandals in government, the academy, and the press. No mechanism is perfect. But still here we are, having benefited from all of these institutions. There certainly have been plenty of bad reporters, self-deluded academic scientists, incompetent bureaucrats, and so on. Can the hive mind help keep them in check? The answer provided by experiments in the preinternet world is yes—but only if some signal processing has been placed in the loop.

Signal processing is a bag of tricks engineers use to tweak flows of information. A common example is the way you can set the treble and bass on an audio signal. If you turn down the treble, you are reducing the amount of energy going into higher frequencies, which are composed of tighter, smaller sound waves. Similarly, if you turn up the bass, you are heightening the biggest, broadest waves of sound.
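
For what that amounts to in practice, here is a minimal sketch (the sample rate, test tones, cutoff, and gain are arbitrary choices for illustration; a real tone control would use a shelving filter rather than this blunt spectral edit): “turning down the treble” just means scaling down the energy in the higher-frequency part of the signal.

```python
# A minimal sketch of "turning down the treble": attenuate the high-frequency
# part of a signal's spectrum and compare the high-band energy before and
# after. Signal, cutoff, and gain are invented for illustration.
import numpy as np

fs = 8000                          # sample rate in Hz (assumed)
t = np.arange(fs) / fs             # one second of audio
signal = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

treble = freqs > 2000              # treat everything above 2 kHz as "treble"
spectrum[treble] *= 0.25           # turn the treble down (less energy up high)
quieter_treble = np.fft.irfft(spectrum, n=len(signal))

def band_energy(x, lo, hi):
    """Energy of x in the [lo, hi) frequency band."""
    s = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1 / fs)
    band = (f >= lo) & (f < hi)
    return float(np.sum(np.abs(s[band]) ** 2))

print("high-band energy before:", round(band_energy(signal, 2000, 4000)))
print("high-band energy after: ", round(band_energy(quieter_treble, 2000, 4000)))
```

Running it shows the high-band energy dropping by the square of the gain while the low frequencies are left untouched.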
