You are not a Gadget: A Manifesto

Empathy inflation can also lead to the lesser, but still substantial, evils of incompetence, trivialization, dishonesty, and narcissism. You cannot live, for example, without killing bacteria. Wouldn’t you be projecting your own fantasies on single-cell organisms that would be indifferent to them at best? Doesn’t it really become about you instead of the cause at that point? Do you go around blowing up other people’s toothbrushes? Do you think the bacteria you saved are morally equivalent to former slaves—and if you do, haven’t you diminished the status of those human beings? Even if you can follow your passion to free and protect the world’s bacteria with a pure heart, haven’t you divorced yourself from
the reality of interdependence and transience of all things? You can try to avoid killing bacteria on special occasions, but you need to kill them to live. And even if you are willing to die for your cause, you can’t prevent bacteria from devouring your own body when you die.

Obviously the example of bacteria is extreme, but it shows that the circle is only meaningful if it is finite. If we lose the finitude, we lose our own center and identity. The fable of the Bacteria Liberation Front can serve as a parody of any number of extremist movements on the left or the right.

At the same time, I have to admit that I find it impossible to come to a definitive position on many of the most familiar controversies. I am all for animal rights, for instance, but only as a hypocrite. I eat chicken, but I can’t eat cephalopods—octopus and squid—because I admire their neurological evolution so intensely. (Cephalopods also suggest an alternate way to think about the long-term future of technology that avoids certain moral dilemmas—something I’ll explain later in the book.)

How do I draw my circle? I just spend time with the various species and decide if they feel like they are in my circle or not. I’ve raised chickens and somehow haven’t felt empathy toward them. They are little more than feathery servo-controlled mechanisms compared to goats, for instance, which I have also raised, and will not eat. On the other hand, a colleague of mine, virtual reality researcher Adrian Cheok, feels such empathy with chickens that he built teleimmersion suits for them so that he could telecuddle them from work. We all have to live with our imperfect ability to discern the proper boundaries of our circles of empathy. There will always be cases where reasonable people will disagree. I don’t go around telling other people not to eat cephalopods or goats.

The border between person and nonperson might be found somewhere in the embryonic sequence from conception to baby, or in the development of the young child, or the teenager. Or it might be best defined in the phylogenetic path from ape to early human, or perhaps in the cultural history of ancient peasants leading to modern citizens. It might exist somewhere in a continuum between small and large computers. It might have to do with which thoughts you have; maybe self-reflective thoughts or the moral capacity for empathy makes you human. These are some of the many gates to personhood that have been proposed,
but none of them seem definitive to me. The borders of personhood remain variegated and fuzzy.

Paring the Circle

Just because we are unable to know precisely where the circle of empathy should lie does not mean that we are unable to know anything at all about it. If we are only able to be approximately moral, that doesn’t mean we should give up trying to be moral at all. The term “morality” is usually used to describe our treatment of others, but in this case I am applying it to ourselves just as much.

The dominant open digital culture places digital information processing in the role of the embryo as understood by the religious right, or the bacteria in my reductio ad absurdum fable. The error is classical, but the consequences are new. I fear that we are beginning to design ourselves to suit digital models of us, and I worry about a leaching of empathy and humanity in that process.

The rights of embryos are based on extrapolation, while the rights of a competent adult person are as demonstrable as anything can be, since people speak for themselves. There are plenty of examples where it’s hard to decide where to place faith in personhood because a proposed being, while it might be deserving of empathy, cannot speak for itself.

Should animals have the same rights as humans? There are special perils when some people hear voices, and extend empathy, that others do not. Wherever possible, these are exactly the decisions that must be left to the people closest to a given situation, because otherwise we'll ruin personal freedom by enforcing metaphysical ideas on one another.

In the case of slavery, it turned out that, given a chance, slaves could not just speak for themselves: they could speak intensely and well. Moses was unambiguously a person. Descendants of more recent slaves, like Martin Luther King Jr., demonstrated transcendent eloquence and empathy.

The new twist in Silicon Valley is that some people—very influential people—believe they are hearing algorithms and crowds and other internet-supported nonhuman entities speak for themselves. I don’t hear those voices, though—and I believe those who do are fooling themselves.

Thought Experiments: The Ship of Theseus Meets the Infinite Library of Borges

To help you learn to doubt the fantasies of the cybernetic totalists, I offer two dueling thought experiments.

The first one has been around a long time. As Daniel Dennett tells it: Imagine a computer program that can simulate a neuron, or even a network of neurons. (Such programs have existed for years and in fact are getting quite good.) Now imagine a tiny wireless device that can send and receive signals to neurons in the brain. Crude devices a little like this already exist; years ago I helped Joe Rosen, a reconstructive plastic surgeon at Dartmouth Medical School, build one—the “nerve chip,” which was an early attempt to route around nerve damage using prosthetics.
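Programs of this kind are now standard fare in computational neuroscience. As a minimal illustration, here is a textbook leaky integrate-and-fire model (a common simplification, not the specific simulators the author refers to):

```python
# A minimal leaky integrate-and-fire neuron simulation (illustrative sketch).
# Membrane voltage decays toward rest and is driven up by input current;
# when it crosses threshold, the neuron "spikes" and resets.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
    """Integrate membrane voltage over time; return the spike times."""
    v = v_rest
    spikes = []
    for t, current in enumerate(input_current):
        # Leaky integration: decay toward rest, plus input drive.
        dv = (-(v - v_rest) + resistance * current) * (dt / tau)
        v += dv
        if v >= v_threshold:
            spikes.append(t * dt)
            v = v_reset
    return spikes

# A constant 2.0 (arbitrary units) input for 100 time steps
# produces a regular spike train.
spike_times = simulate_lif([2.0] * 100)
print(spike_times)  # [27.0, 55.0, 83.0]
```

Real neuron simulators model far more (ion channels, dendritic geometry, synaptic chemistry), but the basic loop is the same: integrate, threshold, reset.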

To get the thought experiment going, hire a neurosurgeon to open your skull. If that’s an inconvenience, swallow a nano-robot that can perform neurosurgery. Replace one nerve in your brain with one of those wireless gadgets. (Even if such gadgets were already perfected, connecting them would not be possible today. The artificial neuron would have to engage all the same synapses—around seven thousand, on average—as the biological nerve it replaced.)

Next, the artificial neuron will be connected over a wireless link to a simulation of a neuron in a nearby computer. Every neuron has unique chemical and structural characteristics that must be included in the program. Do the same with your remaining neurons. There are between 100 billion and 200 billion neurons in a human brain, so even at only a second per neuron, this will require thousands of years.
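A quick back-of-the-envelope check of that timescale, assuming one second per neuron and no breaks:

```python
# Rough arithmetic for the neuron-replacement timescale described above.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

neurons_low, neurons_high = 100e9, 200e9  # 100-200 billion neurons

years_low = neurons_low / SECONDS_PER_YEAR    # about 3,200 years
years_high = neurons_high / SECONDS_PER_YEAR  # about 6,300 years

print(f"{years_low:,.0f} to {years_high:,.0f} years")
```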

Now for the big question: Are you still conscious after the process has been completed?

Furthermore, because the computer is completely responsible for the dynamics of your brain, you can forgo the physical artificial neurons and let the neuron-control programs connect with one another through software alone. Does the computer then become a person? If you believe in consciousness, is your consciousness now in the computer, or perhaps in the software? The same question can be asked about souls, if you believe in them.

Bigger Borges

Here’s a second thought experiment. It addresses the same question from the opposite angle. Instead of changing the program running on the computer, it changes the design of the computer.

First, imagine a marvelous technology: an array of flying laser scanners that can measure the trajectories of all the hailstones in a storm. The scanners send all the trajectory information to your computer via a wireless link.

What would anyone do with this data? As luck would have it, there’s a wonderfully geeky store in this thought experiment called the Ultimate Computer Store, which sells a great many designs of computers. In fact, every possible computer design that has fewer than some really large number of logic gates is kept in stock.

You arrive at the Ultimate Computer Store with a program in hand. A salesperson gives you a shopping cart, and you start trying out your program on various computers as you wander the aisles. Once in a while you’re lucky, and the program you brought from home will run for a reasonable period of time without crashing on a computer. When that happens, you drop the computer in the shopping cart.

For a program, you could even use the hailstorm data. Recall that a computer program is nothing but a list of numbers; there must be some computers in the Ultimate Computer Store that will run it! The strange thing is that each time you find a computer that runs the hailstorm data as a program, the program does something different.
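That claim can be made concrete with a toy sketch: the same list of numbers, fed to two hypothetical machine designs (both invented here purely for illustration), produces entirely different behavior.

```python
# The same list of numbers means different things on different machines.
program = [1, 5, 1, 7, 2, 0]  # just numbers; meaning depends on the design

def machine_a(code):
    """Design A: 1 = push the next number, 2 = add top two, 0 = halt."""
    stack, i = [], 0
    while i < len(code):
        op = code[i]
        if op == 1:
            stack.append(code[i + 1])
            i += 2
        elif op == 2:
            stack.append(stack.pop() + stack.pop())
            i += 1
        else:  # 0 halts, returning the top of the stack
            return stack[-1]
    return None

def machine_b(code):
    """Design B: every number is simply rendered as a letter."""
    return "".join(chr(65 + n) for n in code)

print(machine_a(program))  # 12 (pushes 5 and 7, adds them)
print(machine_b(program))  # BFBHCA (the same numbers as letters)
```

Neither design is privileged by the numbers themselves; what the "program" does is entirely a fact about the machine interpreting it, which is the point of the Ultimate Computer Store.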

After a while, you end up with a few million word processors, some amazing video games, and some tax-preparation software—all the same program, as it runs on different computer designs. This takes time; in the real world the universe probably wouldn’t support conditions for life long enough for you to make a purchase. But this is a thought experiment, so don’t be picky.

The rest is easy. Once your shopping cart is filled with a lot of computers that run the hailstorm data, settle down in the store’s café. Set up the computer from the first thought experiment, the one that’s running a copy of your brain. Now go through all your computers and compare what each one does with what the computer from the first experiment
does. Do this until you find a computer that runs the hailstorm data as a program equivalent to your brain.

How do you know when you’ve found a match? There are endless options. For mathematical reasons, you can never be absolutely sure of what a big program does or if it will crash, but if you found a way to be satisfied with the software neuron replacements in the first thought experiment, you have already chosen your method to approximately evaluate a big program. Or you could even find a computer in your cart that interprets the motion of the hailstorm over an arbitrary period of time as equivalent to the activity of the brain program over a period of time. That way, the dynamics of the hailstorm are matched to the brain program beyond just one moment in time.

After you’ve done all this, is the hailstorm now conscious? Does it have a soul?

The Metaphysical Shell Game

The alternative to sprinkling magic dust on people is sprinkling it on computers, the hive mind, the cloud, the algorithm, or some other cybernetic object. The right question to ask is, Which choice is crazier?

If you try to pretend to be certain that there’s no mystery in something like consciousness, the mystery that is there can pop out elsewhere in an inconvenient way and ruin your objectivity as a scientist. You enter into a metaphysical shell game that can make you dizzy. For instance, you can propose that consciousness is an illusion, but by definition consciousness is the one thing that isn’t reduced if it is an illusion.

There’s a way that consciousness and time are bound together. If you try to remove any potential hint of mysteriousness from consciousness, you end up mystifying time in an absurd way.

Consciousness is situated in time, because you can't experience a lack of time, and you can't experience the future. If consciousness isn't anything but a false thought in the computer that is your brain, or the universe, then what exactly is it that is situated in time? The present moment, the only other thing that could be situated in time, must in that case be a freestanding object, independent of the way it is experienced.

The present moment is a rough concept, from a scientific point of view, because of relativity and the latency of thoughts moving in the brain. We have no means of defining either a single global physical present moment or a precise cognitive present moment. Nonetheless, there must be some anchor, perhaps a very fuzzy one, somewhere, somehow, for it to be possible to even speak of it.

Maybe you could imagine the present moment as a metaphysical marker traveling through a timeless version of reality, in which the past and the future are already frozen in place, like a recording head moving across a hard disk.

If you are certain the experience of time is an illusion, all you have left is time itself. Something has to be situated, in a kind of metatime or something, in order for the illusion of the present moment to take place at all. You force yourself to say that time itself travels through reality. This is an absurd, circular thought.

To call consciousness an illusion is to give time a supernatural quality—maybe some kind of spooky nondeterminism. Or you can choose a different shell in the game and say that time is natural (not supernatural), and that the present moment is only a possible concept because of consciousness.

The mysterious stuff can be shuffled around, but it is best to just admit when some trace of mystery remains, in order to be able to speak as clearly as possible about the many things that can actually be studied or engineered methodically.

I acknowledge that there are dangers when you allow for the legitimacy of a metaphysical idea (like the potential for consciousness to be something beyond computation). No matter how careful you are not to “fill in” the mystery with superstitions, you might encourage some fundamentalists or new-age romantics to cling to weird beliefs. “Some dreadlocked computer scientist says consciousness might be more than a computer? Then my food supplement must work!”

But the danger of an engineer pretending to know more than he really does is the greater danger, especially when he can reinforce the illusion through the use of computation. The cybernetic totalists awaiting the Singularity are nuttier than the folks with the food supplements.

The Zombie Army

Do fundamental metaphysical—or supposedly antimetaphysical—beliefs trickle down into the practical aspects of our thinking or our personalities? They do. They can turn a person into what philosophers call a “zombie.”

Zombies are familiar characters in philosophical thought experiments. They are like people in every way except that they have no internal experience. They are unconscious, but give no externally measurable evidence of that fact. Zombies have played a distinguished role as fodder in the rhetoric used to discuss the mind-body problem and consciousness research. There has been much debate about whether a true zombie could exist, or if internal subjective experience inevitably colors either outward behavior or measurable events in the brain in some way.
