The Singularity Is Near: When Humans Transcend Biology
Ray Kurzweil

BILL: Everything of value is fleeting.

RAY: Yes, but it gets replaced by something of even greater value.

BILL: True, that’s why we need to keep innovating.

The Vexing Question of Consciousness

 

If you could blow the brain up to the size of a mill and walk about inside, you would not find consciousness.

                   —G. W. Leibniz

 

Can one ever remember love? It’s like trying to summon up the smell of roses in a cellar. You might see a rose, but never the perfume.

                   —Arthur Miller8

 

At one’s first and simplest attempts to philosophize, one becomes entangled in questions of whether when one knows something, one knows that one knows it, and what, when one is thinking of oneself, is being thought about, and what is doing the thinking. After one has been puzzled and bruised by this problem for a long time, one learns not to press these questions: the concept of a conscious being is, implicitly, realized to be different from that of an unconscious object. In saying that a conscious being knows something, we are saying not only that he knows it, but that he knows that he knows it, and that he knows that he knows that he knows it, and so on, as long as we care to pose the question: there is, we recognize, an infinity here, but it is not an infinite regress in the bad sense, for it is the questions that peter out, as being pointless, rather than the answers.

                   —J. R. Lucas, Oxford philosopher, in his 1961 essay “Minds, Machines, and Gödel”9

 

Dreams are real while they last; can we say more of life?

                   —Havelock Ellis

 

Will future machines be capable of having emotional and spiritual experiences? We have discussed several scenarios for a nonbiological intelligence to display the full range of emotionally rich behavior exhibited by biological humans today. By the late 2020s we will have completed the reverse engineering of the human brain, which will enable us to create nonbiological systems that match and exceed the complexity and subtlety of humans, including our emotional intelligence.

A second scenario is that we could upload the patterns of an actual human into a suitable nonbiological, thinking substrate. A third, and the most compelling, scenario involves the gradual but inexorable progression of humans themselves from biological to nonbiological. That has already started with the benign introduction of devices such as neural implants to ameliorate disabilities and disease. It will progress with the introduction of nanobots in the bloodstream, which will be developed initially for medical and antiaging applications. Later, more sophisticated nanobots will interface with our biological neurons to augment our senses, provide virtual and augmented reality from within the nervous system, assist our memories, and perform other routine cognitive tasks. We will then be cyborgs, and from that foothold in our brains, the nonbiological portion of our intelligence will expand its powers exponentially. As I discussed in chapters 2 and 3, we see ongoing exponential growth of every aspect of information technology, including price-performance, capacity, and rate of adoption. Given that the mass and energy required to compute and communicate each bit of information are extremely small (see chapter 3), these trends can continue until our nonbiological intelligence vastly exceeds that of the biological portion. Since our biological intelligence is essentially fixed in its capacity (except for some relatively modest optimization from biotechnology), the nonbiological portion will ultimately predominate. In the 2040s, when the nonbiological portion will be billions of times more capable, will we still link our consciousness to the biological portion of our intelligence?

Clearly, nonbiological entities will claim to have emotional and spiritual experiences, just as we do today. They—we—will claim to be human and to have the full range of emotional and spiritual experiences that humans claim to have. And these will not be idle claims; they will evidence the sort of rich, complex, and subtle behavior associated with such feelings.

But how will these claims and behaviors—compelling as they will be—relate to the subjective experience of nonbiological humans? We keep coming back to the very real but ultimately unmeasurable (by fully objective means) issue of consciousness. People often talk about consciousness as if it were a clear property of an entity that can readily be identified, detected, and gauged.
If there is one crucial insight that we can make regarding why the issue of consciousness is so contentious, it is the following:

There exists no objective test that can conclusively determine its presence.

Science is about objective measurements and their logical implications, but the very nature of objectivity is that you cannot measure subjective experience—you can only measure correlates of it, such as behavior (and by behavior, I include internal behavior—that is, the actions of the components of an entity, such as neurons and their many parts). This limitation has to do with the very nature of the concepts of “objectivity” and “subjectivity.” Fundamentally we cannot penetrate the subjective experience of another entity with direct objective measurement. We can certainly make arguments about it, such as, “Look inside the brain of this nonbiological entity; see how its methods are just like those of a human brain.” Or, “See how its behavior is just like human behavior.” But in the end, these remain just arguments. No matter how convincing the behavior of a nonbiological person, some observers will refuse to accept the consciousness of such an entity unless it squirts neurotransmitters, is based on DNA-guided protein synthesis, or has some other specific biologically human attribute.

We assume that other humans are conscious, but even that is an assumption. There is no consensus among humans about the consciousness of nonhuman entities, such as higher animals. Consider the debates regarding animal rights, which have everything to do with whether animals are conscious or just quasi machines that operate by “instinct.” The issue will be even more contentious with regard to future nonbiological entities that exhibit behavior and intelligence even more humanlike than those of animals.

In fact these future machines will be even more humanlike than humans today. If that seems like a paradoxical statement, consider that much of human thought today is petty and derivative. We marvel at Einstein’s ability to conjure up the theory of general relativity from a thought experiment or Beethoven’s ability to imagine symphonies that he could never hear. But these instances of human thought at its best are rare and fleeting. (Fortunately we have a record of these fleeting moments, reflecting a key capability that has separated humans from other animals.) Our future primarily nonbiological selves will be vastly more intelligent and so will exhibit these finer qualities of human thought to a far greater degree.

So how will we come to terms with the consciousness that will be claimed by nonbiological intelligence? From a practical perspective such claims will be accepted. For one thing, “they” will be us, so there won’t be any clear distinctions between biological and nonbiological intelligence. Furthermore, these nonbiological entities will be extremely intelligent, so they’ll be able to convince other humans (biological, nonbiological, or somewhere in between) that they are conscious. They’ll have all the delicate emotional cues that convince us today that humans are conscious. They will be able to make other humans laugh and cry. And they’ll get mad if others don’t accept their claims. But this is fundamentally a political and psychological prediction, not a philosophical argument.

I do take issue with those who maintain that subjective experience either doesn’t exist or is an inessential quality that can safely be ignored. The issue of who or what is conscious and the nature of the subjective experiences of others are fundamental to our concepts of ethics, morality, and law. Our legal system is based largely on the concept of consciousness, with particularly serious attention paid to actions that cause suffering—an especially acute form of conscious experience—to a (conscious) human or that end the conscious experience of a human (for example, murder).

Human ambivalence regarding the ability of animals to suffer is reflected in legislation as well. We have laws against animal cruelty, with greater emphasis given to more intelligent animals, such as primates (although we appear to have a blind spot with regard to the massive animal suffering involved in factory farming, but that’s the subject of another treatise).

My point is that we cannot safely dismiss the question of consciousness as merely a polite philosophical concern. It is at the core of society’s legal and moral foundation. The debate will change when a machine—nonbiological intelligence—can persuasively argue on its own that it/he/she has feelings that need to be respected. Once it can do so with a sense of humor—which is particularly important for convincing others of one’s humanness—it is likely that the debate will be won.

I expect that actual change in our legal system will come initially from litigation rather than legislation, as litigation often precipitates such transformations. In a precursor of what is to come, attorney Martine Rothblatt, a partner in Mahon, Patusky, Rothblatt & Fisher, filed a mock motion on September 16, 2003, to prevent a corporation from disconnecting a conscious computer. The motion was argued in a mock trial in the biocyberethics session at the International Bar Association conference.10

We can measure certain correlates of subjective experience (for example, correlating certain patterns of objectively measurable neurological activity with objectively verifiable reports of certain subjective experiences, such as hearing a sound). But we cannot penetrate to the core of subjective experience through objective measurement. As I mentioned in chapter 1, we are dealing with the difference between third-person “objective” experience, which is the basis of science, and first-person “subjective” experience, which is a synonym for consciousness.

Consider that we are unable to truly experience the subjective experiences of others. The experience-beaming technology of 2029 will enable the brain of one person to experience only the sensory experiences (and potentially some of the neurological correlates of emotions and other aspects of experience) of another person. But that will still not convey the same internal experience as that undergone by the person beaming the experience, because his or her brain is different. Every day we hear reports about the experiences of others, and we may even feel empathy in response to the behavior that results from their internal states. But because we’re exposed to only the behavior of others, we can only imagine their subjective experiences. Because it is possible to construct a perfectly consistent, scientific worldview that omits the existence of consciousness, some observers come to the conclusion that it’s just an illusion.

Jaron Lanier, the virtual-reality pioneer, takes issue (in the third of his six objections to what he calls “cybernetic totalism” in his treatise “One Half a Manifesto”) with those who maintain “that subjective experience either doesn’t exist, or is unimportant because it is some sort of ambient or peripheral effect.”11 As I pointed out, there is no device or system we can postulate that could definitively detect subjectivity (conscious experience) associated with an entity. Any such purported device would have philosophical assumptions built into it. Although I disagree with much of Lanier’s treatise (see the “Criticism from Software” section in chapter 9), I concur with him on this issue and can even imagine (and empathize with!) his feelings of frustration at the dictums of “cybernetic totalists” such as myself (not that I accept this characterization).12 Like Lanier I even accept the subjective experience of those who maintain that there is no such thing as subjective experience.

Precisely because we cannot resolve issues of consciousness entirely through objective measurement and analysis (science), a critical role exists for philosophy. Consciousness is the most important ontological question. After all, if we truly imagine a world in which there is no subjective experience (a world in which there is swirling stuff but no conscious entity to experience it), that world may as well not exist. In some philosophical traditions, both Eastern (certain schools of Buddhist thought, for example), and Western (specifically, observer-based interpretations of quantum mechanics), that is exactly how such a world is regarded.

RAY: We can debate what sorts of entities are or can be conscious. We can argue about whether consciousness is an emergent property or caused by some specific mechanism, biological or otherwise. But there’s another mystery associated with consciousness, perhaps the most important one.

MOLLY 2004: Okay, I’m all ears.
.
