The Singularity Is Near: When Humans Transcend Biology
Ray Kurzweil
If we want to re-create a brain that understands Chinese using people as little cogs in the re-creation, we would really need billions of people simulating the processes in a human brain (essentially the people would be simulating a computer, which would be simulating human brain methods). This would require a rather large room, indeed. And even if extremely efficiently organized, this system would run many thousands of times slower than the Chinese-speaking brain it is attempting to re-create.
Now, it’s true that none of these billions of people would need to know anything about Chinese, and none of them would necessarily know what is going on in this elaborate system. But that’s equally true of the neural connections in a real human brain. None of the hundred trillion connections in my brain knows anything about this book I am writing, nor do any of them know English, nor any of the other things that I know. None of them is conscious of this chapter, nor of any of the things I am conscious of. Probably none of them is conscious at all. But the entire system of them—that is, Ray Kurzweil—is conscious. At least I’m claiming that I’m conscious (and so far, these claims have not been challenged).
So if we scale up Searle’s Chinese Room to be the rather massive “room” it needs to be, who’s to say that the entire system of billions of people simulating a brain that knows Chinese isn’t conscious? Certainly it would be correct to say that such a system knows Chinese. And we can’t say that it is not conscious any more than we can say that about any other brain process. We can’t know the subjective experience of another entity (and in at least some of Searle’s other writings, he appears to acknowledge this limitation). And this massive multibillion-person “room” is an entity. And perhaps it is conscious. Searle is just declaring ipso facto that it isn’t conscious and that this conclusion is obvious. It may seem that way when you call it a room and talk about a limited number of people manipulating a small number of slips of paper. But as I said, such a system doesn’t remotely work.
Another key to the philosophical confusion implicit in the Chinese Room argument is specifically related to the complexity and scale of the system. Searle says that whereas he cannot prove that his typewriter or tape recorder is not conscious, he feels it is obvious that they are not. Why is this so obvious? At least one reason is because a typewriter and a tape recorder are relatively simple entities.
But the existence or absence of consciousness is not so obvious in a system that is as complex as the human brain—indeed, one that may be a direct copy of the organization and “causal powers” of a real human brain. If such a “system” acts human and knows Chinese in a human way, is it conscious? Now the answer is no longer so obvious. What Searle is saying in the Chinese Room argument is that we take a simple “machine” and then consider how absurd it is to regard such a simple machine as conscious. The fallacy has everything to do with the scale and complexity of the system. Complexity alone does not necessarily give us consciousness, but the Chinese Room tells us nothing about whether or not such a system is conscious.
Kurzweil’s Chinese Room
I have my own conception of the Chinese Room—call it Ray Kurzweil’s Chinese Room.
In my thought experiment there is a human in a room. The room has decorations from the Ming dynasty, including a pedestal on which sits a mechanical typewriter. The typewriter has been modified so that its keys are marked with Chinese symbols instead of English letters. And the mechanical linkages have been cleverly altered so that when the human types in a question in Chinese, the typewriter does not type the question but instead types the answer to the question. Now, the person receives questions in Chinese characters and dutifully presses the appropriate keys on the typewriter. The typewriter types out not the question, but the appropriate answer. The human then passes the answer outside the room.
So here we have a room with a human in it who appears from the outside to know Chinese yet clearly does not. And clearly the typewriter does not know Chinese, either. It is just an ordinary typewriter with its mechanical linkages modified. So despite the fact that the man in the room can answer questions in Chinese, who or what can we say truly knows Chinese? The decorations?
Now, you might have some objections to my Chinese Room.
You might point out that the decorations don’t seem to have any significance.
Yes, that’s true. Neither does the pedestal. The same can be said for the human and for the room.
You might also point out that the premise is absurd. Just changing the mechanical linkages in a mechanical typewriter could not possibly enable it to convincingly answer questions in Chinese (not to mention the fact that we can’t fit the thousands of Chinese-character symbols on the keys of a typewriter).
Yes, that’s a valid objection, as well. The only difference between my Chinese Room conception and the several proposed by Searle is that it is patently obvious in my conception that it couldn’t possibly work and is by its very nature absurd. That may not be quite as apparent to many readers or listeners with regard to the Searle Chinese Rooms. However, it is equally the case.
And yet we can make my conception work, just as we can make Searle’s conceptions work. All you have to do is to make the typewriter linkages as complex as a human brain. And that’s theoretically (if not practically) possible. But the phrase “typewriter linkages” does not suggest such vast complexity. The same is true of Searle’s description of a person manipulating slips of paper or following a book of rules or a computer program. These are all equally misleading conceptions.
Searle writes: “Actual human brains cause consciousness by a series of specific neurobiological processes in the brain.” However, he has yet to provide any basis for such a startling view. To illuminate Searle’s perspective, I quote from a letter he sent me:
It may turn out that rather simple organisms like termites or snails are conscious. . . . The essential thing is to recognize that consciousness is a biological process like digestion, lactation, photosynthesis, or mitosis, and you should look for its specific biology as you look for the specific biology of these other processes.38
I replied:
Yes, it is true that consciousness emerges from the biological process(es) of the brain and body, but there is at least one difference. If I ask the question, “does a particular entity emit carbon dioxide,” I can answer that question through clear objective measurement. If I ask the question, “is this entity conscious,” I may be able to provide inferential arguments—possibly strong and convincing ones—but not clear objective measurement.
With regard to the snail, I wrote:
Now when you say that a snail may be conscious, I think what you are saying is the following: that we may discover a certain neurophysiological basis for consciousness (call it “x”) in humans such that when this basis was present humans were conscious, and when it was not present humans were not conscious. So we would presumably have an objectively measurable basis for consciousness. And then if we found that in a snail, we could conclude that it was conscious. But this inferential conclusion is just a strong suggestion, it is not a proof of subjective experience on the snail’s part. It may be that humans are conscious because they have “x” as well as some other quality that essentially all humans share, call this “y.” The “y” may have to do with a human’s level of complexity or something having to do with the way we are organized, or with the quantum properties of our microtubules (although this may be part of “x”), or something else entirely. The snail has “x” but doesn’t have “y” and so it may not be conscious.
How would one settle such an argument? You obviously can’t ask the snail. Even if we could imagine a way to pose the question, and it answered yes, that still wouldn’t prove that it was conscious. You can’t tell from its fairly simple and more-or-less predictable behavior. Pointing out that it has “x” may be a good argument, and many people may be convinced by it. But it’s just an argument—not a direct measurement of the snail’s subjective experience. Once again, objective measurement is incompatible with the very concept of subjective experience.
Many such arguments are taking place today—though not so much about snails as about higher-level animals. It is apparent to me that dogs and cats are conscious (and Searle has said that he acknowledges this as well). But not all humans accept this. I can imagine scientific ways of strengthening the argument by pointing out many similarities between these animals and humans, but again these are just arguments, not scientific proof.
Searle expects to find some clear biological “cause” of consciousness, and he seems unable to acknowledge that either understanding or consciousness may emerge from an overall pattern of activity. Other philosophers, such as Daniel Dennett, have articulated such “pattern emergent” theories of consciousness. But whether it is “caused” by a specific biological process or by a pattern of activity, Searle provides no foundation for how we would measure or detect consciousness. Finding a neurological correlate of consciousness in humans does not prove that consciousness is necessarily present in other entities with the same correlate, nor does it prove that the absence of such a correlate indicates the absence of consciousness. Such inferential arguments necessarily stop short of direct measurement. In this way, consciousness differs from objectively measurable processes such as lactation and photosynthesis.
As I discussed in chapter 4, we have discovered a biological feature unique to humans and a few other primates: the spindle cells. And these cells with their deep branching structures do appear to be heavily involved with our conscious responses, especially emotional ones. Is the spindle cell structure the neurophysiological basis “x” for human consciousness? What sort of experiment could possibly prove that? Cats and dogs don’t have spindle cells. Does that prove that they have no conscious experience?
Searle writes: “It is out of the question, for purely neurobiological reasons, to suppose that the chair or the computer is conscious.” I agree that chairs don’t seem to be conscious, but as for computers of the future that have the same complexity, depth, subtlety, and capabilities as humans, I don’t think we can rule out this possibility. Searle just assumes that they are not, and that it is “out of the question” to suppose otherwise. There is really nothing more of a substantive nature to Searle’s “arguments” than this tautology.
Now, part of the appeal of Searle’s stance against the possibility of a computer’s being conscious is that the computers we know today just don’t seem to be conscious. Their behavior is brittle and formulaic, even if they are occasionally unpredictable. But as I pointed out above, computers today are on the order of one million times simpler than the human brain, which is at least one reason they don’t share all of the endearing qualities of human thought. But that disparity is rapidly shrinking and will ultimately reverse itself in a couple of decades. The early twenty-first-century machines I am talking about in this book will appear and act very differently than the relatively simple computers of today.
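A rough back-of-the-envelope calculation (my own sketch, not the book's, and the one-year doubling time is an illustrative assumption) shows why a million-fold disparity can vanish in roughly two decades: closing a factor of a million takes about twenty doublings, since 2^20 is just over a million.

```python
import math

# Back-of-the-envelope sketch (my own illustration, not from the text):
# if today's computers are roughly a million times simpler than the human
# brain, and price-performance doubles about once a year (an assumed rate),
# the gap closes after about log2(1,000,000) doublings.
gap = 1_000_000
doublings_needed = math.log2(gap)      # about 19.9 doublings
years_per_doubling = 1.0               # assumption chosen for illustration
years_to_parity = doublings_needed * years_per_doubling

print(f"doublings needed: {doublings_needed:.1f}")
print(f"years to close the gap: {years_to_parity:.0f}")  # ~20 years, i.e. a couple of decades
```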
Searle articulates the view that nonbiological entities are capable only of manipulating logical symbols, and he appears to be unaware of other paradigms. It is true that manipulating symbols is largely how rule-based expert systems and game-playing programs work. But the current trend is in a different direction, toward self-organizing chaotic systems that employ biologically inspired methods, including processes derived directly from the reverse engineering of the hundreds of neuron clusters we call the human brain.
Searle acknowledges that biological neurons are machines—indeed, that the entire brain is a machine. As I discussed in chapter 4, we have already re-created in an extremely detailed way the “causal powers” of individual neurons as well as those of substantial neuron clusters. There is no conceptual barrier to scaling these efforts up to the entire human brain.
The Criticism from the Rich-Poor Divide
Another concern expressed by Jaron Lanier and others is the “terrifying” possibility that through these technologies the rich may gain certain advantages and opportunities to which the rest of humankind does not have access.39
Such inequality, of course, would be nothing new, but with regard to this issue the law of accelerating returns has an important and beneficial impact. Because of the ongoing exponential growth of price-performance, all of these technologies quickly become so inexpensive as to be almost free.
Look at the extraordinary amount of high-quality information available at no cost on the Web today that did not exist at all just a few years ago. And if one wants to point out that only a fraction of the world today has Web access, keep in mind that the explosion of the Web is still in its infancy, and access is growing exponentially. Even in the poorest countries of Africa, Web access is expanding rapidly.
Each example of information technology starts out with early-adoption versions that do not work very well and that are unaffordable except by the elite. Subsequently the technology works a bit better and becomes merely expensive. Then it works quite well and becomes inexpensive. Finally it works extremely well and is almost free. The cell phone, for example, is somewhere between these last two stages. Consider that a decade ago if a character in a movie took out a portable telephone, this was an indication that this person must be very wealthy, powerful, or both. Yet there are societies around the world in which the majority of the population were farming with their hands two decades ago and now have thriving information-based economies with widespread use of cell phones (for example, Asian societies, including rural areas of China). This lag from very expensive early adopters to very inexpensive, ubiquitous adoption now takes about a decade. But in keeping with the doubling of the paradigm-shift rate each decade, this lag will be only five years a decade from now. In twenty years, the lag will be only two to three years (see chapter 2).
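To make the arithmetic explicit, here is a minimal sketch (mine, not the book's) that treats the adoption lag as halving every decade, consistent with the figures quoted above: about ten years today, five years a decade from now, and two to three years in twenty years.

```python
# Minimal sketch (my own illustration): model the lag between expensive early
# adoption and near-free ubiquity as halving with each decade, per the
# doubling of the paradigm-shift rate described above.
def adoption_lag(years_from_now: float, current_lag_years: float = 10.0) -> float:
    """Lag in years, assuming it halves every ten years."""
    return current_lag_years * 0.5 ** (years_from_now / 10.0)

for t in (0, 10, 20):
    print(f"in {t:2d} years: lag of about {adoption_lag(t):.1f} years")
# in  0 years: lag of about 10.0 years
# in 10 years: lag of about 5.0 years
# in 20 years: lag of about 2.5 years  (i.e., "two to three years")
```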