TRUE NAMES

Author: Vernor Vinge

People frequently tell me that they’re absolutely certain that no computer could ever be sentient, conscious, self-willed, or in any other way “aware” of itself. They’re often shocked when I ask back what makes them sure that they, themselves, possess these admirable qualities. The reply is that, if they’re sure of anything at all, it is that “
I’m aware - hence I’m aware.
” Yet, what do such convictions really mean? Since “self-awareness” ought to be an awareness of what’s going on within one’s mind, no realist could maintain for long that people really have much insight, in the literal sense of seeing-in.

Isn’t it remarkable how certainly we feel that we’re self-aware—that we have such broad abilities to know what’s happening inside ourselves? The evidence for that is weak, indeed. It is true that some people seem to have special excellences, which we sometimes call “insights”, for assessing the attitudes and motivations of other people. And certain individuals even sometimes make good evaluations of themselves. But that doesn’t justify our using names like
insight
or
self-awareness
for such abilities. Why not simply call them “person-sights” or “person-awarenesses?” Is there really good reason to suppose that skills like these are very different from the ways we learn the other kinds of things we learn? Instead of seeing them as “seeing-in,” we could regard them as quite the opposite: just one more way of “figuring-out.” Perhaps we learn about ourselves the same ways that we learn about un-self-ish things.

The fact is that the parts of ourselves which we call “self-aware” comprise only a small portion of our mind. They work by building simulated worlds of their own—worlds which are greatly simplified, as compared with either the real world outside, or with the immense computer systems inside the brain: systems which no one can pretend, today, to understand. And our worlds of simulated awareness are worlds of simple magic, wherein each and every imagined object is invested with meanings and purposes. Consider how one can scarcely see a hammer except as something to hammer with, or see a ball except as something to throw and catch. Why are we so constrained to perceive things, not as they are, but as they can be used?
Because the highest levels of our mind are goal-directed problem-solvers.
That is to say that the machines inside our heads evolved, originally, to meet various built-in or acquired needs such as comfort, nutrition, defense and reproduction. Later, in the last few million years, we evolved even more powerful sub-machines which, in ways we don’t yet understand, seem to correlate and analyze to discover which kinds of actions cause which sorts of effects; in a word, to discover what we call knowledge. And though we often like to think that knowledge is abstract, and that our search for it is pure and good in itself–still, we ultimately use it for its ability to tell us what to do in order to gain whichever ends we seek (even when we conclude that in order to do that, we may first need to gain yet more and more knowledge).

Thus because, as we say, “knowledge is power”, our knowledge itself becomes enmeshed in those webs of ways we reach our goals. And that’s the key: it isn’t any use for us to know, unless that knowledge helps to tell us what to do. This is so wrought into the conscious mind’s machinery that it seems foolishness to say it:
No knowledge is of any use unless we have a use for it
.

Now we come to the point of consciousness: that word refers to some parts of the mind most specialized for knowing how to use other systems. But these so-called ‘conscious’ ways to think do not know much about how those other systems actually work. Sometimes, of course, it pays to know such things: if you know how something works then you’ll be better at repairing it when it breaks; furthermore, the better you understand a mechanism, the easier it is to find new ways to adapt it to other purposes.

Thus, a person who sustains an injured leg may begin, for the first time, consciously to make theories about how walking works: “
To turn to the left, I’ll have to push myself that way
”—and then one can start to figure out,
with what could I push–against what
? Similarly, when we’re forced to face an unusually hard problem, we sometimes become more reflective, and try to understand something of how the rest of the mind ordinarily solves problems; at such times one finds oneself saying such things as,
“Now I must get organized. Why can’t I concentrate on the important questions and not get distracted by those other inessential details?”

Paradoxically, it is often at those very moments—the times when our minds come closer than usual to comprehending how they themselves work, and we perhaps succeed in engaging what little knowledge we do have about our own mechanisms, so that we can alter or repair them—paradoxically, these are often just the times when, consciously, we think our mental processes are not working so well and, as we say, we feel “confused”. Nonetheless, even these more “conscious” attempts at self-inspection still remain mostly confined to the pragmatic, magic world of symbol-signs. No human being seems ever to have succeeded in using self-analysis to discover much about how the programs underneath might really work.

I say again that we ‘drive’ ourselves–our cars, our bodies and our minds–in very much the self-same ways. The players of our computer-game machines control and guide what happens in their great machines: by using symbols, spells and images–as well as secret, private names. And we, ourselves—that is, the parts of us that we call “conscious”—do very much the same: in effect, we sit in front of mental computer-terminals, attempting to steer and guide the great unknown engines of the mind, not by understanding how those engines work, but just by selecting simple names from menu-lists of symbols which appear, from time to time, upon our mental screen-displays.

But really, when one thinks of it, it scarcely could be otherwise! Consider what would happen if our minds indeed could really see inside themselves. What could possibly be worse than to be presented with a clear view of the trillion-wire networks of our nerve-cell connections? Our scientists have peered at those structures for years with powerful microscopes, yet failed to come up with comprehensive theories of what those networks do and how.

What about the claims of mystical thinkers that there are other, better ways to see the mind? One way they recommend is learning how to train the conscious mind to stop its usual sorts of thoughts and then attempt (by holding still) to see and hear the fine details of mental life. Would that be any different–or any better–than seeing them through instruments? Perhaps—except that it doesn’t face the fundamental problem of how to understand a complicated thing! For, if we suspend our usual ways of thinking, we’ll be bereft of all the parts of mind already trained to interpret complicated phenomena. Anyway, even if one could observe and detect the signals that emerge from other, normally inaccessible portions of the mind, these would probably make no sense to the systems involved with consciousness. To see why not, let’s return once more to understanding such simple things as how we walk.

Suppose that, when you walk about, you were indeed able to see and hear the signals in your spinal cord and lower brain. Would you be able to make any sense of them? Perhaps, but not easily. Indeed, it is easy to do such experiments, using simple biofeedback devices to make those signals audible and visible; the result is that one may indeed more quickly learn to perform a new skill, such as better using an injured limb. However, just as before, this does not appear to work through gaining a ‘conscious’ understanding of how those circuits work; instead the experience is very much like business as usual: we gain control by acquiring just one more form of semi-conscious symbol-magic. Presumably what happens is that a new control system is assembled somewhere in the nervous system, and interfaced with superficial signals we can know about. However, biofeedback does not appear to provide any different insights into how learning works than do our ordinary, built-in senses.

In any case, our locomotion-scientists have been tapping such signals for decades, using electronic instruments. Using that data, they have been able to develop various partial theories about the kinds of interactions and regulation-systems which are involved. However, these theories have not emerged from relaxed meditation about, or passive observation of, those complicated biological signals; what little we have learned has come from deliberate and intense exploitation of the accumulated discoveries of three centuries of our scientists’ and mathematicians’ study of analytical mechanics and a century of newer theories about servo-control engineering. It is generally true in science that mere observational “insights” rarely lead to new understandings. One must first have some glimmerings of the form of some new theory, or of a novel method for describing processes: one needs a “new idea”. Some other avenue must supply new magic tokens for us to use to represent the “causes” and the “purposes” of those phenomena.

Then from where do we get the new ideas we need? For any single individual, of course, most concepts come from the societies and cultures that one grows up in. As for the rest of our ideas, the ones we “get” all by ourselves, these, too, come from societies—but, now, the ones inside our individual minds. For a human mind is not in any real sense a single entity, nor does a brain have a single, central way to work. Brains do not secrete thought the way livers secrete bile; a brain consists of a huge assembly of different sorts of sub-machines, each of which does a different kind of job—each useful to some other parts. For example, we use distinct sections of the brain for hearing the sounds of words, as opposed to recognizing other kinds of natural sounds or musical pitches. There is even solid evidence that a special part of the brain is specialized for seeing and recognizing faces, as opposed to visual perception of other, ordinary things. I suspect that there are, inside the cranium, perhaps as many as a hundred different kinds of computers, each with a somewhat different basic architecture; these have been accumulating over the past four hundred million years of our evolution. They are wired together into a great multi-resource network of specialists, in which each section knows how to call on certain other sections to get things done which serve its purposes. And each sub-system uses different styles of programming and different forms of representation; there is no standard language-code.
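The architecture sketched above—specialists with private, mutually unintelligible representations that cooperate only by invoking one another through symbolic tokens—can be illustrated with a toy program. This is purely a sketch of the idea, not anything from the essay itself; all the class and token names here are hypothetical inventions for illustration.

```python
# Toy "society of mind" sketch (all names hypothetical). Each specialist
# keeps a private internal representation that no other specialist can
# read; cooperation happens only through named symbolic requests.

class Specialist:
    def __init__(self, name):
        self.name = name
        self._internal = set()   # private representation, opaque to peers
        self.peers = {}          # other specialists, reachable only by name

    def request(self, token, payload):
        """Handle a symbolic request; callers see only the symbolic reply."""
        raise NotImplementedError


class FaceRecognizer(Specialist):
    def request(self, token, payload):
        if token == "recognize-face":
            # HOW recognition works stays hidden inside this sub-machine.
            return "familiar" if payload in self._internal else "unfamiliar"
        return None


class Planner(Specialist):
    def request(self, token, payload):
        if token == "greet-if-known":
            # The planner cannot inspect the recognizer's internals; it can
            # only send a token and act on the symbolic answer it gets back.
            verdict = self.peers["faces"].request("recognize-face", payload)
            return "wave hello" if verdict == "familiar" else "walk on"
        return None


faces = FaceRecognizer("faces")
faces._internal = {"alice"}        # pretend this face was learned earlier
planner = Planner("planner")
planner.peers["faces"] = faces

print(planner.request("greet-if-known", "alice"))  # wave hello
print(planner.request("greet-if-known", "bob"))    # walk on
```

The point of the sketch is the interface: the planner gets useful work out of the recognizer while knowing nothing of its mechanism—exactly the "symbols which initiate their use" relation described below.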

Accordingly, if one part of that Society of Mind were to inquire about another part, the two would most likely turn out to use substantially different languages and architectures. In such a case, if A were to ask B a question about how B works then how could B understand that question, and how could A understand the answer? Communication is often difficult enough between two different human tongues. But the signals used by the different portions of the human mind are even less likely to be even remotely as similar as two human dialects with sometimes-corresponding roots. More likely, they are simply too different to communicate at all—except through symbols which initiate their use.

Now, one might ask,
“Then, how do people doing different jobs communicate, when they have different backgrounds, thoughts, and purposes?”
The answer is that this problem is easier, because a person knows so much more than do the smaller fragments of that person’s mind. And, besides, we are all raised in similar ways, and this provides a solid base of common knowledge. But, even so, we overestimate how well we actually communicate.

The many jobs that people do may seem different on the surface, but they are all very much the same, to the extent that they all have a common base in what we like to call “common sense”—that is, the knowledge shared by all of us. This means that we do not really need to tell each other as much as we suppose. Often, when we “explain” something, we scarcely explain anything new at all; instead, we merely show some examples of what we mean, and some non-examples; these indicate to the listener how to link up various structures already known. In short, we often just tell “which” instead of “what”.

Consider how poorly people can communicate about so many seemingly simple things. We can’t say how we balance on a bicycle, or how we tell a shadow from a real thing, or, even how one fetches a fact from one’s memory. Again, one might complain, “
It isn’t fair to complain about our inability to express things about things like seeing or balancing or remembering. Those are things we learned before we even learned to speak!
” But, though that criticism is fair in some respects, it also illustrates how hard communication must be for all the sub-parts of the mind that never learned to talk at all—and these are most of what we are. The idea of “meaning” itself is really a matter of size and scale: it only makes sense to ask what something means in a system which is large enough to have many meanings. In very small systems, the idea of something having a meaning becomes as vacuous as saying that a brick is a very small house.

Now it is easy enough to say that the mind is a society, but that idea by itself is useless unless we can say more about how it is organized. If all those specialized parts were equally competitive, there would be only anarchy, and the more we learned, the less we’d be able to do. So there must be some kind of administration, perhaps organized roughly in hierarchies, like the divisions and subdivisions of an industry or of a human political society. What would those levels do? In all the large societies we know which work efficiently, the lower levels exercise the more specialized working skills, while the higher levels are concerned with longer-range plans and goals. And this is another fundamental reason why it is so hard to translate between our conscious and unconscious thoughts!

Why is it so hard to translate between conscious and unconscious thoughts? Because their languages are so different. The kinds of terms and symbols we use on the conscious level are primarily for expressing our choices among and between the things we know how to do; this is how we form and express our various kinds of plans and goals. However, those resources we choose to use involve other sorts of mechanisms and processes, about which ‘we’ know very much less. So when our conscious probes try to descend into the myriads of smaller and smaller sub-machines which make the mind, they encounter alien representations, used for increasingly specialized purposes; that is, systems that use smaller and smaller inner “languages.”
