This makes sense to our intuition on some level, but it does not make much sense logically. For we would then be compelled to look for an explanation of the mechanism which does the perceiving of all the active symbols, if it is not covered by what we have described so far. Of course, a "soulist" would not have to look any further-he would merely assert that the perceiver of all this neural action is the soul, which cannot be described in physical terms, and that is that. However, we shall try to give a "nonsoulist" explanation of where consciousness arises.
Our alternative to the soulist explanation-and a disconcerting one it is, too-is to stop at the symbol level and say, "This is it-this is what consciousness is. Consciousness is that property of a system that arises whenever there exist symbols in the system which obey triggering patterns somewhat like the ones described in the past several sections."
Put so starkly, this may seem inadequate. How does it account for the sense of "I", the sense of self?
Subsystems
There is no reason to expect that "I", or "the self", should not be represented by a symbol. In fact, the symbol for the self is probably the most complex of all the symbols in the brain. For this reason, I choose to put it on a new level of the hierarchy and call it a subsystem, rather than a symbol. To be precise, by "subsystem", I mean a constellation of symbols, each of which can be separately activated under the control of the subsystem itself. The image I wish to convey of a subsystem is that it functions almost as an independent "subbrain", equipped with its own repertoire of symbols which can trigger each other internally. Of course, there is also much communication between the subsystem and the "outside" world-that is, the rest of the brain. "Subsystem" is just another name for an overgrown symbol, one which has gotten so complicated that it has many subsymbols which interact among themselves. Thus, there is no strict level distinction between symbols and subsystems.
Because of the extensive links between a subsystem and the rest of the brain (some of which will be described shortly), it would be very difficult to draw a sharp boundary between the subsystem and the outside; but even if the border is fuzzy, the subsystem is quite a real thing. The interesting thing about a subsystem is that, once activated and left to its own devices, it can work on its own. Thus, two or more subsystems of the brain of an individual may operate simultaneously. I have noticed this happening on occasion in my own brain: sometimes I become aware that two different melodies are running through my mind, competing for "my" attention. Somehow, each melody is being manufactured, or "played", in a separate compartment of my brain. Each of the systems responsible for drawing a melody out of my brain is presumably activating a number of symbols, one after another, completely oblivious to the other system doing the same thing. Then they both attempt to communicate with a third subsystem of my brain-my self-symbol-and it is at that point that the "I" inside my brain gets wind of what's going on: in other words, it starts picking up a chunked description of the activities of those two subsystems.
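As a rough sketch of this picture (a toy model of my own devising, not anything the text specifies; every class and symbol name below is invented for illustration), one can imagine two "melody" subsystems running interleaved, each triggering its own chain of symbols while oblivious to the other, with the self-symbol receiving only a chunked record of each:

```python
# A minimal, hedged sketch of symbols grouped into subsystems.
# All names (Symbol, Subsystem, the melodies) are illustrative inventions.

class Symbol:
    def __init__(self, name, triggers=()):
        self.name = name
        self.triggers = list(triggers)  # names of symbols this one can activate

class Subsystem:
    """An 'overgrown symbol': a constellation of symbols that trigger
    each other internally, largely independent of the rest of the brain."""
    def __init__(self, name, symbols):
        self.name = name
        self.symbols = {s.name: s for s in symbols}
        self.active = []  # chunked record of what fired, in order

    def step(self, current):
        """Activate one symbol; return the next symbol it triggers, if any."""
        self.active.append(current)
        nxt = self.symbols[current].triggers
        return nxt[0] if nxt else None

# Two melody subsystems, each oblivious to the other.
melody_a = Subsystem("melody A",
                     [Symbol("do", ["re"]), Symbol("re", ["mi"]), Symbol("mi")])
melody_b = Subsystem("melody B",
                     [Symbol("sol", ["fa"]), Symbol("fa", ["mi'"]), Symbol("mi'")])

cursors = {melody_a: "do", melody_b: "sol"}
while any(cursors.values()):
    for sub in (melody_a, melody_b):   # interleaved, quasi-simultaneous
        if cursors[sub]:
            cursors[sub] = sub.step(cursors[sub])

# The "self-symbol" picks up only a chunked description of each subsystem.
for sub in (melody_a, melody_b):
    print(f"self-symbol hears from {sub.name}: {sub.active}")
```

The interleaving loop merely stands in for simultaneity; actual brain subsystems would of course run in parallel.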
Subsystems and Shared Code
Typical subsystems might be those that represent the people we know intimately. They are represented in such a complex way in our brains that their symbols enlarge to the rank of subsystem, becoming able to act autonomously, making use of some resources in our brains for support. By this, I mean that a subsystem symbolizing a friend can activate many of the symbols in my brain just as I can. For instance, I can fire up my subsystem for a good friend and virtually feel myself in his shoes, running through thoughts which he might have, activating symbols in sequences which reflect his thinking patterns more accurately than my own. It could be said that my model of this friend, as embodied in a subsystem of my brain, constitutes my own chunked description of his brain.
Does this subsystem include, then, a symbol for every symbol which I think is in his brain? That would be redundant. Probably the subsystem makes extensive use of symbols already present in my brain. For instance, the symbol for "mountain" in my brain can be borrowed by the subsystem, when it is activated. The way in which that symbol is then used by the subsystem will not necessarily be identical to the way it is used by my full brain. In particular, if I am talking with my friend about the Tien Shan mountain range in Central Asia (neither of us having been there), and I know that a number of years ago he had a wonderful hiking experience in the Alps, then my interpretation of his remarks will be colored in part by my imported images of his earlier Alpine experience, since I will be trying to imagine how he visualizes the area.
In the vocabulary we have been building up in this Chapter, we could say that the activation of the "mountain" symbol in me is under control of my subsystem representing him. The effect of this is to open up a different window onto my memories from the one which I normally use-namely, my "default option" switches from the full range of my memories to the set of my memories of his memories. Needless to say, my representations of his memories are only approximations to his actual memories, which are complex modes of activation of the symbols in his brain, inaccessible to me.
My representations of his memories are also complex modes of activation of my own symbols-those for "primordial" concepts, such as grass, trees, snow, sky, clouds, and so on. These are concepts which I must assume are represented in him "identically" to the way they are in me. I must also assume a similar representation in him of even more primordial notions: the experiences of gravity, breathing, fatigue, color, and so forth.
Less primordial, but perhaps a nearly universal human quality, is the enjoyment of reaching a summit and seeing a view. Therefore, the intricate processes in my brain which are responsible for this enjoyment can be taken over directly by the friend-subsystem without much loss of fidelity.
We could go on to attempt to describe how I understand an entire tale told by my friend, a tale filled with many complexities of human relationships and mental experiences. But our terminology would quickly become inadequate. There would be tricky recursions connected with representations in him of representations in me of representations in him of one thing and another. If mutual friends figured in the tale being told, I would unconsciously look for compromises between my image of his representations of them, and my own images of them. Pure recursion would simply be an inappropriate formalism for dealing with symbol amalgams of this type. And I have barely scratched the surface!
We plainly lack the vocabulary today for describing the complex interactions that are possible between symbols. So let us stop before we get bogged down.
We should note, however, that computer systems are beginning to run into some of the same kinds of complexity, and therefore some of these notions have been given names. For instance, my "mountain" symbol is analogous to what in computer jargon is called shared (or reentrant) code: code which can be used by two or more separate timesharing programs running on a single computer. The fact that activation of one symbol can have different results when it is part of different subsystems can be explained by saying that its code is being processed by different interpreters. Thus, the triggering patterns in the "mountain" symbol are not absolute; they are relative to the system within which the symbol is activated.
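To make the analogy concrete, here is a minimal sketch in Python (again my own illustration, with all names invented, not anything from the text): a single "mountain" routine keeps no internal state of its own, so two different "interpreters"-my own perspective and the friend-subsystem-can share it and yet get different results, because each supplies its own context.

```python
# A hedged illustration of the shared (reentrant) code analogy.
# mountain_symbol and the two contexts are invented for this sketch.

def mountain_symbol(context):
    """Reentrant 'code': no internal state, so any number of subsystems
    may share it. What it evokes depends entirely on the context passed in."""
    return f"mountain, as seen through {context['memories']}"

my_interpreter     = {"memories": "the full range of my own memories"}
friend_interpreter = {"memories": "my memories of his Alpine memories"}

# The same symbol, activated under two different subsystems:
print(mountain_symbol(my_interpreter))      # my "default option"
print(mountain_symbol(friend_interpreter))  # window switched by friend-subsystem
```

Because the routine holds no mutable state, both "programs" could be mid-call in it at once without interfering-which is precisely what makes shared code shareable.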
The reality of such "subbrains" may seem doubtful to some. Perhaps the following quote from M. C. Escher, as he discusses how he creates his periodic plane-filling drawings, will help to make clear what kind of phenomenon I am referring to:

While drawing I sometimes feel as if I were a spiritualist medium, controlled by the creatures which I am conjuring up. It is as if they themselves decide on the shape in which they choose to appear. They take little account of my critical opinion during their birth and I cannot exert much influence on the measure of their development. They are usually very difficult and obstinate creatures.
Here is a perfect example of the near-autonomy of certain subsystems of the brain, once they are activated. Escher's subsystems seemed to him almost to be able to override his esthetic judgment. Of course, this opinion must be taken with a grain of salt, since those powerful subsystems came into being as a result of his many years of training and submission to precisely the forces that molded his esthetic sensitivities. In short, it is wrong to divorce the subsystems in Escher's brain from Escher himself or from his esthetic judgment. They constitute a vital part of his esthetic sense, where "he" is the complete being of the artist.
The Self-Symbol and Consciousness
A very important side effect of the self-subsystem is that it can play the role of "soul", in the following sense: in communicating constantly with the rest of the subsystems and symbols in the brain, it keeps track of what symbols are active, and in what way. This means that it has to have symbols for mental activity-in other words, symbols for symbols, and symbols for the actions of symbols.
Of course, this does not elevate consciousness or awareness to any "magical", nonphysical level. Awareness here is a direct effect of the complex hardware and software we have described. Still, despite its earthly origin, this way of describing awareness-as the monitoring of brain activity by a subsystem of the brain itself-seems to resemble the nearly indescribable sensation which we all know and call "consciousness".
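Purely as a hedged illustration of this monitoring idea (my own sketch, under invented names, not the book's proposal): a system that keeps symbols for the activity of its other symbols can then generate statements about itself from that very record.

```python
# A toy "self-subsystem": symbols for symbols. All names are invented.

class Brain:
    def __init__(self):
        self.self_record = []  # the self-subsystem's chunked log

    def activate(self, symbol):
        # ... ordinary symbol activity would happen here ...
        # The self-subsystem notes it: a symbol *for* a symbol's action.
        self.self_record.append(f"the symbol '{symbol}' is active")

    def introspect(self):
        """A statement the system makes about itself, built from its own log."""
        return "I am aware that " + "; and that ".join(self.self_record) + "."

brain = Brain()
brain.activate("mountain")
brain.activate("friend")
print(brain.introspect())
```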
Certainly one can see that the complexity here is enough that many unexpected effects could be created. For instance, it is quite plausible that a computer program with this kind of structure would make statements about itself which would have a great deal of resemblance to statements which people commonly make about themselves. This includes insisting that it has free will, that it is not explicable as a "sum of its parts", and so on. (On this subject, see the article "Matter, Mind, and Models" by M. Minsky in his book Semantic Information Processing.)
What kind of guarantee is there that a subsystem, such as I have here postulated, which represents the self, actually exists in our brains? Could a whole complex network of symbols such as has been described above evolve without a self-symbol evolving? How could these symbols and their activities play out "isomorphic" mental events to real events in the surrounding universe, if there were no symbol for the host organism? All the stimuli coming into the system are centered on one small mass in space. It would be quite a glaring hole in a brain's symbolic structure not to have a symbol for the physical object in which it is housed, and which plays a larger role in the events it mirrors than any other object. In fact, upon reflection, it seems that the only way one could make sense of the world surrounding a localized animate object is to understand the role of that object in relation to the other objects around it. This necessitates the existence of a self-symbol; and the step from symbol to subsystem is merely a reflection of the importance of the self-symbol, and is not a qualitative change.
Our First Encounter with Lucas
The Oxford philosopher J. R. Lucas (not connected with the Lucas numbers described earlier) wrote a remarkable article in 1961, entitled "Minds, Machines, and Gödel". His views are quite opposite to mine, and yet he manages to mix many of the same ingredients together in coming up with his opinions. The following excerpt is quite relevant to what we have just been discussing:
At one's first and simplest attempts to philosophize, one becomes entangled in questions of whether when one knows something one knows that one knows it, and what, when one is thinking of oneself, is being thought about, and what is doing the thinking. After one has been puzzled and bruised by this problem for a long time, one learns not to press these questions: the concept of a conscious being is, implicitly, realized to be different from that of an unconscious object. In saying that a conscious being knows something, we are saying not only that he knows it, but that he knows that he knows it, and that he knows that he knows that he knows it, and so on, as long as we care to pose the question: there is, we recognize, an infinity here, but it is not an infinite regress in the bad sense, for it is the questions that peter out, as being pointless, rather than the answers. The questions are felt to be pointless because the concept contains within itself the idea of being able to go on answering such questions indefinitely. Although conscious beings have the power of going on, we do not wish to exhibit this simply as a succession of tasks they are able to perform, nor do we see the mind as an infinite sequence of selves and super-selves and super-super-selves. Rather, we insist that a conscious being is a unity, and though we talk about parts of the mind, we do so only as a metaphor, and will not allow it to be taken literally.
The paradoxes of consciousness arise because a conscious being can be aware of itself, as well as of other things, and yet cannot really be construed as being divisible into parts. It means that a conscious being can deal with Gödelian questions in a way in which a machine cannot, because a conscious being can both consider itself and its performance and yet not be other than that which did the performance. A machine can be made in a manner of speaking to "consider" its performance, but it cannot take this "into account"