
The visual system has evolved to detect those many aspects of the real world that have been important for survival, such as the recognition of food, predators, and possible mates. It is especially interested in moving objects. Evolution will latch onto any features that give useful information. In many cases the brain has to perform its operations as quickly as possible. The neurons themselves are inherently rather slow (compared to transistors in a digital computer), and so the brain has to be organized to carry out many of its “computations” very rapidly. Exactly how it does this we do not yet understand.

It is very easy to convince someone that however he may think his brain works, it certainly doesn’t work like that. This can be demonstrated by the effects of human brain damage, by psychophysical experiments on undamaged humans, or by outlining what we know about monkey brains. What seems a uniform and simple process is in fact the result of elaborate interactions between systems, subsystems, and sub-subsystems. For example, one system determines how we see color, another how we see in three dimensions (although we receive only two-dimensional information from each of our two eyes), and so on. One of the subsystems of the latter depends on the difference between the images in our two eyes; this is called stereopsis. Another deals with perspective. Another uses the fact that objects at a distance subtend a smaller angle than when they are nearer to us. Others deal with occlusion (one object occluding part of an object behind it), shape-from-shading, and so on. Each of these subsystems may well need sub-subsystems to make it work.

Normally all the systems produce roughly the same answer, but by using tricks, such as constructing rather artificial visual scenes, we can pit them against one another and so produce a visual illusion. If a person looks with one eye, through a small hole, into a room built with false perspectives, an object on one side of the room can be made to appear smaller than the same object on the other side. Such a full-scale room, called an Ames room, exists at the Exploratorium in San Francisco. When I was looking into it, some children appeared to be running from side to side. They appeared to grow taller as they ran to one side and to get shorter again as they ran back to the other side. Of course, I knew full well that children never change height in this way, but the illusion was nevertheless completely compelling.

The conception of the visual system as a bag of tricks has been put forward by Rama Ramachandran, mainly as a result of his elegant and ingenious psychophysical studies. He calls his point of view the utilitarian theory of perception, writing:

It may not be too farfetched to suggest that the visual system uses a bewildering array of special-purpose tailor-made tricks and rules-of-thumb to solve its problems. If this pessimistic view of perception is correct, then the task of vision researchers ought to be to uncover these rules rather than to attribute to the system a degree of sophistication that it simply doesn’t possess. Seeking overarching principles may be an exercise in futility.

This approach is at least compatible with what we know of the organization of the cortex in monkeys and with François Jacob’s idea that evolution is a tinkerer. It is, of course, possible that underlying all the various tricks there are just a few basic learning algorithms that, building on the crude structures produced by genetics, produce this complicated variety of mechanisms.

Another thing I discovered was that although much is known about the behavior of neurons in many parts of the visual system (at least in monkeys), nobody really has any clear idea how we actually see anything at all. This unhappy state of affairs is seldom mentioned to students of the subject. Neurophysiologists have some glimpses into how the brain takes the picture apart, how somewhat separate areas of our cerebral cortex process motion, color, shape, position in space, and so on. What is not yet understood is how the brain puts all this together to give us our vivid unitary picture of the world.

I also discovered that there was another aspect of the subject one was not supposed to mention. This was consciousness. Indeed an interest in the topic was usually taken as a sign of approaching senility. This taboo surprised me very much. Of course, I knew that until recently most of the experiments on the visual system of animals were done when the animals were unconscious under an anesthetic so that, strictly speaking, they could not see anything at all. For many years this did not unduly disturb the experimentalists, since they found that the neurons in the brain, even under these restrictive conditions, behaved in such interesting ways. Recently more work has been done on alert animals. Although these animals are technically rather more difficult to study, there are compensations, since the animals are returned to their cages after a normal day’s work and the experimenter can go home to supper. Such animals are usually studied for many months before being sacrificed. (Experiments on anesthetized animals can be much more demanding since they usually last for many, many hours at a stretch, after which the animal is sacrificed straight away.) Curiously enough, hardly any experiments have yet been done on the same sort of neurons, in the same animal, first when it is alert and then when it is under an anesthetic.

It was not only neurophysiologists who disliked talking about consciousness. The same was true of psychophysicists and cognitive scientists. A year or so ago the psychologist George Mandler did organize a course of seminars at the psychology department at UCSD. The seminars showed that there was hardly any consensus as to what the problem was, let alone how to solve it. Most of the speakers seemed to think that no solution was possible in the near future and merely talked around the subject. Only David Zipser (another ex-molecular biologist, now at UCSD) thought as I did, namely that consciousness was likely to involve a special neural mechanism of some sort, probably distributed over the hippocampus and over many areas of the cortex, and that it was not impossible to discover by experiment at least the general nature of the mechanism.

Curiously enough, in biology it is sometimes those basic problems that look impossibly difficult to solve which yield the most easily. This is because there may be so few even remotely possible solutions that eventually one is led inexorably to the correct answer. (An example of such a problem is discussed toward the end of chapter 3.) The biological problems that are really difficult to unscramble are those where there is almost an infinity of plausible answers and one has painstakingly to attempt to distinguish between them.

One main handicap to the experimental study of consciousness is that while people can tell us what they are conscious of (whether they have suddenly lost their color vision, for example, and now only see everything in shades of gray), it is more difficult to obtain this information from monkeys. True, monkeys can be laboriously trained to press one key if they see a vertical line and another if they are shown a horizontal one. But we can ask people to imagine color, or to imagine they are waggling their fingers. It is difficult to instruct monkeys to do this. And yet we can look inside a monkey’s head in much more detail than we can look inside a person’s head. It is therefore not unimportant to have some theory of consciousness, however tentative, to guide experiments on both humans and monkeys. I suspect that consciousness may be able to do without a fully working long-term memory system but that very short-term memory is indispensable to it. This suggests straight away that one should look into the molecular and cellular basis of very short-term memory—a rather neglected subject—and this can be done on animals, even on a cheap and relatively simple animal like a mouse.

And what of theory? It is easy to see that theory of some sort is essential, since any explanation of the brain is going to involve large numbers of neurons interacting in complicated ways. Moreover, the system is highly nonlinear, and it is not easy to guess exactly how any complex model will behave.

I soon found that much theoretical work was going on. It tended to fall into a number of somewhat separate schools, each of which was rather reluctant to quote the work of the others. This is usually characteristic of a subject that is not producing any definite conclusions. (Philosophy and theology might be good examples.) I renewed acquaintance with the theorist David Marr (whom I had originally met in Cambridge) when he came with another theorist, Tomaso (Tommy) Poggio, to the Salk for a month in April 1979 to talk about the visual system. Alas, David is now dead, at the early age of thirty-five, but Tommy (now at M.I.T.) is still alive and well, and has become a close friend. Eventually I met many of the theorists working on the brain (too numerous to list here), mainly by going to meetings. Some I got to know better from personal visits.

Much of this theoretical work was on neural nets—that is, on models in which groups of units (somewhat like neurons) interact in complicated ways to perform some function connected, often rather remotely, with some aspect of psychology. Much work was being done on how such nets could be made to learn, using simple rules—algorithms—devised by the theorists.
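To give a flavor of what such a simple learning rule looks like, here is a minimal sketch of the delta (perceptron-style) rule, one of the simplest algorithms of this kind. The toy network, training patterns, and learning rate are invented purely for illustration and are not taken from any of the models discussed here.

```python
# Minimal sketch of the delta rule: a single threshold unit nudges its
# connection weights whenever its output disagrees with a target.
# (Toy data and parameters, chosen only for illustration.)
import numpy as np

rng = np.random.default_rng(0)

# Four training patterns: 4 binary inputs -> 1 binary target (an arbitrary,
# linearly separable mapping).
inputs = np.array([[0, 0, 1, 1],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [1, 1, 1, 0]], dtype=float)
targets = np.array([1, 0, 1, 0], dtype=float)

weights = rng.normal(scale=0.1, size=4)   # connection strengths
bias = 0.0
rate = 0.1                                # learning rate

def output(x):
    """The unit fires (1) if its weighted input exceeds threshold, else 0."""
    return 1.0 if x @ weights + bias > 0 else 0.0

for epoch in range(50):
    for x, t in zip(inputs, targets):
        error = t - output(x)             # disagreement with the target
        weights += rate * error * x       # strengthen or weaken connections
        bias += rate * error

print([output(x) for x in inputs])        # should now match the targets
```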

A recent two-volume book, entitled Parallel Distributed Processing (PDP), describes much of the work done by one school of theorists, the San Diego group and their friends. It is edited by David Rumelhart (now at Stanford) and Jay McClelland (now at Carnegie-Mellon) and published by Bradford Books. For such a large, rather academic book it has proved to be a best-seller. So striking are the results that the PDP approach is having a dramatic impact both on psychologists and on workers in artificial intelligence (AI), especially those trying to produce a new generation of highly parallel computers. It seems likely to become the new wave in psychology.

There is no doubt that very suggestive results have been produced. For example, we can see how a neural net can store a “memory” of various firing patterns of its “neurons” and how any small part of one of the patterns (the cue) can recall the entire pattern. We can also see how such a system can be taught by experience to learn tacit rules (just as a child first learns the rules of English grammar tacitly, without being able to state them explicitly). One example of such a net, called NetTalk, set up by Terry Sejnowski and Charles Rosenberg, gives rather a striking demonstration of how this little machine can, by experience, learn to pronounce correctly a written English text, even one it has never seen before. Terry, whom I got to know well, demonstrated it one day at a Salk Faculty lunch. (He has also talked about it on the Today show.) This simple model doesn’t understand what it is reading. Its pronunciation is never completely correct, partly because, in English, pronunciation sometimes depends on meaning.
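The pattern-completion effect described above can be sketched with a small Hopfield-style associative net. This is a toy example of that general idea, not NetTalk and not a model taken from the PDP volumes; the patterns and sizes are invented for illustration.

```python
# Toy Hopfield-style associative memory. Patterns of +1/-1 "firing" states
# are stored in symmetric connection weights; a degraded cue is repeatedly
# updated until it settles into the nearest stored pattern.
import numpy as np

patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
], dtype=float)

n = patterns.shape[1]

# Hebbian storage: strengthen the connection between units that fire together.
weights = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(weights, 0.0)            # no unit connects to itself

def recall(cue, steps=10):
    """Repeatedly update every unit from its weighted input until stable."""
    state = cue.astype(float).copy()
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1.0           # break ties arbitrarily
    return state

# A partial cue: the first stored pattern with half of its units silenced.
cue = patterns[0].copy()
cue[4:] = 0.0

print(recall(cue))                        # recovers the full first pattern
print(patterns[0])
```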

In spite of this I have some strong reservations about the work done so far. In the first place, the “units” used almost always have some properties that appear unrealistic. For example, a single unit can produce excitation at some of its terminals and inhibition at others. Our present knowledge of the brain, admittedly limited, suggests that this seldom if ever happens, at least in the neocortex. It is thus impossible to test all such theories at the neurobiological level, since at the very first and most obvious test they fail completely. To this the theorists usually reply that they could easily alter their models to make that aspect of them more realistic, but in practice they never bother to do this. One feels that they don’t really want to know whether their model is right or not. Moreover, the most powerful algorithm now being used (the so-called back-propagation algorithm) also looks highly unlikely in neurobiological terms. All attempts to overcome this particular difficulty appear to me to be very forced. Reluctantly I must conclude that these models are not really theories but rather are “demonstrations.” They are existence proofs that units somewhat like neurons can indeed do surprising things, but there is hardly anything to suggest that the brain actually performs in exactly the way they propose.
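To make the point about mixed-sign connections concrete, here is a minimal back-propagation sketch (a toy written for illustration, not any published network). A tiny two-layer net is trained on an arbitrary task; nothing in the algorithm keeps a single unit's outgoing weights to one sign, so a unit is free to excite some of its targets while inhibiting others.

```python
# Minimal two-layer net trained with back-propagation (sigmoid units,
# squared error). Toy task and parameters, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Task: map 2 binary inputs to [XOR, AND] of those inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0, 0], [1, 0], [1, 0], [0, 1]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 2))   # hidden -> output weights
b2 = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rate = 1.0
for step in range(5000):
    # Forward pass.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)
    # Backward pass: propagate the output error back through the net.
    d_out = (out - Y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= rate * hidden.T @ d_out
    b2 -= rate * d_out.sum(axis=0)
    W1 -= rate * X.T @ d_hidden
    b1 -= rate * d_hidden.sum(axis=0)

print(np.round(out, 2))   # typically close to the targets after training
# Each row of W2 holds one hidden unit's outgoing weights; nothing
# constrains the entries of a row to share a single sign.
print(np.sign(W2))
```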

Of course, it is quite possible that these nets and their algorithms could be used in the design of a new generation of highly parallel computers. The main technical problem here seems to be to find some neat way to embody modifiable connections in silicon chips, but this problem will probably be solved before long.

There are two other criticisms of many of these neural net models. The first is that they don’t act fast enough. Speed is a crucial requirement for animals like ourselves. Most theorists have yet to give speed the weight it deserves. The second concerns relationships. An example might help here. Imagine that two letters—any two letters—are briefly flashed on a screen, one above the other. The task is to say which one is the upper one. (This problem has been suggested independently by the psychologists Stuart Sutherland and Jerry Fodor.) This is easily done by older models, using the processes commonly employed in modern digital computers, but attempts to do it with parallel distributed processing appear to me to be very cumbersome. I suspect that what is missing may be a mechanism of attention. Attention is likely to be a serial process working on top of the highly parallel PDP processes.
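For contrast, the conventional serial solution is almost trivial once each letter has been identified along with its position on the screen; the sketch below simply compares coordinates. The data structure and the two detections are invented for illustration, and the hard part, recognizing the letters in the first place, is assumed to have been done already.

```python
# The which-letter-is-upper task, solved the conventional "digital computer"
# way: once each letter and its position are known, a single comparison
# answers the relational question.
from typing import NamedTuple

class Detection(NamedTuple):
    letter: str
    x: float
    y: float          # screen coordinate; larger means higher on the screen

def upper_letter(a: Detection, b: Detection) -> str:
    """Return whichever letter appears higher on the screen."""
    return a.letter if a.y > b.y else b.letter

print(upper_letter(Detection("Q", 3.0, 8.0), Detection("F", 3.1, 2.0)))  # Q
```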

Part of the trouble with theoretical neuroscience is that it lies somewhat between three other fields. At one extreme we have those researchers working directly on the brain. This is science. It is attempting to discover what devices nature actually uses. At the other extreme lies artificial intelligence. This is engineering. Its object is to produce a device that works in the desired way. The third field is mathematics. Mathematics cares neither for science nor for engineering (except as a source of problems) but only about the relationship between abstract entities.
