The Supposed Selves of Robot Vehicles
I was most impressed when I read about “Stanley”, a robot vehicle developed at the Stanford Artificial Intelligence Laboratory that not too long ago drove all by itself across the Nevada desert, relying just on its laser rangefinders, its television camera, and GPS navigation. I could not help asking myself, “How much of an ‘I’ does Stanley have?”
In an interview shortly after the triumphant desert crossing, one gung-ho industrialist, the director of research and development at Intel (keep in mind that Intel manufactured the computer hardware on board Stanley), bluntly proclaimed: “Deep Blue [IBM’s chess machine that defeated world champion Garry Kasparov in 1997] was just processing power. It didn’t think. Stanley thinks.”
Well, with all due respect for the remarkable collective accomplishment that Stanley represents, I can only comment that this remark constitutes shameless, unadulterated, and naïve hype. I see things very differently. If and when Stanley ever acquires the ability to form limitlessly snowballing categories such as those in the list that opened this chapter, then I’ll be happy to say that Stanley thinks. At present, though, its ability to cross a desert without self-destructing strikes me as comparable to an ant’s following a dense pheromone trail across a vacant lot without perishing. Such autonomy on the part of a robot vehicle is hardly to be sneezed at, but it’s a far cry from thinking and a far cry from having an “I”.
At one point, Stanley’s video camera picked up another robot vehicle ahead of it (this was H1, a rival vehicle from Carnegie Mellon University) and eventually Stanley pulled around H1 and left it in its dust. (By the way, I am carefully avoiding the pronoun “he” in this text, although it was par for the course in journalistic references to Stanley, and perhaps at the AI Lab as well, given that the vehicle had been given a human name. Unfortunately, such linguistic sloppiness is the first slide down a slippery slope that soon winds up in full-blown anthropomorphism.) One can see this event taking place on the videotape made by that camera, and it is the climax of the whole story. At this crucial moment, did Stanley recognize the other vehicle as being “like me”? Did Stanley think, as it gaily whipped by H1, “There but for the grace of God go I?” or perhaps “Aha, gotcha!” Come to think of it, why did I write that Stanley “gaily whipped by” H1?
What would it take for a robot vehicle to think such thoughts or have such feelings? Would it suffice for Stanley’s rigidly mounted TV camera to be able to turn around on itself and for Stanley thereby to acquire visual imagery of itself? Of course not. That may be one indispensable move in the long process of acquiring an “I”, but as we know in the case of chickens and cockroaches, perception of a body part does not a self make.
A Counterfactual Stanley
What is lacking in Stanley that would endow it with an “I”, and what does not seem to be part of the research program for developers of self-driving vehicles, is a deep understanding of its place in the world. By this I do not mean, of course, the vehicle’s location on the earth’s surface, which is given to it down to the centimeter by GPS; I mean a rich representation of the vehicle’s own actions and its relations to other vehicles, a rich representation of its goals and its “hopes”. This would require the vehicle to have a full episodic memory of thousands of experiences it had had, as well as an episodic projectory (what it would expect to happen in its “life”, what it would hope for, and what it would fear), as well as an episodic subjunctory, detailing its thoughts about near misses it had had and what would most likely have happened had things gone some other way.
Thus, Stanley the Robot Steamer would have to be able to think to itself such hypothetical future thoughts as, “Gee, I wonder if H1 will deliberately swerve out in front of me and prevent me from passing it, or even knock me off the road into the ditch down there! That’s what I’d do if I were H1!” Then, moments later, it would have to be able to entertain counterfactual thoughts such as, “Whew! Am I ever glad that H1 wasn’t so clever as I feared — or maybe H1 is just not as competitive as I am!”
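To make these three repertoires a bit more concrete, here is a minimal sketch in Python of what an episodic memory, an episodic projectory, and an episodic subjunctory might look like as bare data structures. It is purely hypothetical scaffolding, invented for illustration and in no way drawn from Stanley’s actual software; every class and field name here is an assumption.

# Hypothetical sketch only: three self-representational repertoires as plain
# data structures. None of this reflects Stanley's real software.

from dataclasses import dataclass, field

@dataclass
class Episode:
    description: str   # what happened, e.g. "overtook H1 on the left"
    outcome: str       # how it turned out

@dataclass
class Projection:
    expectation: str   # what the agent expects to happen in its "life"
    hope: str          # the outcome it would prefer
    fear: str          # the outcome it dreads

@dataclass
class Counterfactual:
    actual: str        # what really happened
    alternative: str   # the near miss that could have happened instead
    likely_result: str # what would most likely have followed

@dataclass
class SelfModel:
    memory: list[Episode] = field(default_factory=list)          # the past
    projectory: list[Projection] = field(default_factory=list)   # the future
    subjunctory: list[Counterfactual] = field(default_factory=list)  # the might-have-been

model = SelfModel()
model.memory.append(Episode("overtook H1 in the desert", "succeeded"))
model.projectory.append(Projection("H1 may swerve in front of me",
                                   "I pass cleanly",
                                   "I end up in the ditch"))
model.subjunctory.append(Counterfactual("H1 held its line",
                                        "H1 swerved to block me",
                                        "I would have been forced off the road"))

Of course, nothing about such passive records would by itself constitute an “I”; the sketch serves only to show how thin today’s vehicles’ self-models are next to what the paragraphs above demand.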
An article in Wired magazine described the near-panic in the Stanford development team as the desert challenge was drawing perilously near and they realized something was still very much lacking. It casually stated, “They needed the algorithmic equivalent of self-awareness”, and it then proceeded to say that soon they had indeed achieved this goal (it took them all of three months of work!). Once again, when all due hat-tips have been made toward the team’s great achievement, one still has to realize that there is nothing going on inside Stanley that merits being labeled by the highly loaded, highly anthropomorphic term “self-awareness”.
The feedback loop inside Stanley’s computational machinery is good enough to guide it down a long dusty road punctuated by potholes and lined with scraggly saguaros and tumbleweed plants. I salute it! But if one has set one’s sights not just on driving but on thinking and consciousness, then Stanley’s feedback loop is not strange enough — not anywhere close. Humanity still has a long way to go before it will collectively have wrought an artificial “I”.
CHAPTER 14
Strangeness in the “I” of the Beholder
The Inert Sponges inside our Heads
Why, you might be wondering, do I call the lifelong loop of a human being’s self-representation, as described in the preceding chapter, a strange loop? You make decisions, take actions, affect the world, receive feedback, incorporate it into your self, then the updated “you” makes more decisions, and so forth, round and round. It’s a loop, no doubt — but where’s the paradoxical quality that I’ve been saying is a sine qua non for strange loopiness? Why is this not just an ordinary feedback loop? What does such a loop have in common with the quintessential strange loop that Kurt Gödel discovered unexpectedly lurking inside Principia Mathematica?
For starters, a brain would seem, a priori, just about as unlikely a substrate for self-reference and its rich and counterintuitive consequences as was the extremely austere treatise Principia Mathematica, from which self-reference had been strictly banished. A human brain is just a big spongy bulb of inanimate molecules tightly wedged inside a rock-hard cranium, and there it simply sits, as inert as a lump on a log. Why should self-reference and a self be lurking in such a peculiar medium any more than they lurk in a lump of granite? Where’s the “I”-ness in a brain?
Just as something very strange had to be happening inside the stony fortress of Principia Mathematica to allow the outlawed “I” of Gödelian sentences like “I am not provable” to creep in, something very strange must also take place inside a bony cranium stuffed with inanimate molecules if it is to bring about a soul, a “light on”, a unique human identity, an “I”. And keep in mind that an “I” does not magically pop up in all brains inside all crania, courtesy of “the right stuff” (that is, certain “special” kinds of molecules); it happens only if the proper patterns come to be in that medium. Without such patterns, the system is just as it superficially appears to be: a mere lump of spongy matter, soulless, “I”-less, devoid of any inner light.
Squirting Chemicals
When the first brains came into existence, they were trivial feedback devices, less sophisticated than a toilet’s float-ball mechanism or the thermostat on your wall, and like those devices, they selectively made primitive organisms move towards certain things (food) and away from others (dangers). Evolutionary pressures, however, gradually made brains’ triage of their environments grow more complex and multi-layered, and eventually (here we’re talking hundreds of millions of years), the repertoire of categories being responded to grew so rich that the system, like a TV camera on a sufficiently long leash, was capable of “pointing back”, to some extent, at itself. That first tiny glimmer of self was the germ of consciousness and “I”-ness, but there is still a great mystery.
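For comparison’s sake, the kind of trivial feedback device at issue can be captured in a few lines. The following thermostat-style sketch in Python is a hypothetical illustration, not a model of any actual organism or product:

# A sketch of the sort of trivial feedback loop described above, on the order
# of a thermostat or a toilet's float-ball mechanism: sense a single variable,
# compare it with a fixed setpoint, and respond in one of two canned ways.
# Nothing in this loop "points back" at the system itself.

def thermostat_step(temperature: float, setpoint: float) -> str:
    """Return the one fixed response the device can make to its reading."""
    if temperature < setpoint:
        return "heat on"   # crudely analogous to moving toward food
    return "heat off"      # crudely analogous to moving away from danger

# The device's entire "life" is a bare repetition of that rule:
for reading in [17.0, 18.5, 20.2, 21.0, 19.8]:
    print(reading, "->", thermostat_step(reading, setpoint=20.0))

The gulf between a loop like this one and a loop whose repertoire of categories has grown rich enough to point back at itself is precisely the mystery at hand.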
No matter how complicated and sophisticated brains became, they always remained, at bottom, nothing but a set of cells that “squirted chemicals” back and forth among each other (to borrow a phrase from the pioneering roboticist and provocative writer Hans Moravec), a bit like a huge oil refinery in which liquids are endlessly pumped around from one tank to another. How could a system of pumping liquids ever house a locus of upside-down causality, where meanings seem to matter infinitely more than physical objects and their motions? How could joy, sadness, a love for impressionist painting, and an impish sense of humor inhabit such a cold, inanimate system? One might as well look for an “I” inside a stone fortress, a toilet’s tank, a roll of toilet paper, a television, a thermostat, a heat-seeking missile, a heap of beer cans, or an oil refinery.
Some philosophers see our inner lights, our “I” ’s, our humanity, our souls, as emanating from the nature of the substrate itself — that is, from the organic chemistry of carbon. I find that a most peculiar tree on which to hang the bauble of consciousness. Basically, this is a mystical refrain that explains nothing. Why should the chemistry of carbon have some magical property entirely unlike that of any other substance? And what is that magical property? And how does it make us into conscious beings? Why is it that only brains are conscious, and not kneecaps or kidneys, if all it takes is organic chemistry? Why aren’t our carbon-based cousins the mosquitoes just as conscious as we are? Why aren’t cows just as conscious as we are? Doesn’t organization or pattern play any role here? Surely it does. And if it does, why couldn’t it play the whole role?
By focusing on the medium rather than the message, the pottery rather than the pattern, the typeface rather than the tale, philosophers who claim that something ineffable about carbon’s chemistry is indispensable for consciousness miss the boat. As Daniel Dennett once wittily remarked in a rejoinder to John Searle’s tiresome “right-stuff” refrain, “It ain’t the meat, it’s the motion.” (This was a somewhat subtle hat-tip to the title of a somewhat unsubtle, clearly erotic song written in 1951 by Lois Mann and Henry Glover, made famous many years later by singer Maria Muldaur.) And for my money, the magic that happens in the meat of brains makes sense only if you know how to look at the motions that inhabit them.
The Stately Dance of the Symbols
Brains take on a radically different cast if, instead of focusing on their squirting chemicals, you make a level-shift upwards, leaving that low level far behind. To allow us to speak easily of such upward jumps was the reason I dreamt up the allegory of the careenium, and so let me once again remind you of its key imagery. By zooming out from the level of crazily careening simms and by looking instead at the system on a speeded-up time scale whereby the simms’ locally chaotic churning becomes merely a foggy blur, one starts to see other entities coming into focus, entities that formerly were utterly invisible. And at that level, mirabile dictu, meaning emerges.