Take cellular telephones, for instance. Nowadays, in order to be competitive, cell phones are marketed not so much (maybe even very little) on the basis of their original purpose as communication devices, but instead for the number of tunes they can hold, the number of games you can play on them, the quality of the photos they can take, and who knows what else! Cell phones once were, but no longer are, dedicated machines. And why is that? It is because their inner circuitry has surpassed a certain threshold of complexity, and that fact allows them to have a chameleon-like nature. You can use the hardware inside a cell phone to house a word processor, a Web browser, a gaggle of video games, and on and on. This, in essence, is what the computer revolution is all about: when a certain well-defined threshold — I’ll call it the “Gödel–Turing threshold” — is surpassed, then a computer can emulate any kind of machine.
This is the meaning of the term “universal machine”, introduced in 1936 by the English mathematician and computer pioneer Alan Turing, and today we are intimately familiar with the basic idea, although most people don’t know the technical term or concept. We routinely download virtual machines from the Web that can convert our universal laptops into temporarily specialized devices for watching movies, listening to music, playing games, making cheap international phone calls, who knows what. Machines of all sorts come to us through wires or even through the air, via software, via patterns, and they swarm into and inhabit our computational hardware. One single universal machine morphs into new functionalities at the drop of a hat, or, more precisely, at the double-click of a mouse. I bounce back and forth between my email program, my word processor, my Web browser, my photo displayer, and a dozen other “applications” that all live inside my computer. At any specific moment, most of these independent, dedicated machines are dormant, sleeping, waiting patiently (actually, unconsciously) to be awakened by my royal double-click and to jump obediently to life and do my bidding.
Inspired by Gödel’s mapping of PM into itself, Alan Turing realized that the critical threshold for this kind of computational universality comes at exactly that point where a machine is flexible enough to read and correctly interpret a set of data that describe its own structure. At this crucial juncture, a machine can, in principle, explicitly watch how it does any particular task, step by step. Turing realized that a machine that has this critical level of flexibility can imitate any other machine, no matter how complex the latter is. In other words, there is nothing more flexible than a universal machine. Universality is as far as you can go!
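Turing’s criterion can be made concrete with a small sketch. What follows is a minimal illustration in Python (the function name, the rule-table format, and the sample “bit-flipping” machine are all my own inventions, not anything from Turing): one fixed program that is handed the rule table of some other machine as ordinary data, and thereafter behaves exactly as that machine would. Swap in a different table and the very same program becomes a different machine.

```python
# A minimal sketch of a universal machine: one fixed program that
# reads a *description* of any machine as data and then imitates it.
# The rule-table format and the sample machine are illustrative only.

def run_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Simulate any machine given only its rule table as data.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (step left) or +1 (step right).
    """
    tape = dict(enumerate(tape))          # sparse, two-way-extensible tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")      # "_" stands for a blank cell
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape))

# A description of one particular machine: flip every bit, then halt.
flipper = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", -1, "halt"),
}

print(run_machine(flipper, "0011"))       # -> "1100_"
```

Nothing in run_machine itself changes from one imitation to the next; the “machine” being imitated lives entirely in the data it reads.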
This is why my Macintosh can, if I happen to have fed it the proper software, act indistinguishably from my son’s more expensive and faster “Alienware” computer (running any specific program), and vice versa. The only difference is one of speed, because my Mac will always remain, deep in its guts, a Mac. It will therefore have to imitate the fast, alien hardware by constantly consulting tables of data that explicitly describe the hardware of the Alien, and doing all those lookups is very slow. This is like me trying to get you to sign my signature by writing out a long set of instructions telling you how to draw every tiny curve. In principle it’s possible, but it would be hugely slower than just signing with my own handware!
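The speed penalty of such imitation can also be sketched, again only as a toy (the three-instruction “alien” machine below is invented for illustration, not a model of any real hardware): where native hardware adds two numbers in a single step, an emulator must fetch each foreign instruction, look it up in a table describing the alien design, and only then act.

```python
# Toy emulator for an invented three-instruction "alien" machine.
# Every step goes through a table lookup describing the foreign design,
# which is exactly the overhead that native execution avoids.

def op_load(regs, arg):  regs["acc"] = arg
def op_add(regs, arg):   regs["acc"] = regs["acc"] + arg
def op_store(regs, arg): regs[arg] = regs["acc"]

ALIEN_SPEC = {"LOAD": op_load, "ADD": op_add, "STORE": op_store}

def emulate(program):
    regs = {"acc": 0}
    for opcode, arg in program:        # fetch each foreign instruction...
        ALIEN_SPEC[opcode](regs, arg)  # ...look it up in the table, then act
    return regs

# The foreign program "r1 = 2 + 3", run step by step through the table:
print(emulate([("LOAD", 2), ("ADD", 3), ("STORE", "r1")])["r1"])   # -> 5
# Natively, the same work would be a single hardware addition: 2 + 3.
```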
The Unexpectedness of Universality
There is a tight analogy linking universal machines of this sort with the universality I earlier spoke of (though I didn’t use that word) when I described the power of Principia Mathematica. What Bertrand Russell and Alfred North Whitehead did not suspect, but what Kurt Gödel realized, is that, simply by virtue of representing certain fundamental features of the positive integers (such basic facts as commutativity, distributivity, and the law of mathematical induction), they had unwittingly made their formal system PM surpass a key threshold that made it “universal”, which is to say, capable of defining number-theoretical functions that imitate arbitrarily complex other patterns (or indeed, even capable of turning around and imitating itself — giving rise to Gödel’s black-belt maneuver).
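Gödel’s mapping itself can be suggested with a toy encoding (my own, not his; Gödel actually used products of prime powers, but any reversible scheme makes the point): every formula, being a string of symbols, can be packed into a single integer, so that statements about formulas become statements about numbers.

```python
# A toy stand-in for Gödel numbering: pack any formula, viewed as a
# string of symbols, into one integer (base-256 digits), reversibly.
def godel_number(formula: str) -> int:
    n = 0
    for ch in formula:
        n = n * 256 + ord(ch)   # each symbol becomes one base-256 digit
    return n

def decode(n: int) -> str:
    chars = []
    while n:
        n, digit = divmod(n, 256)
        chars.append(chr(digit))
    return "".join(reversed(chars))

g = godel_number("0=0")
print(g, decode(g))             # -> 3161392 0=0
```

Once a formula is a number, arithmetic about that number is, under the mapping, talk about the formula, and that is the doorway to a formula that talks about itself.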
Russell and Whitehead did not realize what they had wrought because it didn’t occur to them to use PM to “simulate” anything else. That idea was not on their radar screen (for that matter, radar itself wasn’t on anybody’s radar screen back then). Prime numbers, squares, sums of two squares, sums of two primes, Fibonacci numbers, and so forth were seen merely as beautiful mathematical patterns — and patterns consisting of numbers, though fabulously intricate and endlessly fascinating, were not thought of as being isomorphic to anything else, let alone as being stand-ins for, and thus standing for, anything else. After Gödel and Turing, though, such naïveté went down the drain in a flash.
By and large, the engineers who designed the earliest electronic computers were as unaware as Russell and Whitehead had been of the richness that they were unwittingly bringing into being. They thought they were building machines of very limited, and purely military, scope — for instance, machines to calculate the trajectories of ballistic missiles, taking wind and air resistance into account, or machines to break very specific types of enemy codes. They envisioned their computers as being specialized, single-purpose machines — a little like wind-up music boxes that could play just one tune each.
But at some point, when Alan Turing’s abstract theory of computation, based in large part on Gödel’s 1931 paper, collided with the concrete engineering realities, some of the more perceptive people (Turing himself and John von Neumann especially) put two and two together and realized that their machines, incorporating the richness of integer arithmetic that Gödel had shown was so potent, were thereby universal. All at once, these machines were like music boxes that could read arbitrary paper scrolls with holes in them, and thus could play any tune. From then on, it was simply a matter of time until cell phones started being able to don many personas other than just the plain old cell-phone persona. All they had to do was surpass that threshold of complexity and memory size that limited them to a single “tune”, and then they could become anything.
The early computer engineers thought of their computers as number-crunching devices and did not see numbers as a universal medium. Today we (and by “we” I mean our culture as a whole, rather than specialists) do not see numbers that way either, but our lack of understanding is for an entirely different reason — in fact, for exactly the opposite reason. Today it is because all those numbers are so neatly hidden behind the screens of our laptops and desktops that we utterly forget they are there. We watch virtual football games unfolding on our screen between “dream teams” that exist only inside the central processing unit (which is carrying out arithmetical instructions, just as it was designed to do). Children build virtual towns inhabited by little people who virtually ride by on virtual bicycles, with leaves that virtually fall from trees and smoke that virtually dissipates into the virtual air. Cosmologists create virtual galaxies, let them loose, and watch what happens as they virtually collide. Biologists create virtual proteins and watch them fold up according to the complex virtual chemistry of their constituent virtual submolecules.
I could list hundreds of things that take place on computer screens, but few people ever think about the fact that all of this is happening courtesy of addition and multiplication of integers way down at the hardware level. But that is exactly what’s happening. We don’t call computers “computers” for nothing, after all! They are, in fact, computing sums and products of integers expressed in binary notation. And in that sense, Gödel’s world-dazzling, Russell-crushing, Hilbert-toppling vision of 1931 has become such a commonplace in our downloading, upgrading, gigabyte culture that although we are all swimming in it all the time, hardly anyone is in the least aware of it. Just about the only trace of the original insight that remains visible, or rather, “audible”, around us is the very word “computer”. That term tips you off, if you bother to think about it, to the fact that underneath all the colorful pictures, seductive games, and lightning-fast Web searches, there is nothing going on but integer arithmetic. What a hilarious joke!
Actually, it’s more ambiguous than that, and for the same reasons I elaborated in Chapter 11. Wherever there is a pattern, it can be seen either as itself or as standing for anything to which it is isomorphic. Words that apply to Pomponnette’s straying also apply, as it happens, to Aurélie’s straying, and neither interpretation is truer than the other, even if one of them was the originally intended one. Likewise, an operation on an integer that is written out in binary notation (for instance, the conversion of “0000000011001111” into “1100111100000000”) that one person might describe as multiplication by 256 might be described by a second observer as a left-shift by eight bits, by a third as the transfer of a color from one pixel to its neighbor, and by someone else as the deletion of an alphanumeric character in a file. As long as each one is a correct description of what’s happening, none of them is privileged. The reason we call computers “computers”, then, is historical. They originated as integer-calculation machines, and they are still of course validly describable as such — but we now realize, as Kurt Gödel first did back in 1931, that such devices can be equally validly perceived and talked about in terms that are fantastically different from what their originators intended.
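The arithmetic in the parenthetical example is easy to verify; here is a short check in Python using the very bit pattern quoted above:

```python
x = 0b0000000011001111          # the sixteen-bit pattern from the text (decimal 207)

assert x * 256 == x << 8        # "multiply by 256" and "left-shift by 8" coincide
print(format(x << 8, "016b"))   # -> 1100111100000000
```

The pixel-transfer and character-deletion readings are not checkable in two lines, of course; they depend on what the surrounding system takes the bits to stand for, which is exactly the point.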
Universal Beings
We human beings, too, are universal machines of a different sort: our neural hardware can copy arbitrary patterns, even if evolution never had any grand plan for this kind of “representational universality” to come about. Through our senses and then our symbols, we can internalize external phenomena of many sorts. For example, as we watch ripples spreading on a pond, our symbols echo their circular shapes, abstract them, and can replay the essence of those shapes much later. I say “the essence” because some — in fact most — detail is lost; as you know very well, we retain not all levels of what we encounter but only those that our hardware, through the pressures of natural selection, came to consider the most important. I also have to make clear (although I hope no reader would fall into such a trap) that when I say that our symbols “internalize” or “copy” external patterns, I don’t mean that when we watch ripples on a pond, or when we “replay” a memory of such a scene (or of many such scenes blurred together), there literally are circular patterns spreading out on some horizontal surface inside our brains. I mean that a host of structures are jointly activated that are connected with the concepts of water, wetness, ponds, horizontal surfaces, circularity, expansion, things bobbing up and down, and so forth. I am not talking about a movie screen inside the head!
Representational universality also means that we can import ideas and happenings without having to be direct witnesses to them. For example, as I mentioned in Chapter 11, humans (but not most other animals) can easily process the two-dimensional arrays of pixels on a television screen and can see those ever-changing arrays as coding for distant or fictitious three-dimensional situations evolving over time.
On a skiing vacation in the Sierra Nevada, far away from home, my children and I took advantage of the “doggie cam” at the Bloomington kennel where we had boarded our golden retriever Ollie. Thanks to the World Wide Web, we were treated to a jerky sequence of stills of a couple of dozen dogs meandering haphazardly in a fenced-in outdoor play area, looking a bit like particles undergoing Brownian motion, and although each pooch was rendered by a pretty small array of pixels, we could often recognize our Ollie by subtle features such as the angle of his tail. For some reason, the kids and I found this act of visual eavesdropping on Ollie quite hilarious. We could easily describe the droll scene to our human friends, and I would bet a considerable sum that these few lines of text have vividly evoked in your mind both the canine scene at the kennel and the human scene at the ski resort. Yet we all realized that there was not a hope in hell that we could ever explain to Ollie himself that we had been “spying” on him from thousands of miles away. Ollie would never know, and could never know.
Why not? Because Ollie is a dog, and dogs’ brains are not universal. They cannot absorb ideas like “jerky still photo”, “24-hour webcam”, “spying on dogs playing in the kennel”, or even, for that matter, “2,000 miles away”. This is a huge and fundamental breach between humans and dogs — indeed, between humans and all other species. It is this that sets us apart, makes us unique, and, in the end, gives us what we call “souls”.
In the world of living things, the magic threshold of representational universality is crossed whenever a system’s repertoire of symbols becomes extensible without any obvious limit. This threshold was crossed on the species level somewhere along the way from earlier primates to ourselves. Systems above this counterpart to the Gödel–Turing threshold — let’s call them “beings”, for short — have the capacity to model inside themselves other beings that they run into — to slap together quick-and-dirty models of beings that they encounter only briefly, to refine such coarse models over time, even to invent imaginary beings from whole cloth. (Beings with a propensity to invent other beings are often informally called “novelists”.)
Once beyond the magic threshold, universal beings seem inevitably to become ravenously thirsty for tastes of the interiority of other universal beings. This is why we have movies, soap operas, television news, blogs, webcams, gossip columnists, People magazine, and The Weekly World News, among others. People yearn to get inside other people’s heads, to “see out” from inside other crania, to gobble up other people’s experiences.