I Can’t Tell You How Indescribably Nondescript It Was!
It happens that there are a few common words and phrases in English that have a similarly flavored self-undermining quality. Take the adjective “nondescript”, for instance. If I say, “Their house is so nondescript”, you will certainly get some sort of visual image from my phrase — even though (or rather, precisely because) my adjective suggests that no description quite fits it. It’s even weirder to say “The truck’s tires were indescribably huge” or “I just can’t tell you how much I appreciate your kindness.” The self-undermining quality is oddly crucial to the communication.
There is also a kind of “junior version” of Berry’s paradox that was invented a few decades after it, and which runs like this. Some integers are interesting. 0 is interesting because 0 times any number gives 0. 1 is interesting because 1 times any number leaves that number unchanged. 2 is interesting because it is the smallest even number, and 3 is interesting because it is the number of sides of the simplest two-dimensional polygon (a triangle). 4 is interesting because it is the first composite number. 5 is interesting because (among many other things) it is the number of regular polyhedra in three dimensions. 6 is interesting because it is three factorial (3×2×1) and also the triangular number of three (3+2+1). I could go on with this enumeration, but you get the point. The question is, when do we run into the first uninteresting number? Perhaps it is 62? Or 1729? Well, no matter what it is, that is certainly an interesting property for a number to have! So 62 (or whatever your candidate number might have been) turns out to be interesting, after all — interesting because it is uninteresting. And thus the idea of “the smallest uninteresting integer” backfires on itself in a manner clearly echoing the backfiring of Berry’s definition of b.
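The backfire becomes vivid if one tries to pin the notion down. Here is a minimal sketch in Python, assuming a small and entirely arbitrary list of “interest” predicates of my own choosing (none of them drawn from Berry or Russell): the moment the list is frozen, the first number that fails every test is singled out precisely by that failure, which is itself a kind of interest the frozen list cannot capture.

```python
# Toy hunt for "the smallest uninteresting integer" under a fixed,
# arbitrary list of interest-conferring properties (my own choices).

def is_square(n):      return int(n**0.5) ** 2 == n
def is_prime(n):       return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))
def is_triangular(n):  return any(k * (k + 1) // 2 == n for k in range(n + 1))
def is_factorial(n):
    f, k = 1, 1
    while f < n:
        k += 1
        f *= k
    return f == n

PREDICATES = [is_square, is_prime, is_triangular, is_factorial]

# The first integer judged "dull" by every predicate in the list...
first_dull = next(n for n in range(2, 1000) if not any(p(n) for p in PREDICATES))
print(first_dull)   # 8, with this particular list -- and thereby "interesting" after all
```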
This is the kind of twisting-back of language that turned Bertrand Russell’s sensitive stomach, as we well know, and yet, to his credit, it was none other than B. Russell who first publicized G. G. Berry’s paradoxical number b.
In his article about it in 1906, Gödel’s birthyear (four syllables!), Russell did his best to deflect the paradox’s sting by claiming that it was an illusion arising from a naïve misuse of the word “describable” in the context of mathematics. That notion, claimed Russell, had to be parceled out into an infinite hierarchy of different types of describability — descriptions at level 0, which could refer only to notions of pure arithmetic; descriptions at level 1, which could use arithmetic but could also refer to descriptions at level 0; descriptions at level 2, which could refer to arithmetic and also to descriptions at levels 0 and 1; and so forth and so on. And so the idea of “describability” without restriction to some specific hierarchical level was a chimera, declared Russell, believing he had discovered a profound new truth. And with this brand-new type of theory (the brand-new theory of types), he claimed to have immunized the precious, delicate world of rigorous reasoning against the ugly, stomach-turning plague of Berry-Berry.
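For concreteness, here is a tiny sketch, purely my own illustration rather than Russell’s actual formalism, of what such a stratified notion of describability might look like: each description carries a level, and a description is admissible only if everything it mentions sits strictly lower in the hierarchy.

```python
# A made-up, minimal model of Russell-style levels of describability.
from dataclasses import dataclass, field

@dataclass
class Description:
    text: str
    level: int
    refers_to: list = field(default_factory=list)  # other Description objects it mentions

def is_well_typed(d: Description) -> bool:
    # Everything referred to must live at a strictly lower level (and itself be well typed).
    return all(r.level < d.level and is_well_typed(r) for r in d.refers_to)

# Level 0: pure arithmetic, no talk of descriptions at all.
seven = Description("the fourth prime number", level=0)

# Level 1: may talk about level-0 descriptions, but never about itself or its peers.
berry_like = Description(
    "the smallest integer not named by any level-0 description of under eleven words",
    level=1,
    refers_to=[seven],
)

print(is_well_typed(berry_like))   # True: it only looks *down* the hierarchy
print(is_well_typed(Description("a description about its own level", level=1,
                                refers_to=[berry_like])))   # False: same-level reference
```

Under such a regime, Berry’s phrase simply cannot be written down at any single level, which is exactly how Russell hoped to dissolve the paradox.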
Blurriness Buries Berry
While I agree with Russell that something fishy is going on in Berry’s paradox, I don’t agree about what it is. The weakness that I focus in on is the fact that English is a hopelessly imprecise medium for expressing mathematical statements; its words and phrases are far too vague. What may seem precise at first turns out to be fraught with ambiguity. For example, the expression “nine cubed plus forty-eight, all times ten cubed plus one”, which earlier I exhibited as a description of 777,777, is in fact ambiguous — it might, for instance, be interpreted as meaning 777 times 1000, with 1 tacked on at the end, resulting in 777,001.
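The two readings are easy to check; the following few lines (nothing beyond the arithmetic of the phrase itself, written out in Python) make the ambiguity concrete.

```python
# "nine cubed plus forty-eight, all times ten cubed plus one" -- two parses:

reading_1 = (9**3 + 48) * (10**3 + 1)   # "...all times (ten cubed plus one)"
reading_2 = (9**3 + 48) * 10**3 + 1     # "...all times ten cubed, plus one"

print(reading_1)   # 777777
print(reading_2)   # 777001
```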
But that little ambiguity is just the tip of the iceberg. The truth of the matter is that it is far from clear what kinds of English expressions count as descriptions of a number. Consider the following phrases, which purport to be descriptions of specific integers:
• the number of distinct languages ever spoken on earth
• the number of heavenly bodies in the Solar System
• the number of distinct four-by-four magic squares
• the number of interesting integers less than 100
What is wrong with them? Well, they all involve ill-defined notions.
What, for instance, is meant by a “language”? Is sign language a language? Is it “spoken”? Is there a sharp cutoff between languages and dialects? How many “distinct languages” lay along the pathway from Latin to Italian? How many “distinct languages” were spoken en route from Neanderthal days to Latin? Is Church Latin a language? And Pig Latin? Even if we had videotapes of every last human utterance on earth for the past million years, the idea of objectively assigning each one to some particular “official” language, then cleanly teasing apart all the “truly distinct” languages, and finally counting them would still be a nonsensical pipe dream. It’s already meaningless enough to talk about counting all the “items” in a garbage can, let alone all the languages of all time!
Moving on, what counts as a “heavenly body”? Do we count artificial satellites? And random pieces of flotsam and jetsam left floating out there by astronauts? Do we count every single asteroid? Every single distinct stone floating in Saturn’s rings? What about specks of dust? What about isolated atoms floating in the void? Where does the Solar System stop? And so on, ad infinitum.
You might object, “But those aren’t mathematical notions! Berry’s idea was to use mathematical definitions of integers.” All right, but then show me a sharp cutoff line between mathematics and the rest of the world. Berry’s definition uses the vague notion of “syllable counting”, for instance. How many syllables are there in “finally” or “family” or “rhythm” or “lyre” or “hour” or “owl”? But no matter; suppose we had established a rigorous and objective way of counting syllables. Still, what would count as a “mathematical concept”? Is the discipline of mathematics really that sharply defined? For instance, what is the precise definition of the notion “magic square”? Different authors define this notion differently. Do we have to take a poll of the mathematical community? And if so, who then counts as a member of that blurry community?
What about the blurry notion of “interesting numbers”? Could we give some kind of mathematical precision to that? As you saw above, reasons for calling a number “interesting” could involve geometry and other areas of mathematics — but once again, where do the borders of mathematics lie? Is game theory part of mathematics? What about medical statistics? What about the theory of twisting tendrils of plants? And on and on.
To sum up, the notion of an “English-language definition of an integer” turns out to be a hopeless morass, and so Berry’s twisty notion of b, no less than Escher’s twisty notion of two mutually drawing hands, is an ingenious figment of the imagination rather than a genuine strange loop. There goes a promising candidate for strange loopiness down the drain!
Although in this brief digression I’ve made it sound as if the idea Berry had in 1904 was naïve, I must point out that some six decades later, the young mathematician Greg Chaitin, inspired by Berry’s idea, dreamt up a more precise cousin using computer programs instead of English-language descriptions, and this clever shift turned out to yield a radically new proof of, and perspective on, Gödel’s 1931 theorem. From there, Chaitin and others went on to develop an important new branch of mathematics known as “algorithmic information theory”. To go into that would carry us far afield, but I hope to have conveyed a sense for the richness of Berry’s insight, for this was the breeding ground for Gödel’s revolutionary ideas.
A Peanut-butter and Barberry Sandwich
Bertrand Russell’s attempt to bar Berry’s paradoxical construction by instituting a formalism that banned all self-referring linguistic expressions and self-containing sets was not only too hasty but quite off base. How so? Well, a friend of mine recently told me of a Russell-like ban instituted by a friend of hers, a young and idealistic mother. This woman, in a well-meaning gesture, had strictly banned all toy guns from her household. The ban worked for a while, until one day when she fixed her kindergarten-age son a peanut-butter sandwich. The lad quickly chewed it into the shape of a pistol, then lifted it up, pointed it at her, and shouted, “Bang bang! You’re dead, Mommy!” This ironic anecdote illustrates an important lesson: the medium that remains after all your rigid bans may well turn out to be flexible enough to fashion precisely the items you’ve banned.
And indeed, Russell’s dismissal of Berry had little effect, for more and more paradoxes were being invented (or unearthed) in those intellectually tumultuous days at the turn of the twentieth century. It was in the air that truly peculiar things could happen when modern cousins of various ancient paradoxes cropped up inside the rigorously logical world of numbers, a world in which nothing of the sort had ever been seen before, a pristine paradise in which no one had dreamt paradox might arise.
Although these new kinds of paradoxes felt like attacks on the beautiful, sacred world of reasoning and numbers (or rather, because of that worrisome fact), quite a few mathematicians boldly embarked upon a quest to come up with ever deeper and more troubling paradoxes — that is, a quest for ever more powerful threats to the foundations of their own discipline! This sounds like a perverse thing to do, but they believed that in the long run such a quest would be very healthy for mathematics, because it would reveal key weak spots, showing where shaky foundations had to be shored up so as to become unassailable. In short, plunging deeply into the new wave of paradoxes seemed to be a useful if not indispensable activity for anyone working on the foundations of mathematics, for the new paradoxes were opening up profound questions concerning the nature of reasoning — and thus concerning the elusive nature of thinking — and thus concerning the mysterious nature of the human mind itself.
An Autobiographical Snippet
As I mentioned in Chapter 4, at age fourteen I ran across Ernest Nagel and James R. Newman’s little gem, Gödel’s Proof, and through it I fell under the spell of the paradox-skirting ideas on which Gödel’s work was centered. One of the stranger loops connected with that period in my life was that I became acquainted with the Nagel family at just that time. Their home was in Manhattan, but they were spending the academic year 1959–60 “out west” at Stanford, and since Ernest Nagel and my father were old friends, I soon got to know the whole family. Shortly after the Nagels’ Stanford year was over, I savored the twisty pleasure of reading aloud the whole of Gödel’s Proof to my friend Sandy, their older son, in the verdant yard of their summer home in the gentle hills near Brattleboro, Vermont. Sandy was just my age, and we were both exploring mathematics with a kind of wild intoxication that only teen-agers know.
Part of what pulled me so intensely was the weird loopiness at the core of Gödel’s work. But the other half of my intense curiosity was my sense that what was really being explored by Gödel, as well as by many people he had inspired, was the mystery of the human mind and the mechanisms of human thinking. So many questions seemed to have been suddenly and sharply brought to light by Gödel’s 1931 article — questions such as…
What happens inside mathematicians’ heads when they do their most creative work? Is it always just rule-bound symbol manipulation, deriving theorems from a fixed set of axioms? What is the nature of human thought in general? Is what goes on inside our heads just a deterministic physical process? If so, are we all, no matter how idiosyncratic and sparkly, nothing but slaves to rigid laws governing the invisible particles out of which our brains are built? Could creativity ever emerge from a set of rigid rules governing minuscule objects or patterns of numbers? Could a rule-governed machine be as creative as a human? Could a programmed machine come up with ideas not programmed into it in advance? Could a machine make its own decisions? Have its own opinions? Be confused? Know it was confused? Be unsure whether it was confused? Believe it had free will? Believe it didn’t have free will? Be conscious? Doubt it was conscious? Have a self, a soul, an “I”? Believe that its fervent belief in its “I” was only an illusion, but an unavoidable illusion?
Idealistic Dreams about Metamathematics
Back in those heady days of my youth, every time I entered a university bookstore (and that was as often as possible), I would instantly swoop down on the mathematics section and scour all the books that had to do with symbolic logic and the nature of symbols and meaning. Thus I bought book after book on these topics, such as Rudolf Carnap’s famous but forbidding The Logical Syntax of Language and Richard Martin’s Truth and Denotation, not to mention countless texts of symbolic logic. Whereas I very carefully read a few such textbooks, the tomes by Carnap and Martin just sat there on my shelf, taunting and teasing me, always seeming just out of reach. They were dense, almost impenetrably so — but I kept on thinking that if only someday, some grand day, I could finally read them and fully fathom them, then at last I would have penetrated to the core of the mysteries of thinking, meaning, creativity, and consciousness. As I look back now, that sounds ridiculously naïve (firstly to imagine this to be an attainable goal, and secondly to believe that those books in particular contained all the secrets), but at the time I was a true believer!
When I was sixteen, I had the unusual experience of teaching symbolic logic at Stanford Elementary School (my own elementary-school alma mater), using a brand-new text by the philosopher and educator Patrick Suppes, who happened to live down the street from our family, and whose classic Introduction to Logic had been one of my most reliable guides. Suppes was conducting an experiment to see if patterns of strict logical inference could be inculcated in children in the same way as arithmetic could, and the school’s principal, who knew me well from my years there, one day bumped into me in the school’s rotunda, and asked me if I would like to teach the sixth-grade class (which included my sister Laura) symbolic logic three times a week for a whole year. I fairly jumped at the chance, and all year long I thoroughly enjoyed it, even if a few of the kids now and then gave me a hard time (rubber bands in the eye, etc.). I taught my class the use of many rules of inference, including the mellifluous modus tollendo tollens and the impressive-sounding “hypothetical syllogism”, and all the while I was honing my skills not only as a novice logician but also as a teacher.
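For readers who have not met these two rules, a brute-force truth-table check (my own illustration, not Suppes’s presentation) confirms both: modus tollendo tollens licenses inferring not-P from “if P then Q” together with not-Q, while the hypothetical syllogism chains “if P then Q” and “if Q then R” into “if P then R”.

```python
# Exhaustive truth-table verification of the two inference rules named above.
from itertools import product

def implies(a, b):
    return (not a) or b   # material conditional

# Modus tollendo tollens: from (P -> Q) and not-Q, infer not-P.
mtt_valid = all(
    implies(implies(p, q) and (not q), not p)
    for p, q in product([True, False], repeat=2)
)

# Hypothetical syllogism: from (P -> Q) and (Q -> R), infer (P -> R).
hs_valid = all(
    implies(implies(p, q) and implies(q, r), implies(p, r))
    for p, q, r in product([True, False], repeat=3)
)

print(mtt_valid, hs_valid)   # True True
```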