Darwin Among the Machines

Author: George B. Dyson

That Turing was thinking seriously about computers during the war is best evidenced by his report, produced in the final three months of 1945 for the National Physical Laboratory (NPL), entitled “Proposal for the Development in the Mathematics Division of an Automatic Computing Engine (ACE).”34 Turing's design was commissioned by J. R. Womersley, superintendent of the Mathematics Division, who had become interested in Turing machines before the war and had even suggested building one before strategic priorities intervened. At the end of the war Womersley had been sent to the United States to survey the latest (and still secret) computer developments, including the Harvard Mark I tape-controlled electronic calculator, which he described in a letter home as “Turing in hardware.”35
Womersley reported to Douglas R. Hartree, who reported to Sir Charles Darwin, Director of NPL and grandson of the Charles Darwin. But Darwin was slow to take an interest in Turing's project, and the lumbering pace of the bureaucracy he commanded had already crippled the proposal by the time he applied his influence in an attempt to gain the project full support. Shuffled among a succession of departments, the original proposal was reconsidered to death. Turing's automatic computing engine, like Babbage's analytical engine, was never built.

Turing's proposal “synthesized the concepts of a stored-program universal computer, a floating-point subroutine library, artificial intelligence, details such as a hardware bootstrap loader, and much else.”36 At a time when no such machines existed and the von Neumann architecture had only just been proposed, Turing produced a complete description of a million-cycle-per-second computer that foreshadowed the RISC (Reduced Instruction Set Computer) architecture that has gained prominence fifty years later. The report was accompanied by circuit diagrams, a detailed physical and logical analysis of the internal storage system, sample programs, detailed (if bug-ridden) subroutines, and even an estimated (if unrealistic) cost of £11,200. As Sara Turing later explained, her son's goal was “to see his logical theory of a universal machine, previously set out in his paper ‘Computable Numbers,' take concrete form.”37

Turing's design relied on mercury-filled acoustic delay lines for high-speed storage, a technique developed for processing radar signals by comparing a series of echoes to distinguish things that had moved, and later applied to an early generation of computers, although “its programming,” as M. H. A. Newman said, “was like catching mice just as they were entering a hole in the wall.”38 A series of electrical pulses, about a microsecond apart, was converted to a train of sound waves circulating in a long tube of mercury equipped with crystal transducers at both ends. About a thousand digits could be stored in the millisecond it took a train of pulses to travel the length of a five-foot “tank.” Viewed as part of a finite-state Turing machine, the delay line represented a continuous loop of tape, a thousand squares in length and making a thousand complete passes per second under the read-write head. Turing specified some two hundred tubes, each storing thirty-two words of 32 bits, for a total, “comparable with the memory capacity of a minnow,” of about 200,000 bits.39
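The storage figures quoted above are internally consistent, as a quick calculation confirms. All the numbers come from the text; only the arithmetic is added here:

```python
# Back-of-the-envelope check of the ACE delay-line figures.
pulse_interval_us = 1      # pulses "about a microsecond apart"
transit_time_us = 1000     # one millisecond to traverse a five-foot tank

digits_per_tank = transit_time_us // pulse_interval_us
print(digits_per_tank)     # 1000 -- "about a thousand digits"

tanks = 200                # "some two hundred tubes"
words_per_tank = 32        # "thirty-two words of 32 bits"
bits_per_word = 32
total_bits = tanks * words_per_tank * bits_per_word
print(total_bits)          # 204800 -- "about 200,000 bits"

# Note that 32 words of 32 bits is 1,024 bits per tank, matching the
# thousand-pulse circulation figure to within rounding.
```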

“The property of being digital,” announced Turing to the London Mathematical Society in a 1947 lecture on his design, “should be of more interest than that of being electronic.”40 Whether memory took the form of paper tape, vacuum-tube flip-flops, mercury pulse trains, or even papyrus scrolls did not matter, as long as discrete symbols could be freely read, written, relocated, and, when so instructed, erased. The concept of random-access memory and the resulting ability to store and manipulate both instructions and data in common is considered to have been the key innovation in the development of electronic digital computers (producing twenty thousand pages of transcripts in the Honeywell-Sperry-Rand patent dispute alone). Both developments were implicit in the concept of a one-tape Turing machine introduced in 1936. It made no difference whether binary digits (instructions, data, or temporary notes) were stored as sound waves in a vibrating column of mercury or as symbols on paper tape. But the five-channel tape readers of Colossus would have had to run at twelve hundred miles per hour to keep up with a single mercury delay-line store.
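The twelve-hundred-mile-per-hour figure can be checked with rough arithmetic. This sketch assumes a standard teleprinter-tape density of about ten character rows per inch, an assumption not stated in the text; the other numbers are from the passage above:

```python
# Rough check of the tape-speed comparison.
bits_per_second = 1_000_000    # delay line: one pulse per microsecond
channels = 5                   # Colossus tape carried five channels in parallel
rows_per_inch = 10             # assumed teleprinter-tape density

rows_per_second = bits_per_second / channels
inches_per_second = rows_per_second / rows_per_inch
mph = inches_per_second * 3600 / 63360   # 63,360 inches per mile

print(round(mph))              # 1136 -- roughly twelve hundred miles per hour
```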

Turing's vision for the ACE became bogged down in an institutional quagmire and never got off the ground. The routine miracles of war, when Cambridge theoreticians were granted unlimited engineering resources and even the post office could be counted on to deliver new hardware overnight, did not survive the peace. Turing's decision that construction should be contracted out, as had been done for the Colossus, was in hindsight a mistake. But hindsight has also shown that his design principles were sound. In May 1950 a partial prototype (the Pilot ACE) was finally built and “proved to be a far more powerful computer than we had expected,” wrote J. H. Wilkinson, even though its mercury delay lines held only three hundred words of 32 bits each. “Oddly enough much of its effectiveness sprang from what appeared to be weaknesses resulting from the economy in equipment that dictated its design.”41

In July 1947, Turing took a leave of absence from NPL, returning to his King's College fellowship for a year. He resigned from NPL in May 1948, accepting an appointment at Manchester University, where M. H. A. Newman was germinating a mathematical computing department with talent from Bletchley Park. Turing, restless as ever, helped get machines and programs up and running at Manchester while his attention wandered to other things. Foremost was his mathematical theory of morphogenesis, which he worked at simulating digitally, writing programs longhand in machine language using his own base-32 notation (the digits reversed to match the patterns of bits as displayed by the Williams-tube store). Another focus was a series of reflections on artificial intelligence, labeled “mechanical intelligence” in language that remains more precise. Here more than ever his iconoclasm found free rein. “An unwillingness to admit the possibility that mankind can have any rivals in intellectual power,” Turing wrote in 1948, “occurs as much amongst intellectual people as amongst others: they have more to lose.”42

Turing's thoughts about hardware and software ranged far ahead of anything in existence at the time. His approach to the question of machine intelligence was as uncluttered as his approach to computable numbers ten years before. He faced the question of incompleteness once again. A brisk trade would soon develop around the rehashing of Gödel's proof of the incompleteness of formal systems, arguing whether this limitation constrained the abilities of computers to duplicate the intelligence and creativity of the human mind. Turing neatly summarized the essence (and weakness) of this convoluted argument in 1947, saying that “in other words then, if a machine is expected to be infallible, it cannot also be intelligent.”43 To Turing this demonstrated not a theoretical obstacle, but simply the need to develop fallible machines able to learn from their own mistakes.

“The argument from Gödel's and other theorems rests essentially on the condition that the machine must not make mistakes,” he explained in a sabbatical report submitted to NPL in 1948. “But this is not a requirement for intelligence.”44 Turing made several concrete proposals. He suggested incorporating a random element to create what he referred to as a “learning machine.” This proposal avoided the problem of having to specify all possible contingencies in advance by granting the computer an ability to take a wild guess and then either reinforce or discard the guess according to the consequent results. Guesses might be extended not only to external questions, but to modifications of the computer's own instructions. A machine could then learn to teach itself. “What we want is a machine that can learn from experience,” wrote Turing. “The possibility of letting the machine alter its own instructions provides the mechanism for this.”45
In 1949, while developing the Manchester Mark I (commissioned by Ferranti Ltd. as the prototype of the first electronic digital computer to be commercially produced), Turing designed a random-number generator that, instead of producing pseudorandom numbers by a numerical process, included a source of truly random electronic noise.
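The reinforce-or-discard loop Turing describes can be sketched in a few lines. This is a minimal modern illustration, not a reconstruction of anything Turing wrote; the target pattern and the bit-flipping move are arbitrary choices for the demonstration:

```python
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]           # the behaviour to be learned

def score(candidate):
    """Count positions at which the candidate already matches the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

state = [0] * len(TARGET)                    # the machine starts knowing nothing
while score(state) < len(TARGET):
    i = random.randrange(len(TARGET))        # the random element: a wild guess
    trial = state.copy()
    trial[i] ^= 1                            # tentatively flip one bit
    if score(trial) > score(state):          # reinforce a successful guess...
        state = trial                        # ...and discard an unsuccessful one

print(state == TARGET)                       # True
```

No contingency is specified in advance: the machine stumbles onto the answer by rewarding guesses that happen to work.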

Carrying these ideas one step further (although pointing out that “paper interference” with a universal machine was equivalent to “screwdriver interference” with actual parts), Turing developed the concept of “unorganized Machines . . . which are largely random in their construction [and] made up from a rather large number N of similar units.”46 He considered a simple model with units capable of two possible states, connected by two inputs and one output each, concluding that “machines of this character can behave in a very complicated manner when the number of units is large.” Turing showed how such unorganized machines (“about the simplest model of a nervous system”) could be made self-modifying and, with proper upbringing, could become more complicated than anything that could be otherwise engineered. The human brain must start out as such an unorganized machine, since only in this way could something so complicated be reproduced.
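The two-state, two-input units described above can be simulated directly. In Turing's 1948 report the simplest ("A-type") units compute the NAND of their two inputs at every synchronous tick; the wiring is random. A minimal sketch, with an arbitrary size and seed:

```python
import random

random.seed(1)
N = 16   # "a rather large number N of similar units" (kept small here)

# Random construction: each unit reads the previous states of two
# randomly chosen units.
inputs = [(random.randrange(N), random.randrange(N)) for _ in range(N)]
state = [random.randint(0, 1) for _ in range(N)]

def step(state):
    """Advance every unit one synchronous tick: NAND of its two inputs."""
    return [1 - (state[a] & state[b]) for a, b in inputs]

# Watch the machine's complicated behaviour unfold for a few ticks.
for t in range(8):
    print("".join(map(str, state)))
    state = step(state)
```

Even at this toy scale the state sequence is hard to predict by inspection, which is Turing's point about complexity emerging from large numbers of simple, randomly wired parts.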

Turing perceived a parallel between intelligence and “the genetical or evolutionary search by which a combination of genes is looked for, the criterion being survival value. The remarkable success of this search confirms to some extent the idea that intellectual activity consists mainly of various kinds of search.”47 He saw evolutionary computation as the best approach to truly intelligent machines. “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?” he asked.48 “Bit by bit one would be able to allow the machine to make more and more ‘choices' or ‘decisions.' One would eventually find it possible to program it so as to make its behaviour the result of a comparatively small number of general principles. When these became sufficiently general, interference would no longer be necessary, and the machine would have ‘grown up.'”49
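The "genetical or evolutionary search" Turing envisions is recognizable in the modern genetic algorithm: a population of gene combinations, a survival criterion, and random variation. The sketch below is a textbook algorithm, not a reconstruction of anything Turing programmed; the genome length, population size, and mutation rate are arbitrary:

```python
import random

random.seed(2)

GENOME_LEN = 20
TARGET_FITNESS = GENOME_LEN                 # survival criterion: all ones

def fitness(genome):
    """Survival value -- here simply the count of ones."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [g ^ (random.random() < rate) for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(30)]
while max(fitness(g) for g in population) < TARGET_FITNESS:
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]             # selection: the fittest third
    # Keep the single best genome unchanged; breed the rest as mutated
    # copies of randomly chosen survivors.
    population = [survivors[0]] + [mutate(random.choice(survivors))
                                   for _ in range(29)]

print(max(fitness(g) for g in population))  # 20: the loop exits only then
```

No individual step is intelligent; the appearance of purpose comes from selection acting on random variation, which is exactly the parallel Turing draws.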

An incremental, trial-and-error path toward artificial intelligence lay ahead. It is a misconception, based on the stereotype of a Turing machine as executing a prearranged program one step at a time, to assume that Turing believed that any single, explicitly programmed serial process would ever capture human intelligence in mechanical form. Turing knew how many interconnected neurons it took to make a brain, and he knew how many brains it took to form a society that could kindle the spark of language and intelligence into flame. He himself had drawn the curtains on Leibniz's illusion of an ideal, completely formalized logical system in 1936. And in 1939 even his own attempt to transcend Gödelian incompleteness, “Systems of Logic Based on Ordinals,” had failed. In this sequel to “On Computable Numbers,” prepared at Princeton as his doctoral thesis under Alonzo Church, Turing explored “how far it is possible to eliminate intuition, and leave only ingenuity,” noting that since ingenuity can always be replaced by patience, “we do not mind how much ingenuity is required, and therefore assume it to be available in unlimited supply.”50

Intelligence would never be clean and perfectly organized, but like the brain would remain slippery and disordered in its details. The secret of large, reliable, and flexible machines, as Turing noted, is to construct them, or let them construct themselves, from large numbers of individual parts—independently free to make mistakes, search randomly, and generally act unpredictably so that at a much higher level of the hierarchy the machine appears to be making an intelligent choice. It is an appealing model—advocated by Oliver Selfridge in his Pandemonium of 1959, I. J. Good in his Speculations Concerning the First Ultraintelligent Machine (1965), and Marvin Minsky in his Society of Mind (1985). A similar principle of distributed intelligence (enforced by need-to-know security rules) led to successful code breaking at Bletchley Park.

The Turing machine, as a universal representation of the relations between patterns in space and sequences in time, has given these intuitive models of intelligence a common language that translates freely between concrete and theoretical domains. Turing's machine has grown progressively more universal for sixty years. From McCulloch and Pitts's demonstration of the equivalence between Turing machines and neural nets in 1943 to John von Neumann's statement that “as far as the machine is concerned, let the whole outside world consist of a long paper tape,”51 the Turing machine has established the measure by which all models of computation have been defined. Only in theories of quantum computation—in which quantum superposition allows multiple states to exist at the same time—have the powers of the discrete-state Turing machine been left behind.

All intelligence is collective. The truth that escaped Leibniz, but captured Turing, is that this intelligence—whether that of a billion neurons, a billion microprocessors, or a billion molecules forming a single cell—arises not from the unfolding of a predetermined master plan, but by the accumulation of random bits of wisdom through the power of small mistakes. The logicians of Bletchley Park breathed the spark of intelligence into the Colossus not by training the machine to recognize the one key that held the answer, but by training it to eliminate the billions of billions of keys that probably wouldn't fit.
