Darwin Among the Machines

George B. Dyson

According to Ashby, high-speed digital computers offered a bridge between laws and life. “Until recently we have had no experience of systems of medium complexity; either they have been like the watch and the pendulum, and we have found their properties few and trivial, or they have been like the dog and the human being, and we have found their properties so rich and remarkable that we have thought them supernatural. Only in the last few years has the general-purpose computer given us a system rich enough to be interesting yet still simple enough to be understandable . . . it enables us to bridge the enormous conceptual gap from the simple and understandable to the complex.” To understand something as complicated as life or intelligence, advised Ashby, we need to retrace its steps. “We can gain a considerable insight into the so-called spontaneous generation of life by just seeing how a somewhat simpler version will appear in a computer,” he noted in 1961.
12

The genesis of life or intelligence within or among computers goes approximately as follows: (1) make things complicated enough, and (2) either wait for something to happen by accident or make something happen by design. The best approach may combine elements of both. “My own guess is that, ultimately, efficient machines having artificial intelligence will consist of a symbiosis of a general-purpose computer together with locally random or partially random networks,” concluded Irving J. Good in 1958. “The parts of thinking that we have analyzed completely could be done on the computer. The division would correspond roughly to the division between the conscious and unconscious minds.”
13
A random network need not be implemented by a random configuration of neurons, wires, or switches; it can be represented by logical relationships evolved in an ordered matrix of two-state devices if the number of them is large enough. This possibility was inherent in John von Neumann's original conception of the digital computer as an association of discrete logical elements, a population that just so happened to be organized by its central control organ for the performance of arithmetical operations but that could in principle be organized differently, or even be allowed to organize itself. Success at performing arithmetic, however, soon preempted everything else.
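The idea of a random network living inside an ordered matrix of two-state devices can be sketched as a small random Boolean network: the bits are arranged in a perfectly regular array, and only the logical relationships among them are random. This is a minimal illustrative sketch, not anything described in the text; the network size, wiring, and rules are all assumptions chosen for brevity.

```python
import random

random.seed(42)

N = 16  # two-state devices in an "ordered matrix" of bits
K = 2   # inputs wired to each element

# Each element gets K randomly chosen inputs and a random Boolean rule;
# the randomness lives in the logical relationships, not the hardware.
inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
rules = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Update every element from the previous states of its inputs."""
    return tuple(
        rules[i][sum(state[inputs[i][j]] << j for j in range(K))]
        for i in range(N)
    )

# Iterate from a random start until the deterministic network
# falls into a repeating cycle of states (it must, being finite).
state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
t = 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1
print("settled into a cycle of length", t - seen[state])
```

Even this toy version shows the point Ashby and Good were making: the behavior is a property of the evolved relationships, recoverable only by running the network.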

An early attempt at invoking large-scale self-organizing processes within a single computer was a project aptly christened Leviathan, developed in the late 1950s at the System Development Corporation, a spin-off division of RAND. Leviathan proposed to capture a behavioral model of a semiautomatic air-defense system that had grown too complicated for any predetermined model to comprehend. Leviathan (the single-computer model) and SAGE (its multiple-computer subject) jointly represented the transition from computer systems designed and organized by engineers to computer systems that were beginning to organize themselves.

The System Development Corporation (SDC) originated in the early 1950s with a series of RAND Corporation studies for the U.S. Air Force on the behavior of complex human-machine systems under stress. In 1951, behind a billiard parlor at Fourth and Broadway in downtown Santa Monica, California, RAND constructed a replica of the Tacoma Air Defense Direction Center, where the behavior of real humans and real machines was studied under simulated enemy attack. The first series of fifty-four experiments took place between February and May 1952, using twenty-eight human subjects provided with eight simulated radar screens under punched-card control. In studying the behavior of their subjects—students hired by the hour from UCLA—it was discovered that participation in the simulations so improved performance that the air force asked RAND to train real air-defense crews instead. “The organization learned its way right out of the experiment,” the investigators reported in a summary of the tests. “Within a couple days the college students were maintaining highly effective defense of their area while playing word games and doing homework on the side.”
14
The study led to the establishment of a permanent System Research Laboratory within RAND's System Development Division and to a training system duplicated at 150 operational air-defense sites.

RAND's copy of the IAS computer became operational in 1952, followed by delivery of an IBM 701, the first system off the assembly line, in August 1953. The computer systems used to stage RAND's simulations soon became more advanced than the control systems used in actual air defense. “We found that to study an organization in the laboratory we, as experimenters, had to become one,” wrote Allen Newell, who went on to become one of the leaders of artificial intelligence research.
15
Repeating the process by which human intelligence may have first evolved, an observational model developed into a system of control. RAND's contracts were extended to include designing as well as simulating the complex information-processing systems needed for air defense. “The simplest way of summarizing the incidents, impressions, and data of the air-defense experiments,” reported Newell, “is to say that the four organizations behaved like organisms.”
16
RAND's studies were among the first to examine how large information-processing systems not only facilitate the use of computers by human beings but facilitate the use of human beings by machines. As John von Neumann pointed out, “the best we can do is to divide all processes into those things which can be better done by machines and those which can be better done by humans and then invent methods by which to pursue the two.”
17

By the time it became an independent, nonprofit corporation at the end of 1956, the System Development Division employed one thousand people and had grown to twice the size of the rest of RAND. When the air force contracted jointly with the Lincoln Laboratory at MIT and the RAND Corporation to develop the continental air-defense system known as SAGE (Semi-Automatic Ground Environment), the job of programming the system was delegated to SDC. Bell Telephone Laboratories and IBM were offered the contract but both declined. “We couldn't imagine where we could absorb 2,000 programmers at IBM when this job would be over someday,” said Robert Crago, “which shows how well we were understanding the future at that time.”
18

SAGE integrated hundreds of channels of information related to air defense, coordinating the tracking and interception of military targets as well as peripheral details, such as some thirty thousand scheduled airline flight paths augmented by all the unscheduled flight plans on file at any given time. Each of some two dozen air-defense sector command centers, housed in windowless buildings protected by six feet of blast-resistant concrete, was based around an AN/FSQ-7 computer (Army-Navy Fixed Special eQuipment) built by IBM. Two identical processors shared 58,000 vacuum tubes, 170,000 diodes, and 3,000 miles of wiring as one ran the active system and the other served as a “warm” backup, running diagnostic routines while standing by to be switched over to full control at any time. These systems weighed more than 250 tons. The computer occupied 20,000 square feet of floor space; input and output equipment consumed another 22,000 square feet. A 3,000-kilowatt power supply and 500 tons of air-conditioning equipment kept the laws of thermodynamics at bay. One hundred air force officers and personnel were on duty at each command center; the system was semiautomatic in that SAGE supplied predigested intelligence to its human operators, who then made the final decisions as to how the available defenses should respond.

The use of one computer by up to one hundred simultaneous operators ushered in the era of time-share computing and opened the door to the age of data networking that followed. The switch from batch processing, in which you submit a stack of cards and come back hours or days later for the results, to real-time computing was sparked by SAGE's demand for instantaneous results. The SAGE computers, descended from the Whirlwind prototype constructed at MIT, also led the switch to magnetic-core memory, storing 8,192 33-bit words, increased to 69,632 words in 1957 as the software grew more complex. The memory units were glass-faced obelisks housing a stack of thirty-six ferrite-core memory planes. It took forty hours of painstaking needlework to thread a single plane; each of its 4,096 ferrite beads was interlaced by fine wires in four directions, the intersections weaving a tapestry of cross-referenced electromagnetic bits. The read-write cycle was six microseconds, shuttling data back and forth nearly 200,000 times from one second to the next. High-speed magnetic drums and 728 individual tape drives supplied peripheral programs and data, and traffic with the network of radar-tracking stations pioneered high-speed (1,300 bits per second) data transmission over the voice telephone system using lines leased from AT&T.

The prototype Cape Cod station was operational in 1953; twenty-three sectors were deployed by 1958; the last six SAGE sector control centers were shut down in January 1984, having outlived all other computers of their time. SAGE was designed to defend against land-based bombers; the age of ballistic missiles left its command centers vulnerable to attack. As the prototype of a real-time global information-processing system, however, SAGE left instructions that are still going strong.

The SAGE operating system incorporated one million lines of code, by far the largest software project of its time. Each control sector was configured differently, yet all sectors had to interact smoothly under stress. To this day no one knows how the system would have behaved in response to a real attack. Even the principal architects of the operating system spoke of it as having been evolved rather than designed. When human beings were added, the behavior of the system became even less predictable, so it was tested regularly with simulated intrusions and mock attacks. The dual-processor configuration allowed these exercises to be conducted using one-half of the system while the other half remained on-line, like the right and left hemispheres of a brain. Exhibiting a quality that some theorists suggest distinguishes organisms from machines, the SAGE system was so complicated that there appeared to be no way to model its behavior more concisely than by putting the system through its paces and observing the results.

Despite the experience with SAGE, the Leviathan project was launched, in the RAND tradition of giving every hypothesis a chance. Leviathan was an attempt to let a model design itself. “Leviathan is the name we give to a highly adaptable model of large behavioral systems, designed to be placed in operation on a digital computer,” wrote Beatrice and Sydney Rome in a report first published on 29 January 1959.
19
The air force's problem was that its systems were structured and analyzed hierarchically, but when they operated under pressure, unforeseen relationships caused things to happen between different levels, with unanticipated, and perhaps catastrophic, results. “The problem of system levels . . . pervades the investigation of any subject matter that incorporates symbols,” wrote the Romes, philosophers by profession and biographers of the seventeenth-century philosopher Nicolas Malebranche. “An example of the latter is any work of art, but the example we shall offer here is drawn from simulating air defense.”
20
An oblique reference to “other kinds of systems of command and authority that produce a product or that render a constructive or destructive service” was as close as the Romes came to acknowledging the policy of assured retaliation underlying the air force's interest in decision making by human-machine systems under stress.
21
References to “special Leviathan pushbutton intervention equipment” sound sinister, but they referred only to circuits installed at SDC's System Simulation Research Laboratory that allowed human operators to input decisions used in training the Leviathan program during tests.

To construct their model, the Romes proposed using a large digital computer as a self-organizing logical network rather than as a data processor proceeding through a sequence of logical steps. “Let us suppose that we decide to use the computer in a more direct, a non-computational way. The binary states of the cores are theoretically subject to change thousands of times each second. If we can somehow induce some percentage of these to enter into processes of dynamic interaction with one another under controllable conditions, then direct simulation may be possible. A million cells of storage subject to rapid individual change may provide a mesh of sufficiently fine grain.”
22
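The Romes' “mesh of sufficiently fine grain” can be pictured as a grid of two-state cells that change by interacting with their neighbors rather than by executing a sequence of arithmetic steps. The sketch below is only an illustration of that general idea, using a simple majority rule of my own choosing; the grid size, topology, and update rule are assumptions, not the Romes' actual design.

```python
import random

random.seed(1)

SIZE = 32  # a small stand-in for the Romes' "million cells of storage"

# Each cell is a two-state storage element; the model is carried by
# patterns of activation, not by numerical computation.
mesh = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(r, c):
    """The four orthogonal neighbors, wrapping at the grid edges."""
    return [
        mesh[(r - 1) % SIZE][c], mesh[(r + 1) % SIZE][c],
        mesh[r][(c - 1) % SIZE], mesh[r][(c + 1) % SIZE],
    ]

def step():
    """Let every cell interact with its neighbors at once: a cell adopts
    the majority state around it, keeping its own state on a tie."""
    global mesh
    mesh = [
        [
            1 if sum(neighbors(r, c)) > 2
            else 0 if sum(neighbors(r, c)) < 2
            else mesh[r][c]
            for c in range(SIZE)
        ]
        for r in range(SIZE)
    ]

for _ in range(20):
    step()
active = sum(map(sum, mesh))
print("active cells after 20 steps:", active)
```

Run forward, the random initial pattern coarsens into stable domains, which is the point of “direct simulation”: the structure that emerges is a product of the interactions, not of any sequential program describing it.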

This mesh was to be inoculated with a population of artificial agents corresponding to elements of the reality the model was intended to learn to represent: “The notion of direct representation of real social facts by means of activations in a computing machine can be made clearer if we view the computer employed in such an enterprise as a system of switches, comparable to a large telephone system. . . . Our program, then, begins with a design for an automaton that will be made to operate in a computer. Circuits will be so imposed on the computer that their performance will be analogous to the operations of human agents in real social groups. Next, we shall cut the computer free from its ordinarily employed ability to accomplish mathematical computations. . . . The patterns and entailments of the specific activations will be controlled so that they will be isomorphic with significant aspects of individual human actions. Thus, in a very precise sense, we shall be using a digital computer in an analogue mode. . . . The micro-processes in the computer resemble the micro-functioning of real agents under social constraints.”
23
In their 1962 memorandum, the Romes were more specific about the workings of these “artificial agents,” even coining a new unit, the “taylor” (after F. W. Taylor, founder of time-and-motion studies), to measure the relative values of four different kinds of “social energy” with which individual agents were endowed.
