In 1677, William Petty, in a letter to his cousin Robert Southwell on “The Scale of Creatures,” wrote that “between God and man, there are holy Angells, Created Intelligences, and subtile materiall beings; as there are between man and the lowest animall a multitude of intermediate natures.”[45] Whether he saw economic systems as among these created intelligences is left unsaid. By the time of Alfred Smee, the forest was obscured by the trees. Proposing, in 1851, that mechanical processing of ideas would require a relational and differential machine the size of the City of London, Smee failed to notice from his quarters on Threadneedle Street that the Bank of England's network of linked transactions, mediated by a hive of accountants, already constituted such a machine. “The average daily transactions in the London Bankers' Clearing House amount to about twenty millions of pounds sterling, which if paid in gold coin would weigh about 157 tons,” reported Stanley Jevons in 1896.[46]
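As a rough check of Jevons's figure (my own back-of-envelope arithmetic, not anything in the text, and assuming payment in one-pound gold sovereigns of roughly 7.99 grams each, with a long ton of about 1,016 kg):

\[
2 \times 10^{7} \times 7.99\,\mathrm{g} \approx 1.6 \times 10^{5}\,\mathrm{kg} \approx 157\ \text{long tons},
\]

which is just the number Jevons reports.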

John von Neumann, although halted in midstream, was working toward a theory of the economy of mind. In the universe according to von Neumann, life and nature are playing a zero-sum game. Physics is the rules. Economics—which von Neumann perceived as closely related to thermodynamics—is the study of how organisms and organizations develop strategies that increase their chances for reward. Von Neumann and Morgenstern showed that the formation of coalitions holds the key, a conclusion to which all observed evidence, including Nils Barricelli's experiments with numerical symbioorganisms, lends support. These coalitions are forged on many levels—between molecules, between cells, between groups of neurons, between individual organisms, between languages, and between ideas. The badge of success is worn most visibly by the members of a species, who constitute an enduring coalition over distance and over time. Species may in turn form coalitions, and, perhaps, biology may form coalitions with geological and atmospheric processes otherwise viewed as being on the side of nature, not on the side of life.

Coalitions, once established, can be maintained across widening gaps, such as the levels of abstraction that separate the metaphysics of a language from the metabolism of its host. Fortunes shift, and if a symbiont develops a strategy that dominates the behavior of its host, the roles may be reversed. Our own species is doing its best to adjust to a three-way coalition of self-reproducing human beings, self-reproducing numbers, and self-reproducing machines. Signs of intelligence are evident at every turn, but because this intelligence envelops us in all directions the whole picture lies beyond our grasp. We have made only limited progress in the three hundred years since Robert Hooke explained how the soul is somehow “apprehensive” of “a continued Chain of Ideas coyled up in the Repository of the Brain.”[47] What mind, if any, will become apprehensive of the great coiling of ideas now under way is not a meaningless question, but it is still too early in the game to expect an answer that is meaningful to us.

10

THERE'S PLENTY OF ROOM AT THE TOP

We're doing this the way you'd plan walkways in a park: Plant grass, then put sidewalks where the paths form.

—JOE VAN LONE [1]

“There's Plenty of Room at the Bottom” was the title of an after-dinner talk given by physicist Richard Feynman at the California Institute of Technology on 29 December 1959. Feynman's timing was perfect. He kept his audience awake with a series of outlandish speculations that soon turned out to be spectacularly right. “In the year 2000, when they look back at this age,” announced Feynman, “they will wonder why it was not until the year 1960 that anybody began seriously to move in this direction.” Imagining small machines being instructed to build successively smaller and smaller machines, Feynman estimated the orders of magnitude by which such devices could become cheaper, faster, more numerous, and collectively more powerful. Molecules, and eventually atoms, would supply mass-produced low-cost parts.

“Computing machines are very large; they fill rooms,” said Feynman. “Why can't we make them very small, make them of little wires, little elements—and by little, I mean little. For instance, the wires should be 10 or 100 atoms in diameter, and the circuits should be a few thousand angstroms across.” Besides all the other good reasons to avoid building computers the size (and cost) of the Pentagon, Feynman pointed out that “information cannot go any faster than the speed of light—so, ultimately, when our computers get faster and faster and more and more elaborate, we will have to make them smaller and smaller.

“How can we make such a device? What kind of manufacturing processes would we use?” Feynman asked. “One possibility we might consider, since we have talked about writing by putting atoms down in a certain arrangement, would be to evaporate the material, then evaporate the insulator next to it. Then, for the next layer, evaporate another position of a wire, another insulator, and so on. So, you simply evaporate until you have a block of stuff which has the elements—coils and condensers, transistors and so on—of exceedingly fine dimensions.”[2]
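Feynman's speed-of-light point is easy to make concrete. The short Python sketch below is purely illustrative (mine, not anything from the talk): it computes how far a signal travelling at the speed of light can go in one clock cycle. At a gigahertz the answer is already about thirty centimeters, so a machine whose parts must talk to each other every cycle has to fit within roughly that much space.

# Illustration only: distance light travels in one clock cycle.
# A machine whose parts must exchange signals within a single cycle
# cannot be much larger than this.
C = 299_792_458  # speed of light in vacuum, m/s

for clock_hz in (1e6, 1e9, 1e12):        # 1 MHz, 1 GHz, 1 THz
    reach_cm = 100 * C / clock_hz        # centimeters covered per cycle
    print(f"{clock_hz:9.0e} Hz -> {reach_cm:12.4f} cm per cycle")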

Feynman did not limit his speculations to electronic microprocessors, however intriguing or lucrative these prospects might be, but continued on down to atom-by-atom manufacturing, “something, in principle, that can be done; but, in practice, has not been done because we are too big.” He greeted the implications with an enthusiasm “inspired by the biological phenomena in which chemical forces are used in a repetitious fashion to produce all kinds of weird effects (one of which is the author).”[3] He left other even weirder effects unsaid. Many of Feynman's techniques are now in routine use, the convergence between microbiology and microtechnology steadily eroding the underpinnings of distinction between living organisms and machines. No new laws of physics have turned up to render his predictions less probable than they were in 1959.

Yes, there is plenty of room at the bottom—but nature got there first. Life began at the bottom. Microorganisms have had time to settle in; most available ecological niches have long been filled. Many steps higher on the scale, insects have been exploring millimeter-scale engineering and socially distributed intelligence for so long that it would take a concerted effort to catch up. Insects might be reinvented from the top down by the miniaturization of machines, but we are more likely to reinvent them from the bottom up, by recombinant entomology, for the same reasons we are reengineering existing one-celled organisms rather than developing new ones from scratch.

Things are cheaper and faster at the bottom, but it is much less crowded at the top. The size of living organisms has been limited by gravity, chemistry, and the inability to keep anything much larger than a dinosaur under central-nervous-system control. Life on earth made it as far as the blue whale, the giant sequoia, the termite colony, the coral reef—and then we came along. Large systems, in biology as in bureaucracy, are relatively slow. “I find it no easier to picture a completely socialized British Empire or United States,” wrote J. B. S. Haldane, “than an elephant turning somersaults or a hippopotamus jumping a hedge.”[4]

Life now faces opportunities of unprecedented scale. Microprocessors divide time into imperceptibly fine increments, releasing signals that span distance at the speed of light. Systems communicate globally and endure indefinitely over time. Large, long-lived, yet very fast composite organisms are free from the constraints that have limited biology in the past. Since the process of organizing large complex systems remains mysterious to us, we have referred to these developments as self-organizing systems or self-organizing machines.

Theories of self-organization became fashionable in the 1950s, generating the same excitement (and disappointments) that the “new” science of complexity has generated in recent years. Self-organization appeared to hold the key to natural phenomena such as morphogenesis, epigenesis, and evolution, inviting the deliberate creation of systems that grow and learn. Unifying principles were discovered among organizations ranging from a single cell to the human nervous system to a planetary ecology, with implications for everything in between. All hands joined in. Alan Turing was working on a mathematical model of morphogenesis, theorizing how self-organizing chemical processes might govern the growth of living forms, when his own life came to an end in 1954; John von Neumann died three years later in the midst of developing a theory of self-reproducing machines.

“The adjective [self-organizing] is, if used loosely, ambiguous, and, if used precisely, self-contradictory,” observed British neurologist W. Ross Ashby in 1961. “There is a first meaning that is simple and unobjectionable,” Ashby explained. “This refers to the system that starts with its parts separate (so that the behavior of each is independent of the others' states) and whose parts then act so that they change towards forming connections of some type. Such a system is ‘self-organizing' in the sense that it changes from ‘parts separated' to ‘parts joined.' An example is the embryo nervous system, which starts with cells having little or no effect on one another, and changes, by the growth of dendrites and formation of synapses, to one in which each part's behavior is very much affected by the other parts.”[5] The second type of self-organizing behavior—where interconnected components become organized in a productive or meaningful way—is perplexing to define. In the infant brain, for example, self-organization is achieved less by the growth of new connections and more by allowing meaningless connections to die out. Meaning, however, has to be supplied from outside. Any individual system can only be self-organizing with reference to some other system; this frame of reference may be as complicated as the visible universe or as simple as a single channel of Morse code.

William Ross Ashby (1903–1972) began his career as a psychiatrist, diversifying into neurology by way of pathology after serving in the Royal Army Medical Corps during World War II. By studying the structure of the human brain and the peculiarities of human behavior, he sought to unravel the mysteries in between. Like von Neumann, he hoped to explain how mind can be so robust yet composed of machinery so frail. Two years before his death, Ashby reported on a series of computer simulations measuring the stability of complex dynamic systems as a function of the degree of interconnection between component parts. The evidence suggested that “all large complex dynamic systems may be expected to show the property of being stable up to a critical level of connectance, and then, as the connectance increases, to go suddenly unstable.”[6] Implications range from the origins of schizophrenia to the stability of market economies and the performance of telecommunications webs.
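The kind of experiment described above is easy to sketch today; the few lines of Python below are my own reconstruction of the idea, not Ashby's program. Each trial builds a random linear system whose parts are stable in isolation, couples a fraction of the part pairs at random (the connectance), and tests stability from the eigenvalues of the resulting matrix. The fraction of stable systems stays near one at low connectance and then collapses abruptly as connectance rises.

# Sketch (a reconstruction, not Ashby's code): stability of random linear
# systems dx/dt = A x as a function of connectance, the probability that
# any one component directly affects another.
import numpy as np

rng = np.random.default_rng(0)

def is_stable(n, connectance):
    A = -np.eye(n)                               # each part is stable on its own
    coupled = rng.random((n, n)) < connectance   # which interconnections exist
    np.fill_diagonal(coupled, False)
    A = A + coupled * rng.uniform(-1.0, 1.0, (n, n))
    return np.linalg.eigvals(A).real.max() < 0   # stable iff every eigenvalue has negative real part

n, trials = 20, 200
for c in (0.05, 0.10, 0.15, 0.20, 0.30, 0.50):
    p = sum(is_stable(n, c) for _ in range(trials)) / trials
    print(f"connectance {c:.2f}: stable in {p:4.0%} of {trials} trials")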

Ashby formulated a concise set of principles of self-organizing systems in 1947, demonstrating “that a machine can be at the same time (a) strictly determinate in its actions, and (b) yet demonstrate a self-induced change of organisation.”[7] This work followed an earlier paper on adaptation by trial and error, written in 1943 but delayed by the war, in which he observed that “an outstanding property of the nervous system is that it is self-organizing, i.e., in contact with a new environment the nervous system tends to develop that internal organization which leads to behavior adapted to that environment.”[8] Generalizing such behavior so that it was “not in any way restricted to mechanical systems with Newtonian dynamics,” Ashby concluded that “‘adaptation by trial and error' . . . is in no way special to living things, that it is an elementary and fundamental property of all matter, and . . . no ‘vital' or ‘selective' hypothesis is required.”[9] Starting from a rigorous definition of the concepts of environment, machine, equilibrium, and adaptation, he developed a simple mathematical model showing how changes in the environment cause a machine to break, that is, to switch to a different equilibrium state. “The development of a nervous system will provide vastly greater opportunities both for the number of breaks available and also for complexity and variety of organization,” he wrote. “The difference, from this point of view, is solely one of degree.”[10]
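The flavor of that model can be suggested with a toy of my own construction (none of the names or numbers below are Ashby's): a machine that is strictly determinate for a given parameter settles into the equilibrium that parameter fixes; when a change of environment drives its essential variable outside safe limits, the machine breaks, redrawing the parameter at random, and keeps breaking until it lands in an equilibrium adapted to the new environment.

# Toy illustration of "breaks" and adaptation by trial and error
# (my construction, not Ashby's model).
import random

random.seed(1)

def adapt(environment, p=0.0, limit=1.0, max_breaks=1000):
    """Return (equilibrium, parameter, number of breaks needed)."""
    breaks = 0
    while True:
        x = environment + p              # equilibrium the determinate machine settles into
        if abs(x) <= limit:              # essential variable within limits: adapted
            return x, p, breaks
        p = random.uniform(-5.0, 5.0)    # out of limits: the machine "breaks" to a new state
        breaks += 1
        if breaks >= max_breaks:         # give up (should not happen here)
            return x, p, breaks

for env in (0.0, 3.0, -4.0):
    x, p, b = adapt(env)
    print(f"environment {env:+.1f}: adapted at x = {x:+.2f} after {b} break(s)")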

When the cybernetics movement took form in the postwar years, Ashby's ideas were folded in. His Design for a Brain: The Origin of Adaptive Behaviour, published in 1952, was adopted as one of the central texts in the new field. Ashby's “homeostat,” the electromechanical embodiment of his ideas on equilibrium-seeking machines, behaved like a cat that turns over and goes back to sleep when it is disturbed. His “Law of Requisite Variety” held that the complexity of an effective control system corresponds to the complexity of the system under its control.
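The law has a standard quantitative reading (the usual textbook form, not a quotation from this book): if D stands for the disturbances reaching a system, R for the regulator, and E for the outcomes at the essential variables, then in entropy terms

\[
H(E) \;\ge\; H(D) - H(R),
\]

so the outcomes can be confined to a narrow band only if the regulator commands at least as much variety as the disturbances it must absorb. In Ashby's slogan, only variety can destroy variety.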

Ashby believed that the “spontaneous generation of organization” underlying the origins of life and other improbabilities was not the exception but the rule. “Every isolated determinate dynamic system obeying unchanging laws will develop ‘organisms' that are adapted to their ‘environments,'” he argued. “There is no difficulty, in principle, in developing synthetic organisms as complex and as intelligent as we please. But . . . their intelligence will be an adaptation to, and a specialization towards, their particular environment, with no implication of validity for any other environment such as ours.”[11]
