Author: Nick Bostrom
True superintelligence (as opposed to marginal increases in current levels of intelligence) might plausibly first be attained via the AI path. There are, however, many fundamental uncertainties along this path. This makes it difficult to rigorously assess how long the path is or how many obstacles there are along the way. The whole brain emulation path also has some chance of being the quickest route to superintelligence. Since progress along this path requires mainly incremental technological advances rather than theoretical breakthroughs, a strong case can be made that it will eventually succeed. It seems fairly likely, however, that even if progress along the whole brain emulation path is swift, artificial intelligence will nevertheless be first to cross the finishing line: this is because of the possibility of neuromorphic AIs based on partial emulations.
Biological cognitive enhancements are clearly feasible, particularly ones based on genetic selection. Iterated embryo selection currently seems like an especially promising technology. Compared with possible breakthroughs in machine intelligence, however, biological enhancements would be relatively slow and gradual. They would, at best, result in relatively weak forms of superintelligence (more on this shortly).
The clear feasibility of biological enhancement should increase our confidence that machine intelligence is ultimately achievable, since enhanced human scientists and engineers will be able to make more and faster progress than their au naturel counterparts. Especially in scenarios in which machine intelligence is delayed beyond mid-century, the increasingly cognitively enhanced cohorts coming onstage will play a growing role in subsequent developments.
Brain–computer interfaces look unlikely as a source of superintelligence. Improvements in networks and organizations might result in weakly superintelligent forms of collective intelligence in the long run; but more likely, they will play an enabling role similar to that of biological cognitive enhancement, gradually increasing humanity’s effective ability to solve intellectual problems. Compared with biological enhancements, advances in networks and organization will make a difference sooner—in fact, such advances are occurring continuously and are having a significant impact already. However, improvements in networks and organizations may yield narrower increases in our problem-solving capacity than will improvements in biological cognition—boosting “collective intelligence” rather than “quality intelligence,” to anticipate a distinction we are about to introduce in the next chapter.
So what, exactly, do we mean by “superintelligence”? While we do not wish to get bogged down in terminological swamps, something needs to be said to clarify the conceptual ground. This chapter identifies three different forms of superintelligence, and argues that they are, in a practically relevant sense, equivalent. We also show that the potential for intelligence in a machine substrate is vastly greater than in a biological substrate. Machines have a number of fundamental advantages which will give them overwhelming superiority. Biological humans, even if enhanced, will be outclassed.
Many machines and nonhuman animals already perform at superhuman levels in narrow domains. Bats interpret sonar signals better than man, calculators outperform us in arithmetic, and chess programs beat us in chess. The range of specific tasks that can be better performed by software will continue to expand. But although specialized information processing systems will have many uses, there are additional profound issues that arise only with the prospect of machine intellects that have enough general intelligence to substitute for humans across the board.
As previously indicated, we use the term “superintelligence” to refer to intellects that greatly outperform the best current human minds across many very general cognitive domains. This is still quite vague. Different kinds of system with rather disparate performance attributes could qualify as superintelligences under this definition. To advance the analysis, it is helpful to disaggregate this simple notion of superintelligence by distinguishing different bundles of intellectual super-capabilities. There are many ways in which such decomposition could be done. Here we will differentiate between three forms: speed superintelligence, collective superintelligence, and quality superintelligence.
A speed superintelligence is an intellect that is just like a human mind but faster. This is conceptually the easiest form of superintelligence to analyze.[1] We can define speed superintelligence as follows:

Speed superintelligence: A system that can do all that a human intellect can do, but much faster.
By “much” we here mean something like “multiple orders of magnitude.” But rather than try to expunge every remnant of vagueness from the definition, we will entrust the reader with interpreting it sensibly.[2]

The simplest example of speed superintelligence would be a whole brain emulation running on fast hardware.[3] An emulation operating at a speed of ten thousand times that of a biological brain would be able to read a book in a few seconds and write a PhD thesis in an afternoon. With a speedup factor of a million, an emulation could accomplish an entire millennium of intellectual work in one working day.[4]
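As a rough check of these speedup figures, the sketch below redoes the arithmetic in Python, assuming (my assumptions, not figures given in the text) about eight hours to read a book and an eight-hour working day:

```python
# Back-of-the-envelope check of the speedup claims above.
# Assumed inputs: ~8 hours of human reading time per book, an 8-hour working day.

HOURS_PER_BOOK = 8
WORKING_DAY_HOURS = 8

# At a 10,000x speedup, reading a book takes only seconds of wall-clock time.
speedup = 10_000
print(HOURS_PER_BOOK * 3600 / speedup, "seconds to read a book")        # ~2.9 seconds

# At a 1,000,000x speedup, one working day of wall-clock time corresponds
# to roughly a millennium of subjective time.
speedup = 1_000_000
subjective_years = WORKING_DAY_HOURS * speedup / (24 * 365.25)
print(round(subjective_years), "subjective years per working day")      # ~913 years
```

On those assumptions, a 10,000× mind finishes a book in about three seconds of wall-clock time, and a 1,000,000× mind packs roughly nine centuries of subjective time into a single working day, which is the order of magnitude the text describes.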
To such a fast mind, events in the external world appear to unfold in slow motion. Suppose your mind ran at 10,000×. If your fleshly friend should happen to drop his teacup, you could watch the porcelain slowly descend toward the carpet over the course of several hours, like a comet silently gliding through space toward an assignation with a far-off planet; and, as the anticipation of the coming crash tardily propagates through the folds of your friend’s gray matter and from thence out into his peripheral nervous system, you could observe his body gradually assuming the aspect of a frozen oops—enough time for you not only to order a replacement cup but also to read a couple of scientific papers and take a nap.
Because of this apparent time dilation of the material world, a speed superintelligence would prefer to work with digital objects. It could live in virtual reality and deal in the information economy. Alternatively, it could interact with the physical environment by means of nanoscale manipulators, since limbs at such small scales could operate faster than macroscopic appendages. (The characteristic frequency of a system tends to be inversely proportional to its length scale.[5]) A fast mind might commune mainly with other fast minds rather than with bradytelic, molasses-like humans.
The speed of light becomes an increasingly important constraint as minds get faster, since faster minds face greater opportunity costs in the use of their time for traveling or communicating over long distances.[6] Light is roughly a million times faster than a jet plane, so it would take a digital agent with a mental speedup of 1,000,000× about the same amount of subjective time to travel across the globe as it does a contemporary human journeyer. Dialing somebody long distance would take as long as getting there “in person,” though it would be cheaper as a call would require less bandwidth. Agents with large mental speedups who want to converse extensively might find it advantageous to move near one another. Extremely fast minds with need for frequent interaction (such as members of a work team) may take up residence in computers located in the same building to avoid frustrating latencies.
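To put a rough number on this, here is a minimal sketch under some assumptions of my own (none of which are given in the text): an antipodal surface distance of about 20,000 km, signals traveling at the vacuum speed of light, and a jet cruising at roughly 900 km/h:

```python
# Minimal sketch of the light-lag argument.
# Assumed inputs: ~20,000 km antipodal distance, signals at c, jet at ~900 km/h.

C = 3.0e8                  # speed of light, m/s
JET_SPEED = 900 / 3.6      # jet cruising speed, m/s
DISTANCE = 2.0e7           # roughly half the Earth's circumference, m

print(C / JET_SPEED, "ratio of light speed to jet speed")          # ~1.2e6

speedup = 1_000_000
objective_seconds = DISTANCE / C                                    # ~0.067 s of wall-clock delay
subjective_hours = objective_seconds * speedup / 3600
print(subjective_hours, "subjective hours to cross the globe")      # ~18.5 hours
```

On these assumptions a signal crosses the globe in roughly 0.07 seconds of wall-clock time, which a 1,000,000× mind experiences as about 18 subjective hours, comparable to a long-haul flight for an ordinary traveler.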
Another form of superintelligence is a system achieving superior performance by aggregating large numbers of smaller intelligences:
Collective superintelligence: A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.
Collective superintelligence is less conceptually clear-cut than speed superintelligence.[7] However, it is more familiar empirically. While we have no experience with human-level minds that differ significantly in clock speed, we do have ample experience with collective intelligence, systems composed of various numbers of human-level components working together with various degrees of efficiency. Firms, work teams, gossip networks, advocacy groups, academic communities, countries, even humankind as a whole, can—if we adopt a somewhat abstract perspective—be viewed as loosely defined “systems” capable of solving classes of intellectual problems. From experience, we have some sense of how easily different tasks succumb to the efforts of organizations of various size and composition.
Collective intelligence excels at solving problems that can be readily broken into parts such that solutions to sub-problems can be pursued in parallel and verified independently. Tasks like building a space shuttle or operating a hamburger franchise offer myriad opportunities for division of labor: different engineers work on different components of the spacecraft; different staffs operate different restaurants. In academia, the rigid division of researchers, students, journals, grants, and prizes into separate self-contained disciplines—though unconducive to the type of work represented by this book—might (only in a conciliatory and mellow frame of mind) be viewed as a necessary accommodation to the practicalities of allowing large numbers of diversely motivated individuals and teams to contribute to the growth of human knowledge while working relatively independently, each plowing their own furrow.
A system’s collective intelligence could be enhanced by expanding the number or the quality of its constituent intellects, or by improving the quality of their organization.[8] To obtain a collective superintelligence from any present-day collective intelligence would require a very great degree of enhancement. The resulting system would need to be capable of vastly outperforming any current collective intelligence or other cognitive system across many very general domains. A new conference format that lets scholars exchange information more effectively, or a new collaborative information-filtering algorithm that better predicted users’ ratings of books and movies, would clearly not on its own amount to anything approaching collective superintelligence. Nor would a 50% increase in the world population, or an improvement in pedagogical method that enabled students to complete a school day in four hours instead of six. Some far more extreme growth of humanity’s collective cognitive capacity would be required to meet the criterion of collective superintelligence.
Note that the threshold for collective superintelligence is indexed to the performance levels of the present—that is, the early twenty-first century. Over the course of human prehistory, and again over the course of human history, humanity’s collective intelligence has grown by very large factors. World population, for example, has increased by at least a factor of a thousand since the Pleistocene.[9] On this basis alone, current levels of human collective intelligence could be regarded as approaching superintelligence relative to a Pleistocene baseline. Some improvements in communications technologies—especially spoken language, but perhaps also cities, writing, and printing—could also be argued to have, individually or in combination, provided super-sized boosts, in the sense that if another innovation of comparable impact on our collective intellectual problem-solving capacity were to happen, it would result in collective superintelligence.[10]
A certain kind of reader will be tempted at this point to interject that modern society does not seem so particularly intelligent. Perhaps some unwelcome political decision has just been made in the reader’s home country, and the apparent unwisdom of that decision now looms large in the reader’s mind as evidence of the mental incapacity of the modern era. And is it not the case that contemporary humanity is idolizing material consumption, depleting natural resources, polluting the environment, decimating species diversity, all the while failing to remedy screaming global injustices and neglecting paramount humanistic or spiritual values? However, setting aside the question of how modernity’s shortcomings stack up against the not-so-inconsiderable failings of earlier epochs, nothing in our definition of collective superintelligence implies that a society with greater collective intelligence is necessarily better off. The definition does not even imply that the more collectively intelligent society is wiser. We can think of wisdom as the ability to get the important things approximately right. It is then possible to imagine an organization composed of a very large cadre of very efficiently coordinated knowledge workers, who collectively can solve intellectual problems across many very general domains. This organization, let us suppose, can operate most kinds of businesses, invent most kinds of technologies, and optimize most kinds of processes. Even so, it might get a few key big-picture issues entirely wrong—for instance, it may fail to take proper precautions against existential risks—and as a result pursue a short explosive growth spurt that ends ingloriously in total collapse. Such an organization could have a very high degree of collective intelligence; if sufficiently high, the organization is a collective superintelligence. We should resist the temptation to roll every normatively desirable attribute into one giant amorphous concept of mental functioning, as though one could never find one admirable trait without all the others being equally present. Instead, we should recognize that there can exist instrumentally powerful information processing systems—intelligent systems—that are neither inherently good nor reliably wise. But we will revisit this issue in Chapter 7.
Collective superintelligence could be either loosely or tightly integrated. To illustrate a case of loosely integrated collective superintelligence, imagine a planet, MegaEarth, which has the same level of communication and coordination technologies that we currently have on the real Earth but with a population one million times as large. With such a huge population, the total intellectual workforce on MegaEarth would be correspondingly larger than on our planet. Suppose that a scientific genius of the caliber of a Newton or an Einstein arises at least once for every 10 billion people: then on MegaEarth there would be 700,000 such geniuses living contemporaneously, alongside proportionally vast multitudes of slightly lesser talents. New ideas and technologies would be developed at a furious pace, and global civilization on MegaEarth would constitute a loosely integrated collective superintelligence.[11]
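As a quick sanity check of the 700,000 figure, the arithmetic below assumes a baseline Earth population of roughly 7 billion (my assumption; the text specifies only the million-fold population multiplier and the one-genius-per-10-billion rate):

```python
# Sanity check of the MegaEarth genius count.
# Assumed input: a baseline Earth population of ~7 billion.

earth_population = 7e9
megaearth_population = earth_population * 1e6   # population one million times as large
genius_rate = 1 / 1e10                          # one Newton- or Einstein-caliber genius per 10 billion people

print(int(megaearth_population * genius_rate))  # 700000 contemporaneous geniuses
```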
If we gradually increase the level of integration of a collective intelligence, it may eventually become a unified intellect—a single large “mind” as opposed to a mere assemblage of loosely interacting smaller human minds.[12] The inhabitants of MegaEarth could take steps in that direction by improving communications and coordination technologies and by developing better ways for many individuals to work on any hard intellectual problem together. A collective superintelligence could thus, after gaining sufficiently in integration, become a “quality superintelligence.”
We can distinguish a third form of superintelligence.

Quality superintelligence: A system that is at least as fast as a human mind and vastly qualitatively smarter.
As with collective intelligence, intelligence quality is also a somewhat murky concept; and in this case the difficulty is compounded by our lack of experience with any variations in intelligence quality above the upper end of the present human distribution. We can, however, get some grasp of the notion by considering some related cases.
First, we can expand the range of our reference points by considering nonhuman animals, which have intelligence of lower quality. (This is not meant as a speciesist remark. A zebrafish has a quality of intelligence that is excellently adapted to its ecological needs; but the relevant perspective here is a more anthropocentric one: our concern is with performance on humanly relevant complex cognitive tasks.) Nonhuman animals lack complex structured language; they are capable of no or only rudimentary tool use and tool construction; they are severely restricted in their ability to make long-term plans; and they have very limited abstract reasoning ability. Nor are these limitations fully explained by a lack of speed or of collective intelligence among nonhuman animal minds. In terms of raw computational power, human brains are probably inferior to those of some large animals, including elephants and whales. And although humanity’s complex technological civilization would be impossible without our massive advantage in collective intelligence, not all distinctly human cognitive capabilities depend on collective intelligence. Many are highly developed even in small, isolated hunter–gatherer bands.[13]
And many are not nearly as highly developed among highly organized nonhuman animals, such as chimpanzees and dolphins intensely trained by human instructors, or ants living in their own large and well-ordered societies. Evidently, the remarkable intellectual achievements of Homo sapiens are to a significant extent attributable to specific features of our brain architecture, features that depend on a unique genetic endowment not shared by other animals. This observation can help us illustrate the concept of quality superintelligence: it is intelligence of quality at least as superior to that of human intelligence as the quality of human intelligence is superior to that of elephants’, dolphins’, or chimpanzees’.