5. Barber (1991) suggests that the Yangshao culture (5000–3000 BC) might have used silk. Sun et al. (2012) estimate, based on genetic studies, domestication of the silkworm to have occurred about 4,100 years ago.
6. Cook (1984, 144). This story might be too good to withstand historical scrutiny, rather like Procopius’ (Wars VIII.xvii.1–7) story of how the silkworms were supposedly brought to Byzantium by wandering monks, hidden in their hollow bamboo staves (Hunt 2011).
7. Wood (2007); Temple (1986).
8. Pre-Columbian cultures did have the wheel but used it only for toys (probably due to a lack of good draft animals).
9. Koubi (1999); Lerner (1997); Koubi and Lalman (2007); Zeira (2011); Judd et al. (2012).
10. Estimated from a variety of sources. The time gap is often somewhat arbitrary, depending on how exactly “equivalent” capabilities are defined. Radar was used by at least two countries within a couple of years of its introduction, but exact figures in months are hard to come by.
11. The RDS-6 in 1953 was the first test of a bomb with fusion reactions, but the RDS-37 in 1955 was the first “true” fusion bomb, where most of the power came from the fusion reaction.
12. Unconfirmed.
13. Tests in 1989, project cancelled in 1994.
14. Deployed system, capable of a range greater than 5,000 km.
15. Polaris missiles bought from the USA.
16. Current work is underway on the Taimur missile, likely based on Chinese missiles.
17. The RSA-3 rocket tested 1989–90 was intended for satellite launches and/or as an ICBM.
18. MIRV = multiple independently targetable re-entry vehicle, a technology that enables a single ballistic missile to carry multiple warheads that can be programmed to hit different targets.
19. The Agni V system is not yet in service.
20. Ellis (1999).
21. If we model the situation as one where the lag time between projects is drawn from a normal distribution, then the likely distance between the leading project and its closest follower will also depend on how many projects there are. If there are a vast number of projects, then the distance between the first two is likely small even if the variance of the distribution is moderately high (though the expected gap between the lead and the second project declines very slowly with the number of competitors if completion times are normally distributed). However, it is unlikely that there will be a vast number of projects that are each well enough resourced to be serious contenders. (There might be a greater number of projects if there are a large number of different basic approaches that could be pursued, but in that case many of those approaches are likely to prove dead ends.) As suggested, empirically we seem to find that there is usually no more than a handful of serious competitors pursuing any one specific technological goal. The situation is somewhat different in a consumer market where there are many niches for slightly different products and where barriers to entry are low. There are lots of one-person projects designing T-shirts, but only a few firms in the world developing the next generation of graphics cards. (Two firms, AMD and NVIDIA, enjoy a near duopoly at the moment, though Intel is also competing at the lower-performance end of the market.)
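To make the claim in note 21 concrete, here is a minimal Monte Carlo sketch of the model the note assumes; the numerical parameters (a mean completion time of 60 months, a standard deviation of 12 months, and the trial count) are illustrative assumptions of mine, not figures from the text.

```python
# Minimal sketch of the model in note 21 (illustrative parameters, not from the text):
# each project's completion time is drawn from a normal distribution, and we estimate
# the expected gap between the leading project and its closest follower as the number
# of competing projects grows.
import random
import statistics

def expected_lead_gap(n_projects, n_trials=20_000, mean=60.0, sd=12.0):
    """Average gap (in months) between the two earliest completion times."""
    gaps = []
    for _ in range(n_trials):
        times = sorted(random.gauss(mean, sd) for _ in range(n_projects))
        gaps.append(times[1] - times[0])
    return statistics.mean(gaps)

for n in (2, 5, 20, 100):
    print(f"{n:>3} projects: expected gap ~ {expected_lead_gap(n):.1f} months")
# The gap shrinks as the field gets more crowded, but only slowly once n is large,
# which is the qualification made in the note.
```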
22. Bostrom (2006c). One could imagine a singleton whose existence is invisible (e.g. a superintelligence with such advanced technology or insight that it could subtly control world events without any human noticing its interventions); or a singleton that voluntarily imposes very strict limitations on its own exercise of power (e.g. punctiliously confining itself to ensuring that certain treaty-specified international rules—or libertarian principles—are respected). How likely any particular kind of singleton is to arise is of course an empirical question; but conceptually, at least, it is possible to have a good singleton, a bad singleton, a rambunctiously diverse singleton, a blandly monolithic singleton, a crampingly oppressive singleton, or a singleton more akin to an extra law of nature than to a yelling despot.
23. Jones (1985, 344).
24. It might be significant that the Manhattan Project was carried out during wartime. Many of the scientists who participated claimed to be primarily motivated by the wartime situation and the fear that Nazi Germany might develop atomic weapons ahead of the Allies. It might be difficult for many governments to mobilize a similarly intensive and secretive effort in peacetime. The Apollo program, another iconic science/engineering megaproject, received a strong impetus from the Cold War rivalry.
25. Though even if they were looking hard, it is not clear that they would appear (publicly) to be doing so.
26. Cryptographic techniques could enable the collaborating team to be physically dispersed. The only weak link in the communication chain might be the input stage, where the physical act of typing could potentially be observed. But if indoor surveillance became common (by means of microscopic recording devices), those keen on protecting their privacy might develop countermeasures (e.g. special closets that could be sealed off from would-be eavesdropping devices). Whereas physical space might become transparent in a coming surveillance age, cyberspace might possibly become more protected through wider adoption of stronger cryptographic protocols.
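As a purely illustrative aside, a minimal sketch of the kind of primitive note 26 alludes to: two physically dispersed collaborators exchanging an end-to-end encrypted message via public-key authenticated encryption. Nothing in this snippet comes from the text; the library choice (PyNaCl) and the message contents are assumptions.

```python
# Hypothetical illustration of note 26: dispersed collaborators exchange an
# end-to-end encrypted message using public-key authenticated encryption (PyNaCl).
from nacl.public import PrivateKey, Box

alice_sk = PrivateKey.generate()   # Alice's keypair
bob_sk = PrivateKey.generate()     # Bob's keypair

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"design notes, draft 3")

# Bob decrypts (and implicitly authenticates the sender) with the mirror-image box.
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
assert plaintext == b"design notes, draft 3"
```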
27. A totalitarian state might resort to even more coercive measures. Scientists in relevant fields might be swept up and put into work camps, akin to the “academic villages” in Stalinist Russia.
28. When the level of public concern is relatively low, some researchers might welcome a little bit of public fear-mongering because it draws attention to their work and makes the area they work in seem important and exciting. When the level of concern becomes greater, the relevant research communities might change their tune as they begin to worry about funding cuts, regulation, and public backlash. Researchers in neighboring disciplines—such as those parts of computer science and robotics that are not very relevant to artificial general intelligence—might resent the drift of funding and attention away from their own research areas. These researchers might also correctly observe that their work carries no risk whatever of leading to a dangerous intelligence explosion. (Some historical parallels might be drawn with the career of the idea of nanotechnology; see Drexler [2013].)
29. These have been successful in that they have achieved at least some of what they set out to do. How successful they have been in a broader sense (taking into account cost-effectiveness and so forth) is harder to determine. In the case of the International Space Station, for example, there have been huge cost overruns and delays. For details of the problems encountered by the project, see NASA (2013). The Large Hadron Collider project has had some major setbacks, but this might be due to the inherent difficulty of the task. The Human Genome Project achieved success in the end, but seems to have received a speed boost from being forced to compete with Craig Venter’s private corporate effort. Internationally sponsored projects to achieve controlled fusion energy have failed to deliver on expectations, despite massive investment; but again, this might be attributable to the task turning out to be more difficult than anticipated.
30. US Congress, Office of Technology Assessment (1995).
31. Hoffman (2009); Rhodes (2008).
32. Rhodes (1986).
33. The US Navy’s code-breaking organization, OP-20-G, apparently ignored an invitation to gain full knowledge of Britain’s anti-Enigma methods, and failed to inform higher-level US decision makers of Britain’s offer to share its cryptographic secrets (Burke 2001). This gave American leaders the impression that Britain was withholding important information, a cause of friction throughout the war. Britain did share with the Soviet government some of the intelligence they had gleaned from decrypted German communications. In particular, Russia was warned about the German preparations for Operation Barbarossa. But Stalin refused to believe the warning, partly because the British did not disclose how they had obtained the information.
34. For a few years, Russell seems to have advocated the threat of nuclear war to persuade Russia to accept the Baruch plan; later, he was a strong proponent of mutual nuclear disarmament (Russell and Griffin 2001). John von Neumann is reported to have believed that a war between the United States and Russia was inevitable, and to have said, “If you say why not bomb them [the Russians] tomorrow, I say why not bomb them today? If you say today at five o’clock, I say why not one o’clock?” (It is possible that he made this notorious statement to burnish his anti-communist credentials with US defense hawks in the McCarthy era. Whether von Neumann, had he been in charge of US policy, would actually have launched a first strike is impossible to ascertain. See Blair [1957], 96.)
35. Baratta (2004).
36. If the AI is controlled by a group of humans, the problem may apply to this human group, though it is possible that new ways of reliably committing to an agreement will be available by this time, in which case even human groups could avoid this problem of potential internal unraveling and overthrow by a sub-coalition.
CHAPTER 6: COGNITIVE SUPERPOWERS
1. In what sense is humanity a dominant species on Earth? Ecologically speaking, humans are the most common large (~50 kg) animal, but the total human dry biomass (~100 billion kg) is not so impressive compared with that of ants, the family Formicidae (300 billion–3,000 billion kg). Humans and human utility organisms form a very small part (<0.001) of total global biomass. However, croplands and pastures are now among the largest ecosystems on the planet, covering about 35% of the ice-free land surface (Foley et al. 2007). And we appropriate nearly a quarter of net primary productivity according to a typical assessment (Haberl et al. 2007), though estimates range from 3 to over 50% depending mainly on varying definitions of the relevant terms (Haberl et al. 2013). Humans also have the largest geographic coverage of any animal species and top the largest number of different food chains.
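Purely to illustrate the scale of the biomass figures quoted in this note (and using only those figures), a quick back-of-the-envelope comparison:

```python
# Back-of-the-envelope ratios using only the figures quoted in note 1.
human_dry_biomass_kg = 100e9                              # ~100 billion kg
ant_dry_biomass_low_kg, ant_dry_biomass_high_kg = 300e9, 3000e9

print(human_dry_biomass_kg / ant_dry_biomass_low_kg)      # ~0.33 (vs. low ant estimate)
print(human_dry_biomass_kg / ant_dry_biomass_high_kg)     # ~0.03 (vs. high ant estimate)
# So, by these figures, humans weigh in at roughly 3-33% of the ants.
```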
2. Zalasiewicz et al. (2008).
3. See first note to this chapter.
4. Strictly speaking, this may not be quite correct. Intelligence in the human species ranges all the way down to approximately zero (e.g. in the case of embryos or patients in a permanent vegetative state). In qualitative terms, the maximum difference in cognitive ability within the human species is therefore perhaps greater than the difference between any human and a superintelligence. But the point in the text stands if we read “human” as “normally functioning adult.”
5. Gottfredson (2002). See also Carroll (1993) and Deary (2001).
6. See Legg (2008). Roughly, Legg proposes to measure the intelligence of a reinforcement-learning agent as its expected performance in all reward-summable environments, where each such environment receives a weight determined by its Kolmogorov complexity. We will explain what is meant by reinforcement learning in Chapter 12. See also Dowe and Hernández-Orallo (2012) and Hibbard (2011).
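Schematically, and with the caveat that the notation here is a simplification of mine rather than something given in the note, Legg's measure can be written as a complexity-weighted sum of the agent's expected value across environments:

\[ \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu} \]

where E is the class of reward-summable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the expected total reward agent \pi obtains in \mu; environments with shorter descriptions thus receive greater weight.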
7. With regard to technology research in areas like biotechnology and nanotechnology, what a superintelligence would excel at is the design and modeling of new structures. To the extent that design ingenuity and modeling cannot substitute for physical experimentation, the superintelligence’s performance advantage may be qualified by its level of access to the requisite experimental apparatus.
8. E.g., Drexler (1992, 2013).
9. A narrow-domain AI could of course have significant commercial applications, but this does not mean that it would have the economic productivity superpower. For example, even if a narrow-domain AI earned its owners several billions of dollars a year, this would still be four orders of magnitude less than the rest of the world economy. In order for the system directly and substantially to increase world product, an AI would need to be able to perform many kinds of work; that is, it would need competence in many domains.
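For the "four orders of magnitude" arithmetic, assuming a gross world product on the order of \$70–80 trillion per year around the time of writing (a figure not given in the note):

\[ \frac{\text{a few billion USD/yr}}{\text{world product}} \;\approx\; \frac{5 \times 10^{9}}{7.5 \times 10^{13}} \;\approx\; 10^{-4}, \]

i.e. roughly four orders of magnitude smaller than the rest of the world economy, as the note states.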
10. The criterion does not rule out all scenarios in which the AI fails. For example, the AI might rationally take a gamble that has a high chance of failing. In this case, however, the criterion could take the form that (a) the AI should make an unbiased estimate of the gamble’s low chance of success and (b) there should be no better gamble available to the AI that we present-day humans can think of but that the AI overlooks.
11. Cf. Freitas (2000) and Vassar and Freitas (2006).
12. Yudkowsky (2008a).