Superintelligence: Paths, Dangers, Strategies

Nick Bostrom

37. On the other hand, public oversight by a single government would risk producing an outcome in which one nation monopolizes the gains. This outcome seems inferior to one in which unaccountable altruists ensure that everybody stands to gain. Furthermore, oversight by a national government would not necessarily mean that even all the citizens of that country receive a share of the benefit: depending on the country in question, there is a greater or smaller risk that all the benefits would be captured by a political elite or a few self-serving agency personnel.

38. One qualification is that the use of incentive wrapping (as discussed in Chapter 12) might in some circumstances encourage people to join a project as active collaborators rather than passive free-riders.

39. Diminishing returns would seem to set in at a much smaller scale. Most people would rather have one star than a one-in-a-billion chance of a galaxy with a billion stars. Indeed, most people would rather have a billionth of the resources on Earth than a one-in-a-billion chance of owning the entire planet.
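
To make the arithmetic behind this preference concrete, here is a minimal sketch in Python, assuming a logarithmic utility function purely for illustration (nothing in the text commits to this particular shape):

```python
import math

def utility(stars: float) -> float:
    # A concave (diminishing-returns) utility function; log(1 + x) is one
    # illustrative choice, and any strictly concave function behaves similarly.
    return math.log1p(stars)

sure_thing = utility(1)          # one star for certain: ~0.693 utils
p, prize = 1e-9, 1e9             # one-in-a-billion chance of a billion stars
gamble = p * utility(prize)      # expected utility of the gamble: ~2.1e-08 utils

print(f"sure star: {sure_thing:.3f}   gamble: {gamble:.2e}")

# Under linear (risk-neutral) utility the two options would tie at exactly one
# expected star; any degree of concavity makes the sure star the clear winner.
```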

40. Cf. Shulman (2010a).

41. Aggregative ethical theories run into trouble when the idea that the cosmos might be infinite is taken seriously; see Bostrom (2011b). There may also be trouble when the idea of ridiculously large but finite values is taken seriously; see Bostrom (2009b).

42. If one makes a computer larger, one eventually faces relativistic constraints arising from communication latencies between the different parts of the computer—signals do not propagate faster than light. If one shrinks the computer, one encounters quantum limits to miniaturization. If one increases the density of the computer, one slams into the black hole limit. Admittedly, we cannot be completely certain that new physics will not one day be discovered offering some way around these limitations.
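
The first of these constraints is easy to quantify: a signal needs at least d/c seconds to cross a computer of diameter d. A back-of-the-envelope sketch, with illustrative sizes:

```python
C = 299_792_458  # speed of light in vacuum, m/s

# Minimum one-way communication latency across a computer of diameter d is d / c.
for label, diameter_m in [
    ("desktop-sized (0.3 m)", 0.3),
    ("warehouse-sized (100 m)", 100.0),
    ("Earth-diameter (1.27e7 m)", 1.27e7),
]:
    latency_ms = diameter_m / C * 1_000
    print(f"{label}: at least {latency_ms:.2e} ms per one-way signal")
```

An Earth-sized machine already pays roughly 42 milliseconds per one-way signal, an eternity compared with the sub-nanosecond cycle times of its components.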

43. The number of copies of a person would scale linearly with resources with no upper bound. Yet it is not clear how much the average human being would value having multiple copies of herself. Even those people who would prefer to be multiply instantiated may not have a utility function that is linear with increasing number of copies. Copy numbers, like life years, might have diminishing returns in the typical person’s utility function.

44. A singleton is highly internally collaborative at the highest level of decision-making. A singleton could have a lot of non-collaboration and conflict at lower levels, if the higher-level agency that constitutes the singleton chooses to have things that way.

45. If each rival AI team is convinced that the other teams are so misguided as to have no chance of producing an intelligence explosion, then one reason for collaboration—avoiding the race dynamic—is obviated: each team should independently choose to go slower in the confident belief that it lacks any serious competition.

46. A PhD student.

47. This formulation is intended to be read so as to include a prescription that the well-being of nonhuman animals and other sentient beings (including digital minds) that exist or may come to exist be given due consideration. It is not meant to be read as a license for one AI developer to substitute his or her own moral intuitions for those of the wider moral community. The principle is consistent with the “coherent extrapolated volition” approach discussed in Chapter 12, with an extrapolation base encompassing all humans.

A further clarification: The formulation is not intended to necessarily exclude the possibility of post-transition property rights in artificial superintelligences or their constituent algorithms and data structures. The formulation is meant to be agnostic about what legal or political systems would best serve to organize transactions within a hypothetical future posthuman society. What the formulation is meant to assert is that the choice of such a system, insofar as its selection is causally determined by how superintelligence is initially developed, should be made on the basis of the stated criterion; that is, the post-transition constitutional system should be chosen for the benefit of all of humanity and in the service of widely shared ethical ideals—as opposed to, for instance, for the benefit merely of whoever happened to be the first to develop superintelligence.

48. Refinements of the windfall clause are obviously possible. For example, perhaps the threshold should be expressed in per capita terms, or maybe the winner should be allowed to keep a somewhat larger than equal share of the overshoot in order to more strongly incentivize further production (some version of Rawls’s maximin principle might be attractive here). Other refinements would refocus the clause away from dollar amounts and restate it in terms of “influence on humanity’s future” or “degree to which different parties’ interests are weighed in a future singleton’s utility function” or some such.
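
As a rough sketch of how the first two refinements might be parameterized, the following hypothetical clause combines a per-capita threshold with an incentive share retained by the winner; the function name, the dollar figures, and the 10% share are all invented for illustration:

```python
def windfall_split(winner_profit: float,
                   population: int,
                   per_capita_threshold: float,
                   incentive_share: float = 0.10):
    """Hypothetical windfall clause: profits above a per-capita threshold are
    shared, except for an incentive fraction retained by the winner."""
    threshold = per_capita_threshold * population
    overshoot = max(0.0, winner_profit - threshold)
    winner_keeps = min(winner_profit, threshold) + incentive_share * overshoot
    per_person = (1 - incentive_share) * overshoot / population
    return winner_keeps, per_person

# Illustrative numbers only: $10 trillion in profit, 8 billion people,
# a $100-per-capita threshold, winner retaining 10% of the overshoot.
keeps, share = windfall_split(1e13, 8_000_000_000, 100.0)
print(f"winner keeps ${keeps:,.0f}; each person receives ${share:,.2f}")
```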

CHAPTER 15: CRUNCH TIME
 

1. Some research is worthwhile not because of what it discovers but for other reasons, such as by entertaining, educating, accrediting, or uplifting those who engage in it.

2. I am not suggesting that nobody should work on pure mathematics or philosophy. I am also not suggesting that these endeavors are especially wasteful compared to all the other dissipations of academia or society at large. It is probably very good that some people can devote themselves to the life of the mind and follow their intellectual curiosity wherever it leads, independent of any thought of utility or impact. The suggestion is that at the margin, some of the best minds might, upon realizing that their cognitive performances may become obsolete in the foreseeable future, want to shift their attention to those theoretical problems for which it makes a difference whether we get the solution a little sooner.

3. Though one should be cautious in cases where this uncertainty may be protective—recall, for instance, the risk-race model in Box 13, where we found that additional strategic information could be harmful. More generally, we need to worry about information hazards (see Bostrom [2011a]). It is tempting to say that we need more analysis of information hazards. This is probably true, although we might still worry that such analysis itself may produce dangerous information.

4. Cf. Bostrom (2007).

5. I am grateful to Carl Shulman for emphasizing this point.

BIBLIOGRAPHY
 

Acemoglu, Daron. 2003. “Labor- and Capital-Augmenting Technical Change.” Journal of the European Economic Association 1 (1): 1–37.

Albertson, D. G., and Thomson, J. N. 1976. “The Pharynx of Caenorhabditis Elegans.” Philosophical Transactions of the Royal Society B: Biological Sciences 275 (938): 299–325.

Allen, Robert C. 2008. “A Review of Gregory Clark’s A Farewell to Alms: A Brief Economic History of the World.” Journal of Economic Literature 46 (4): 946–73.

American Horse Council. 2005. “National Economic Impact of the US Horse Industry.” Retrieved July 30, 2013. Available at http://www.horsecouncil.org/national-economic-impact-us-horse-industry.

Anand, Paul, Pattanaik, Prasanta, and Puppe, Clemens, eds. 2009. The Oxford Handbook of Rational and Social Choice. New York: Oxford University Press.

Andres, B., Koethe, U., Kroeger, T., Helmstaedter, M., Briggman, K. L., Denk, W., and Hamprecht, F. A. 2012. “3D Segmentation of SBFSEM Images of Neuropil by a Graphical Model over Supervoxel Boundaries.” Medical Image Analysis 16 (4): 796–805.

Armstrong, Alex. 2012. “Computer Competes in Crossword Tournament.” I Programmer, March 19.

Armstrong, Stuart. 2007. “Chaining God: A Qualitative Approach to AI, Trust and Moral Systems.” Unpublished manuscript, October 20. Retrieved December 31, 2012. Available at http://www.neweuropeancentury.org/GodAI.pdf.

Armstrong, Stuart. 2010. Utility Indifference, Technical Report 2010-1. Oxford: Future of Humanity Institute, University of Oxford.

Armstrong, Stuart. 2013. “General Purpose Intelligence: Arguing the Orthogonality Thesis.” Analysis and Metaphysics 12: 68–84.

Armstrong, Stuart, and Sandberg, Anders. 2013. “Eternity in Six Hours: Intergalactic Spreading of Intelligent Life and Sharpening the Fermi Paradox.” Acta Astronautica 89: 1–13.

Armstrong, Stuart, and Sotala, Kaj. 2012. “How We’re Predicting AI—or Failing To.” In Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster, 52–75. Pilsen: University of West Bohemia. Retrieved February 2, 2013.

Asimov, Isaac. 1942. “Runaround.” Astounding Science-Fiction, March, 94–103.

Asimov, Isaac. 1985. Robots and Empire. New York: Doubleday.

Aumann, Robert J. 1976. “Agreeing to Disagree.” Annals of Statistics 4 (6): 1236–9.

Averch, Harvey Allen. 1985. A Strategic Analysis of Science and Technology Policy. Baltimore: Johns Hopkins University Press.

Azevedo, F. A. C., Carvalho, L. R. B., Grinberg, L. T., Farfel, J. M., Ferretti, R. E. L., Leite, R. E. P., Jacob, W., Lent, R., and Herculano-Houzel, S. 2009. “Equal Numbers of Neuronal and Nonneuronal Cells Make the Human Brain an Isometrically Scaled-up Primate Brain.” Journal of Comparative Neurology 513 (5): 532–41.

Baars, Bernard J. 1997. In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press.

Baratta, Joseph Preston. 2004. The Politics of World Federation: United Nations, UN Reform, Atomic Control. Westport, CT: Praeger.

Barber, E. J. W. 1991. Prehistoric Textiles: The Development of Cloth in the Neolithic and Bronze Ages with Special Reference to the Aegean. Princeton, NJ: Princeton University Press.

Bartels, J., Andreasen, D., Ehirim, P., Mao, H., Seibert, S., Wright, E. J., and Kennedy, P. 2008. “Neurotrophic Electrode: Method of Assembly and Implantation into Human Motor Speech Cortex.” Journal of Neuroscience Methods 174 (2): 168–76.

Bartz, Jennifer A., Zaki, Jamil, Bolger, Niall, and Ochsner, Kevin N. 2011. “Social Effects of Oxytocin in Humans: Context and Person Matter.” Trends in Cognitive Science 15 (7): 301–9.

Basten, Stuart, Lutz, Wolfgang, and Scherbov, Sergei. 2013. “Very Long Range Global Population Scenarios to 2300 and the Implications of Sustained Low Fertility.” Demographic Research 28: 1145–66.

Baum, Eric B. 2004. What Is Thought? Bradford Books. Cambridge, MA: MIT Press.

Baum, Seth D., Goertzel, Ben, and Goertzel, Ted G. 2011. “How Long Until Human-Level AI? Results from an Expert Assessment.” Technological Forecasting and Social Change 78 (1): 185–95.

Beal, J., and Winston, P. 2009. “Guest Editors’ Introduction: The New Frontier of Human-Level Artificial Intelligence.” IEEE Intelligent Systems 24 (4): 21–3.

Bell, C. Gordon, and Gemmell, Jim. 2009. Total Recall: How the E-Memory Revolution Will Change Everything. New York: Dutton.

Benyamin, B., Pourcain, B. St., Davis, O. S., Davies, G., Hansell, M. K., Brion, M.-J. A., Kirkpatrick, R. M., et al. 2013. “Childhood Intelligence is Heritable, Highly Polygenic and Associated With FNBP1L.” Molecular Psychiatry (January 23).

Berg, Joyce E., and Rietz, Thomas A. 2003. “Prediction Markets as Decision Support Systems.” Information Systems Frontiers 5 (1): 79–93.

Berger, Theodore W., Chapin, J. K., Gerhardt, G. A., Soussou, W. V., Taylor, D. M., and Tresco, P. A., eds. 2008. Brain–Computer Interfaces: An International Assessment of Research and Development Trends. Springer.

Berger, T. W., Song, D., Chan, R. H., Marmarelis, V. Z., LaCoss, J., Wills, J., Hampson, R. E., Deadwyler, S. A., and Granacki, J. J. 2012. “A Hippocampal Cognitive Prosthesis: Multi-Input, Multi-Output Nonlinear Modeling and VLSI Implementation.” IEEE Transactions on Neural Systems and Rehabilitation Engineering 20 (2): 198–211.

Berliner, Hans J. 1980a. “Backgammon Computer-Program Beats World Champion.” Artificial Intelligence 14 (2): 205–220.

Berliner, Hans J. 1980b. “Backgammon Program Beats World Champ.” SIGART Newsletter 69: 6–9.

Bernardo, José M., and Smith, Adrian F. M. 1994. Bayesian Theory, 1st ed. Wiley Series in Probability & Statistics. New York: Wiley.

Birbaumer, N., Murguialday, A. R., and Cohen, L. 2008. “Brain–Computer Interface in Paralysis.” Current Opinion in Neurology 21 (6): 634–8.

Bird, Jon, and Layzell, Paul. 2002. “The Evolved Radio and Its Implications for Modelling the Evolution of Novel Sensors.” In Proceedings of the 2002 Congress on Evolutionary Computation, 2: 1836–41.

Blair, Clay, Jr. 1957. “Passing of a Great Mind: John von Neumann, a Brilliant, Jovial Mathematician, was a Prodigious Servant of Science and His Country.” Life, February 25, 89–104.

Bobrow, Daniel G. 1968. “Natural Language Input for a Computer Problem Solving System.” In Semantic Information Processing, edited by Marvin Minsky, 146–227. Cambridge, MA: MIT Press.

Bostrom, Nick. 1997. “Predictions from Philosophy? How Philosophers Could Make Themselves Useful.” Unpublished manuscript. Last revised September 19, 1998.

Bostrom, Nick. 2002a. Anthropic Bias: Observation Selection Effects in Science and Philosophy. New York: Routledge.

Bostrom, Nick. 2002b. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.” Journal of Evolution and Technology 9.

Bostrom, Nick. 2003a. “Are We Living in a Computer Simulation?” Philosophical Quarterly 53 (211): 243–55.

Bostrom, Nick. 2003b. “Astronomical Waste: The Opportunity Cost of Delayed Technological Development.” Utilitas 15 (3): 308–314.

Bostrom, Nick. 2003c. “Ethical Issues in Advanced Artificial Intelligence.” In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 2: 12–17. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.

Bostrom, Nick. 2004. “The Future of Human Evolution.” In Two Hundred Years After Kant, Fifty Years After Turing, edited by Charles Tandy, 2: 339–371. Death and Anti-Death. Palo Alto, CA: Ria University Press.

Bostrom, Nick. 2006a. “How Long Before Superintelligence?” Linguistic and Philosophical Investigations 5 (1): 11–30.

Bostrom, Nick. 2006b. “Quantity of Experience: Brain-Duplication and Degrees of Consciousness.” Minds and Machines 16 (2): 185–200.

Bostrom, Nick. 2006c. “What is a Singleton?” Linguistic and Philosophical Investigations 5 (2): 48–54.

Bostrom, Nick. 2007. “Technological Revolutions: Ethics and Policy in the Dark.” In Nanoscale: Issues and Perspectives for the Nano Century, edited by Nigel M. de S. Cameron and M. Ellen Mitchell, 129–52. Hoboken, NJ: Wiley.

Bostrom, Nick. 2008a. “Where Are They? Why I Hope the Search for Extraterrestrial Life Finds Nothing.” MIT Technology Review, May/June issue, 72–7.

Bostrom, Nick. 2008b. “Why I Want to Be a Posthuman When I Grow Up.” In Medical Enhancement and Posthumanity, edited by Bert Gordijn and Ruth Chadwick, 107–37. New York: Springer.

Bostrom, Nick. 2008c. “Letter from Utopia.” Studies in Ethics, Law, and Technology 2 (1): 1–7.

Bostrom, Nick. 2009a. “Moral Uncertainty – Towards a Solution?” Overcoming Bias (blog), January 1.

Bostrom, Nick. 2009b. “Pascal’s Mugging.” Analysis 69 (3): 443–5.

Bostrom, Nick. 2009c. “The Future of Humanity.” In New Waves in Philosophy of Technology, edited by Jan Kyrre Berg Olsen, Evan Selinger, and Søren Riis, 186–215. New York: Palgrave Macmillan.

Bostrom, Nick. 2011a. “Information Hazards: A Typology of Potential Harms from Knowledge.” Review of Contemporary Philosophy 10: 44–79.

Bostrom, Nick. 2011b. “Infinite Ethics.” Analysis and Metaphysics 10: 9–59.

Bostrom, Nick. 2012. “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” In “Theory and Philosophy of AI,” edited by Vincent C. Müller, special issue, Minds and Machines 22 (2): 71–85.

Bostrom, Nick, and Ćirković, Milan M. 2003. “The Doomsday Argument and the Self-Indication Assumption: Reply to Olum.” Philosophical Quarterly 53 (210): 83–91.

Bostrom, Nick, and Ord, Toby. 2006. “The Reversal Test: Eliminating the Status Quo Bias in Applied Ethics.” Ethics 116 (4): 656–79.

Bostrom, Nick, and Roache, Rebecca. 2011. “Smart Policy: Cognitive Enhancement and the Public Interest.” In Enhancing Human Capacities, edited by Julian Savulescu, Ruud ter Meulen, and Guy Kahane, 138–49. Malden, MA: Wiley-Blackwell.

Bostrom, Nick, and Sandberg, Anders. 2009a. “Cognitive Enhancement: Methods, Ethics, Regulatory Challenges.” Science and Engineering Ethics 15 (3): 311–41.

Bostrom, Nick, and Sandberg, Anders. 2009b. “The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement.” In Human Enhancement, 1st ed., edited by Julian Savulescu and Nick Bostrom, 375–416. New York: Oxford University Press.

Bostrom, Nick, Sandberg, Anders, and Douglas, Tom. 2013. “The Unilateralist’s Curse: The Case for a Principle of Conformity.” Working Paper. Retrieved February 28, 2013. Available at http://www.nickbostrom.com/papers/unilateralist.pdf.

Bostrom, Nick, and Yudkowsky, Eliezer. Forthcoming. “The Ethics of Artificial Intelligence.” In Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William Ramsey. New York: Cambridge University Press.

Boswell, James. 1917. Boswell’s Life of Johnson. New York: Oxford University Press.

Bouchard, T. J. 2004. “Genetic Influence on Human Psychological Traits: A Survey.” Current Directions in Psychological Science 13 (4): 148–51.

Bourget, David, and Chalmers, David. 2009. “The PhilPapers Surveys.” November. Available at http://philpapers.org/surveys/.

Bradbury, Robert J. 1999. “Matrioshka Brains.” Archived version. As revised August 16, 2004. Available at http://web.archive.org/web/20090615040912/http://www.aeiveos.com/~bradbury/MatrioshkaBrains/MatrioshkaBrainsPaper.html.

Brinton, Crane. 1965. The Anatomy of Revolution. Revised ed. New York: Vintage Books.

Bryson, Arthur E., Jr., and Ho, Yu-Chi. 1969. Applied Optimal Control: Optimization, Estimation, and Control. Waltham, MA: Blaisdell.

Buehler, Martin, Iagnemma, Karl, and Singh, Sanjiv, eds. 2009. The DARPA Urban Challenge: Autonomous Vehicles in City Traffic. Springer Tracts in Advanced Robotics 56. Berlin: Springer.

Burch-Brown, J. 2014. “Clues for Consequentialists.” Utilitas 26 (1): 105–19.

Burke, Colin. 2001. “Agnes Meyer Driscoll vs. the Enigma and the Bombe.” Unpublished manuscript. Retrieved February 22, 2013. Available at http://userpages.umbc.edu/~burke/driscoll1-2011.pdf.

Canbäck, S., Samouel, P., and Price, D. 2006. “Do Diseconomies of Scale Impact Firm Size and Performance? A Theoretical and Empirical Overview.” Journal of Managerial Economics 4 (1): 27–70.

Carmena, J. M., Lebedev, M. A., Crist, R. E., O’Doherty, J. E., Santucci, D. M., Dimitrov, D. F., Patil, P. G., Henriquez, C. S., and Nicolelis, M. A. 2003. “Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates.” Public Library of Science Biology 1 (2): 193–208.

Carroll, Bradley W., and Ostlie, Dale A. 2007. An Introduction to Modern Astrophysics. 2nd ed. San Francisco: Pearson Addison Wesley.

Carroll, John B. 1993. Human Cognitive Abilities: A Survey of Factor-Analytic Studies. New York: Cambridge University Press.

Carter, Brandon. 1983. “The Anthropic Principle and its Implications for Biological Evolution.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 310 (1512): 347–63.
