Category Archives: Nick Bostrom

Nick Bostrom

Suppose we were convinced that the (by far) most likely scenario involving infinite values goes something like follows: One day our descendants discover some new physics which lets them develop a technology that makes it possible to create an infinite number of people in what otherwise would have been a finite cosmos. If our current behavior has some probabilistic effect, however slim, on how our descendants will act, we would then (according to EDR) have a reason to act in such a way as to maximize the probability that we will have descendants who will develop such infinite powers and use them for good ends. It is not obvious which courses of action would have this property. But it seems plausible that they would fall within the range acceptable to common sense morality. For instance, it seems more likely that ending world hunger would increase, and that gratuitous genocide would decrease, the probability that the human species will survive to develop infinitely powerful technologies and use them for good rather than evil ends, than that the opposite should be true. More generally, working towards a morally decent society, as traditionally understood, would appear to be a good way to promote the eventual technological realization of infinite goods.

Nick Bostrom, ‘Infinite Ethics’, Analysis and Metaphysics, vol. 10, p. 40

Nick Bostrom

what hangs in the balance is at least 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 human lives (though the true number is probably larger). If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.

Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford, 2014, p. 103
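A rough back-of-the-envelope check of the ocean-filling arithmetic, assuming a teardrop volume of about 0.05 mL, an ocean volume of roughly 1.335 billion cubic kilometres, and one ocean-fill per second:

# Sanity check of the "teardrop of joy" image, under the assumptions stated above.
lives = 10**58                                    # lower bound quoted in the passage
teardrop_litres = 5e-5                            # assumed ~0.05 mL per teardrop
ocean_litres = 1.335e9 * 1e12                     # ~1.335e9 km^3 of ocean, converted to litres
drops_per_fill = ocean_litres / teardrop_litres   # teardrops needed to fill the oceans once
seconds = 100 * 10**9 * 10**9 * 1000 * 3.156e7    # a hundred billion billion millennia, in seconds
fills = lives / drops_per_fill                    # ocean-fills the quoted number of lives would supply
print(f"possible fills per second: {fills / seconds:.0f}")   # ~100, comfortably above one per second

Under these assumptions the quoted figure supports on the order of a hundred fills per second over the stated period, so the image in the passage is, if anything, conservative.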

Nick Bostrom

[C]onsider the convention against the use of ad hominem arguments in science and many other arenas of disciplined discussion. The nominal justification for this rule is that the validity of a scientific claim is independent of the personal attributes of the person or the group who puts it forward. Construed as a narrow point about logic, this comment about ad hominem arguments is obviously correct. But it overlooks the epistemic significance of heuristics that rely on information about how something was said and by whom in order to evaluate the credibility of a statement. In reality, no scientist adopts or rejects scientific assertions solely on the basis of an independent examination of the primary evidence. Cumulative scientific progress is possible only because scientists take on trust statements made by other scientists—statements encountered in textbooks, journal articles, and informal conversations around the coffee machine. In deciding whether to trust such statements, an assessment has to be made of the reliability of the source. Clues about source reliability come in many forms—including information about factors, such as funding sources, peer esteem, academic affiliation, career incentives, and personal attributes, such as honesty, expertise, cognitive ability, and possible ideological biases. Taking that kind of information into account when evaluating the plausibility of a scientific hypothesis need involve no error of logic.

Why is it, then, that restrictions on the use of the ad hominem command such wide support? Why should arguments that highlight potentially relevant information be singled out for suspicion? I would suggest that this is because experience has demonstrated the potential for abuse. For reasons that may have to do with human psychology, discourses that tolerate the unrestricted use of ad hominem arguments manifest an enhanced tendency to degenerate into personal feuds in which the spirit of collaborative, reasoned inquiry is quickly extinguished. Ad hominem arguments bring out our inner Neanderthal.

Nick Bostrom, ‘Technological Revolutions: Ethics and Policy in the Dark’, in Nigel Cameron & Ellen Mitchell (eds.), Nanoscale: Issues and Perspectives for the Nano Century, Hoboken, New Jersey, 2007, pp. 141-142

Nick Bostrom

Consider an AI that has hedonism as its final goal, and which would therefore like to tile the universe with “hedonium” (matter organized in a configuration that is optimal for the generation of pleasurable experience). To this end, the AI might produce computronium (matter organized in a configuration that is optimal for computation) and use it to implement digital minds in states of euphoria. In order to maximize efficiency, the AI omits from the implementation any mental faculties that are not essential for the experience of pleasure, and exploits any computational shortcuts that according to its definition of pleasure do not vitiate the generation of pleasure. For instance, the AI might confine its simulation to reward circuitry, eliding faculties such as memory, sensory perception, executive function, and language; it might simulate minds at a relatively coarse-grained level of functionality, omitting lower-level neuronal processes; it might replace commonly repeated computations with calls to a lookup table; or it might put in place some arrangement whereby multiple minds would share most parts of their underlying computational machinery (their “supervenience bases” in philosophical parlance). Such tricks could greatly increase the quantity of pleasure producible with a given amount of resources.

Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford, 2014, p. 140

Nick Bostrom

Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.

Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford, 2014, p. 53

Nick Bostrom

With machine intelligence and other technologies such as advanced nanotechnology, space colonization should become economical. Such technology would enable us to construct “von Neumann probes” – machines with the capability of traveling to a planet, building a manufacturing base there, and launching multiple new probes to colonize other stars and planets. A space colonization race could ensue. Over time, the resources of the entire accessible universe might be turned into some kind of infrastructure, perhaps an optimal computing substrate (“computronium”). Viewed from the outside, this process might take a very simple and predictable form – a sphere of technological structure, centered on its Earthly origin, expanding uniformly in all directions at some significant fraction of the speed of light. What happens on the “inside” of this structure – what kinds of lives and experiences (if any) it would sustain – would depend on initial conditions and the dynamics shaping its temporal evolution. It is conceivable, therefore, that the choices we make in this century could have extensive consequences.

Nick Bostrom, ‘The Future of Humanity’, in J. K. B. Olsen, S. A. Pedersen & V. F. Hendricks (eds.), A Companion to the Philosophy of Technology, Oxford, 2009, pp. 555-556

Nick Bostrom and Rebecca Roache

The [current] system [of licensing medicines] was created to deal with traditional medicine, which aims to prevent, detect, cure, or mitigate diseases. In this framework, there is no room for enhancing medicine. For example, drug companies could find it difficult to get regulatory approval for a pharmaceutical whose sole use is to improve cognitive functioning in the healthy population. To date, every pharmaceutical on the market that offers some potential cognitive enhancement effect was developed to treat some specific disease condition (such as ADHD, narcolepsy and Alzheimer’s disease). The enhancing effects of these drugs in healthy subjects are a serendipitous unintended effect. As a result, pharmaceutical companies, instead of aiming directly at enhancements for healthy people, must work indirectly by demonstrating that their drugs are effective in treating some recognised disease. One perverse effect of this incentive structure is the medicalization and “pathologization” of conditions that were previously regarded as part of the normal human spectrum. If a significant fraction of the population could obtain certain benefits from drugs that improve concentration, for example, it is currently necessary to categorize this segment of people as having some disease in order for the drug to be approved and prescribed to those who could benefit from it. It is not enough that people would like to be able to concentrate better when they work; they must be stamped as suffering from attention-deficit hyperactivity disorder: a condition now estimated to affect between 3 and 5 percent of school-age children (a higher proportion among boys) in the US. This medicalization of arguably normal human characteristics not only stigmatizes enhancers, it also limits access to enhancing treatments; unless people are diagnosed with a condition whose treatment requires a certain enhancing drug, those who wish to use the drug for its enhancing effects are reliant on finding a sympathetic physician willing to prescribe it (or finding other means of procurement). This creates inequities in access, since those with high social capital and the relevant information are more likely to gain access to enhancement than others.

Nick Bostrom and Rebecca Roache, ‘Ethical Issues in Human Enhancement’, in Jesper Ryberg, Thomas S. Petersen, and Clark Wolf (eds.), New Waves in Applied Ethics, New York, 2007