Tag Archives: existential risk

Michael Huemer

This is how our species is going to die. Not necessarily from nuclear war specifically, but from ignoring existential risks that don’t appear imminent at this moment. If we keep doing that, eventually, something is going to kill us – something that looked improbable in advance, but that, by the time it looks imminent, is too late to stop.

Michael Huemer, The Case for Tyranny, Fake Nous, July 11, 2020

Hillary Clinton

Technologists like Elon Musk, Sam Altman, and Bill Gates, and physicists like Stephen Hawking have warned that artificial intelligence could one day pose an existential security threat. Musk has called it “the greatest risk we face as a civilization.” Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I’d start talking about “the rise of the robots” in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.

Hillary Clinton, What Happened, New York, 2017, p. 241

I. J. Good

Once a machine is designed that is good enough, […] it can be put to work designing an even better machine. At this point an “explosion” will clearly occur; all the problems of science and technology will be handed over to machines and it will no longer be necessary for people to work. Whether this will lead to a Utopia or to the extermination of the human race will depend on how the problem is handled by the machines. The important thing will be to give them the aim of serving human beings.

I. J. Good, “Speculations on Perceptrons and Other Automata”, IBM Research Lecture, RC-115 (1959), p. 17

Peter Singer

[T]he evolution of superior intelligence in humans was bad for chimpanzees, but it was good for humans. Whether it was good or bad “from the point of view of the universe” is debatable, but if human life is sufficiently positive to offset the suffering we have inflicted on animals, and if we can be hopeful that in future life will get better both for humans and for animals, then perhaps it will turn out to have been good. Remember Bostrom’s definition of existential risk, which refers to the annihilation not of human beings, but of “Earth-originating intelligent life.” The replacement of our species by some other form of conscious intelligent life is not in itself, impartially considered, catastrophic. Even if the intelligent machines kill all existing humans, that would be, as we have seen, a very small part of the loss of value that Parfit and Bostrom believe would be brought about by the extinction of Earth-originating intelligent life. The risk posed by the development of AI, therefore, is not so much whether it is friendly to us, but whether it is friendly to the idea of promoting wellbeing in general, for all sentient beings it encounters, itself included.

Peter Singer, The Most Good You Can Do: How Effective Altruism Is Changing Ideas about Living Ethically, New Haven, 2015, p. 176