Tag Archives: human extinction

Andrej Karpathy

For thousands of years man’s capacity to destroy was limited to spears, arrows and fire. 120 years ago we learned to release chemical energy (e.g. TNT), and 70 years ago we learned to be 100 million times+ more efficient by harnessing the nuclear strong force energy with atomic weapons, first through fission and then fusion. We’ve also miniaturized these brilliant inventions and learned to mount them on ICBMs traveling at Mach 20. Unfortunately, we live in a universe where the laws of physics feature a strong asymmetry in how difficult it is to create and to destroy. This observation is also not reserved to nuclear weapons—more generally, technology monotonically increases the possible destructive damage per person per dollar. This is my favorite resolution to the Fermi paradox.

Andrej Karpathy, Review of Richard Rhodes’s The Making of the Atomic Bomb, Goodreads, December 13, 2016

Thomas Schelling

There was a time, shortly after the first atomic bomb was exploded, when there was some journalistic speculation about whether the earth’s atmosphere had a limited tolerance to nuclear fission; the idea was bruited about that a mighty chain reaction might destroy the earth’s atmosphere when some critical number of bombs had already been exploded. Someone proposed that, if this were true and if we could calculate with accuracy that critical level of tolerance, we might neutralize atomic weapons for all time by a deliberate program of openly and dramatically exploding n – 1 bombs.

Thomas Schelling, The Strategy of Conflict, Cambridge, Massachusetts, 1960, p. 138

Robert Kennedy

The possibility of the destruction of mankind was always in his mind. Someone once said that World War Three would be fought with atomic weapons and the next war with sticks and stones.

As mentioned before, Barbara Tuchman’s The Guns of August had made a great impression on the President. “I am not going to follow a course which will allow anyone to write a comparable book about this time, The Missiles of October,” he said to me that Saturday night, October 26. “If anybody is around to write after this, they are going to understand that we made every effort to find peace and every effort to give our adversary room to move. I am not going to push the Russians an inch beyond what is necessary.”

After it was finished, he made no statement attempting to take credit for himself or for the Administration for what had occurred. He instructed all members of the Ex Comm and government that no interview should be given, no statement made, which would claim any kind of victory. He respected Khrushchev for properly determining what was in his own country’s interest and what was in the interest of mankind. If it was a triumph, it was a triumph for the next generation and not for any particular government or people.

At the outbreak of the First World War the ex-Chancellor of Germany, Prince von Bülow, said to his successor, “How did it all happen?” “Ah, if only we knew,” was the reply.

Robert Kennedy, Thirteen Days: A Memoir of the Cuban Missile Crisis, New York, 1969, pp. 127-128

Jeff McMahan

The main reason for thinking that nuclear war would be worse than Soviet domination where future generations are concerned is that nuclear war could lead to the extinction of the human race, and it is considerably more important to ensure that future generations will exist than to ensure that, if they exist, they will not exist under Soviet domination.

Jeff McMahan, ‘Nuclear Deterrence and Future Generations’, in Avner Cohen & Steven Lee (eds.), Nuclear Weapons and the Future of Humanity: The Fundamental Questions, Totowa, New Jersey, 1986, p. 331

Richard Gott

The odds are against our colonizing the Galaxy and surviving to the far future, not because these things are intrinsically beyond our capabilities, but because living things usually do not live up to their maximum potential. Intelligence is a capability which gives us in principle a vast potential if we could only use it to its maximum capacity, but […] to succeed the way we would like, we will have to do something truly remarkable (such as colonizing space), something which most intelligent species do not do.

Richard Gott, ‘Implications of the Copernican Principle for Our Future Prospects’, Nature, vol. 363, no. 6427 (May 27, 1993), p. 319

Jonathan Schell

Regarded objectively, as an episode in the development of life on earth, a nuclear holocaust that brought about the extinction of mankind and other species by mutilating the ecosphere would constitute an evolutionary setback of possibly limited extent—the first to result from a deliberate action taken by the creature extinguished but perhaps no greater than any of several evolutionary setbacks, such as the extinction of the dinosaurs, of which the geological record offers evidence. […] However, regarded subjectively, from within human life, where we are all actually situated, and as something that would happen to us, human extinction assumes awesome, inapprehensible proportions. It is of the essence of the human condition that we are born, live for a while, and then die. Through mishaps of all kinds, we may also suffer untimely death, and in extinction by nuclear arms the number of untimely deaths would reach the limit for any one catastrophe: everyone in the world would die. But although the untimely death of everyone in the world would in itself constitute an unimaginably huge loss, it would bring with it a separate, distinct loss that would be in a sense even huger—the cancellation of all future generations of human beings. According to the Bible, when Adam and Eve ate the fruit of the tree of knowledge God punished them by withdrawing from them the privilege of immortality and dooming them and their kind to die. Now our species has eaten more deeply of the fruit of the tree of knowledge, and has brought itself face to face with a second death—the death of mankind. In doing so, we have caused a basic change in the circumstances in which life was given to us, which is to say that we have altered the human condition. The distinctiveness of this second death from the deaths of all the people on earth can be illustrated by picturing two different global catastrophes. In the first, let us suppose that most of the people on earth were killed in a nuclear holocaust but that a few million survived and the earth happened to remain habitable by human beings. In this catastrophe, billions of people would perish, but the species would survive, and perhaps one day would even repopulate the earth in its former numbers. But now let us suppose that a substance was released into the environment which had the effect of sterilizing all the people in the world but otherwise leaving them unharmed. Then, as the existing population died off, the world would empty of people, until no one was left. Not one life would have been shortened by a single day, but the species would die. In extinction by nuclear arms, the death of the species and the death of all the people in the world would happen together, but it is important to make a clear distinction between the two losses; otherwise, the mind, overwhelmed by the thought of the deaths of the billions of living people, might stagger back without realizing that behind this already ungraspable loss there lies the separate loss of the future generations.

Jonathan Schell, The Fate of the Earth, New York, 1982, pp. 114-115

John Broome

If a catastrophe should really dominate our thinking, it will not be because of the people it kills. There will be other harms, of course. But the effect that seems the most potentially harmful is the huge number of people whose existence might be prevented by a catastrophe. If we become extinct within the next few thousand years, that will prevent the existence of tens of trillions of people, as a very conservative estimate. If those nonexistences are bad, then this is a consideration that might dominate our calculations of expected utility.

John Broome, ‘A Small Chance of Disaster’, European Review, vol. 21, no. S1 (July, 2013), p. 830

Jonathan Schell

[T]he mere risk of extinction has a significance that is categorically different from, and immeasurably greater than, that of any other risk, and as we make our decisions we have to take that significance into account. Up to now, every risk has been contained within the frame of life; extinction would shatter that frame. It represents not the defeat of some purpose but an abyss in which all human purposes would be drowned for all time. We have no right to place the possibility of this limitless, eternal defeat on the same footing as risks that we run in the ordinary conduct of our affairs in our particular transient moment of human history. To employ a mathematical analogy, we can say that although the risk of extinction may be fractional, the stake is, humanly speaking, infinite, and a fraction of infinity is still infinity. In other words, once we learn that a holocaust might lead to extinction we have no right to gamble, because if we lose, the game will be over, and neither we nor anyone else will ever get another chance. Therefore, although, scientifically speaking, there is all the difference in the world between the mere possibility that a holocaust will bring about extinction and the certainty of it, morally they are the same, and we have no choice but to address the issue of nuclear weapons as though we knew for a certainty that their use would put an end to our species.

Jonathan Schell, The Fate of the Earth, New York, 1982, p. 95

Derek Parfit

One thing that greatly matters is the failure of we rich people to prevent, as we so easily could, much of the suffering and many of the early deaths of the poorest people in the world. The money that we spend on an evening’s entertainment might instead save some poor person from death, blindness, or chronic and severe pain. If we believe that, in our treatment of these poorest people, we are not acting wrongly, we are like those who believed that they were justified in having slaves.

Some of us ask how much of our wealth we rich people ought to give to these poorest people. But that question wrongly assumes that our wealth is ours to give. This wealth is legally ours. But these poorest people have much stronger moral claims to some of this wealth. We ought to transfer to these people […] at least ten per cent of what we inherit or earn.

What now matters most is how we respond to various risks to the survival of humanity. We are creating some of these risks, and we are discovering how we could respond to these and other risks. If we reduce these risks, and humanity survives the next few centuries, our descendants or successors could end these risks by spreading through this galaxy.

Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea.

If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would have given us all, including those who suffered most, reasons to be glad that the Universe exists.

Derek Parfit, On What Matters, Volume Three, Oxford, 2017, pp. 436-437

Alyssa Vance

The main reason to focus on existential risk generally, and human extinction in particular, is that anything else about posthuman society can be modified by the posthumans (who will be far smarter and more knowledgeable than us) if desired, while extinction can obviously never be undone.

Alyssa Vance, comment on ‘A Proposed Adjustment to the Astronomical Waste Argument’, LessWrong, May 27, 2013

J. J. C. Smart

There have been great advances in the human condition due to science: recollect the horrors of childbirth, surgical operations, even of having a tooth out, a hundred years ago. If the human race is not extinguished there may be cures of cancer, senility, and other evils, so that happiness may outweigh unhappiness in the case of more and more individuals. Perhaps our far superior descendants of a million years hence (if they exist) will be possessed of a felicity unimaginable to us.

J. J. C. Smart, Ethics, Persuasion, and Truth, London, 1984, p. 141

Bertrand Russell

It is surprising and somewhat disappointing that movements aiming at the prevention of nuclear war are regarded throughout the West as Left-Wing movements or as inspired by some -ism which is repugnant to a majority of ordinary people. It is not in this way that opposition to nuclear warfare should be conceived. It should be conceived rather on the analogy of sanitary measures against epidemic.

Bertrand Russell, Common Sense and Nuclear Warfare, London, 1959, introduction

John Broome

Total and average utilitarianism are very different theories, and where they differ most is over extinction. If global warming extinguishes humanity, according to total utilitarianism, that would be an inconceivably bad disaster. The loss would be all the future wellbeing of all the people who would otherwise have lived. On the other hand, according to at least some versions of average utilitarianism, extinction might not be a very bad thing at all; it might not much affect the average wellbeing of the people who do live. So the difference between these theories makes a vast difference to the attitude we should take to global warming. According to total utilitarianism, although the chance of extinction is slight, the harm extinction would do is so enormous that it may well be the dominant consideration when we think about global warming. According to average utilitarianism, the chance of extinction may well be negligible.

John Broome, Counting the Cost of Global Warming, Cambridge, 1992, p. 121

Katarzyna de Lazari-Radek & Peter Singer

Even if we think the prior existence view is more plausible than the total view, we should recognize that we could be mistaken about this and therefore give some value to the life of a possible future being—let’s say, for example, 10 per cent of the value we give to the similar life of a presently existing being. The number of human beings who will come into existence only if we can avoid extinction is so huge that even with that relatively low value, reducing the risk of human extinction will often be a highly cost-effective strategy for maximizing utility, as long as we have some understanding of what will reduce that risk.

Katarzyna de Lazari-Radek & Peter Singer, The Point of View of the Universe: Sidgwick and Contemporary Ethics, Oxford, 2014, pp. 376-377

Derek Parfit

We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through this galaxy.

Derek Parfit, On What Matters, vol. 2, Oxford, 2011, p. 616

Tyler Cowen

If the time horizon is extremely short, the benefits of continued higher growth will be choked off and will tend to be small in nature. Even if we hold a deep concern for the distant future, perhaps there is no distant future to care about. To present this point in its starkest form, imagine that the world were set to end tomorrow. There would be little point in maximizing the growth rate, and arguably we should just throw a party and consume what we can. Even if we could boost growth in the interim hours, the payoff would be small and not very durable. The case for growth maximization therefore is stronger the longer the time horizon we consider.

Tyler Cowen, ‘Caring about the Distant Future: Why it Matters and What it Means’, University of Chicago Law Review, vol. 74, no. 1 (Winter, 2007), p. 28

Charles Darwin

With respect to immortality, nothing shows me how strong and almost instinctive a belief is, as the consideration of the view now held by most physicists, namely that the sun with all the planets will in time grow too cold for life, unless indeed some great body dashes into the sun and thus gives it fresh life.–Believing as I do that man in the distant future will be a far more perfect creature than he now is, it is an intolerable thought that he and all other sentient beings are doomed to complete annihilation after such long-continued slow progress. To those who fully admit the immortality of the human soul, the destruction of our world will not appear so dreadful.

Charles Darwin, The Autobiography of Charles Darwin, London, 1958, p. 92

Adolfo Bioy Casares

“What would you do if you knew for certain that the world was going to end on a particular day?”

“I wouldn’t say anything, because of the children,” Ramírez replied, “but I would leave a note on a slip of paper saying that on this date the world was ending, so that they would see that I knew.”

Adolfo Bioy Casares, ‘Tema del fin del mundo’, in Guirnalda con amores, Buenos Aires, 1959, p. 99

Carl Sagan

In the littered field of discredited self-congratulatory chauvinisms, there is only one that seems to hold up, one sense in which we are special: Due to our own actions or inactions, and the misuse of our technology, we live at an extraordinary moment, for the Earth at least—the first time that a species has become able to wipe itself out. But this is also, we may note, the first time that a species has become able to journey to the planets and the stars. The two times, brought about by the same technology, coincide—a few centuries in the history of a 4.5-billion-year-old planet. If you were somehow dropped down on the Earth randomly at any moment in the past (or future), the chance of arriving at this critical moment would be less than 1 in 10 million. Our leverage on the future is high just now.

Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space, New York, 1994, p. 305
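A minimal sketch of the arithmetic behind Sagan’s “1 in 10 million”: the critical window is taken here to be 300 years, an illustrative stand-in for his “a few centuries”, set against Earth’s 4.5-billion-year history.

```python
# Chance that a randomly chosen moment in Earth's history falls within
# the present critical window. The 300-year window is an illustrative
# stand-in for Sagan's "a few centuries"; the planet's age is his figure.
window_years = 300
earth_age_years = 4.5e9

print(window_years / earth_age_years)  # ~6.7e-8, i.e. less than 1 in 10 million
```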

Frank Tipler

[T]he death of Homo sapiens is an evil (beyond the death of the human individuals) only for a limited value system. What is humanly important is the fact that we think and feel, not the particular bodily form which clothes the human personality.

Frank Tipler, The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead, New York, 1995, p. 218

Richard Leakey and Roger Lewin

[T]he human mind is used to thinking in terms of decades or perhaps generations, not the hundreds of millions of years that is the time frame for life on Earth. Coming to grips with humanity in this context reveals at once our significance in Earth history, and our insignificance. There is a certainty about the future of humanity that cheats our mind’s comprehension: one day our species will be no more.

Richard Leakey and Roger Lewin, The Sixth Extinction: Patterns of Life and the Future of Humankind, New York, 1995, p. 224

Yew-Kwang Ng

[T]he real per capita income of the world now is about 7-8 times that of a century ago. If we proceed along an environmentally responsible path of growth, our great grandchildren in a century will have a real per capita income 5-6 times higher than our level now. Is it worth the risk of environmental disaster to disregard environmental protection now to try to grow a little faster? If this faster growth could be sustained, our great grandchildren would enjoy a real per capita income 7-8 times (instead of 5-6 times) higher than our level now. However, they may live in an environmentally horrible world or may well not have a chance to be born at all! The correct choice is obvious.

Yew-Kwang Ng, ‘Happiness Studies: Ways to Improve Comparability and Some Public Policy Implications’, The Economic Record, vol. 84, no. 265 (June, 2008), pp. 261-262
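A small sketch of the compounding implicit in Ng’s comparison; the century-long income multiples are his, while expressing them as annual growth rates is an illustration rather than anything stated in the paper.

```python
# Annual growth rate implied by a given real-income multiple over a century.
def annual_rate(multiple: float, years: int = 100) -> float:
    return multiple ** (1 / years) - 1

for multiple in (5, 6, 7, 8):
    print(f"{multiple}x over a century ~ {annual_rate(multiple):.2%} per year")
# 5-6x corresponds to roughly 1.6-1.8% a year; 7-8x to roughly 2.0-2.1%,
# so the faster path buys only a few tenths of a percentage point annually.
```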

Carl Sagan

Some have argued that the difference between the deaths of several hundred million people in a nuclear war (as has been thought until recently to be a reasonable upper limit) and the death of every person on Earth (as now seems possible) is only a matter of one order of magnitude. For me, the difference is considerably greater. Restricting our attention only to those who die as a consequence of the war conceals its full impact.

If we are required to calibrate extinction in numerical terms, I would be sure to include the number of people in future generations who would not be born. A nuclear war imperils all of our descendants, for as long as there will be humans. Even if the population remains static, with an average lifetime of the order of 100 years, over a typical time period for the biological evolution of a successful species (roughly ten million years), we are talking about some 500 trillion people yet to come. By this criterion, the stakes are one million times greater for extinction than for the more modest nuclear wars that kill “only” hundreds of millions of people.

There are many other possible measures of the potential loss—including culture and science, the evolutionary history of the planet, and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise.

Carl Sagan, ‘Nuclear War and Climatic Catastrophe: Some Policy Implications’, Foreign Affairs, vol. 62, no. 2 (Winter 1983), p. 275
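Sagan’s 500 trillion can be reconstructed from the figures he gives; the static population of five billion used below is an assumption (roughly the world population when he wrote), while the 100-year lifetime and ten-million-year species lifespan are his.

```python
# Rough reconstruction of Sagan's estimate of future people foreclosed
# by extinction. The 5 billion population is an assumed figure; the
# 100-year lifetime and 10-million-year species lifespan are Sagan's.
population = 5e9
lifetime_years = 100
species_span_years = 10e6

future_people = population * (species_span_years / lifetime_years)
print(f"{future_people:.0e}")   # 5e+14, i.e. some 500 trillion people yet to come
print(future_people / 5e8)      # ~1,000,000 times several hundred million deaths
```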

Richard Posner

“[T]he conversion of humans to more or less immortal near-gods” that David Friedman describe[s] as the upside of galloping twenty-first-century scientific advance […] seems rather a dubious plus, and certainly less of one than extinction would be a minus, especially since changing us into “near-gods” could be thought itself a form of extinction rather than a boon because of the discontinuity between a person and a near-god. We think of early hominids as having become extinct rather than as having become us.

Richard Posner, Catastrophe: Risk and Response, New York, 2004, pp. 148-149

Brooke Alan Trisel

[T]he things we have created will eventually vanish once human beings are no longer around to preserve them. However, achievements are events, not things, and events that have occurred cannot be undone or reversed. Therefore, it will continue to be true that our achievements occurred even if humanity ends. One disadvantage of having an unalterable past is that we cannot undo a wrongdoing that occurred. However, an unalterable past is also an advantage in that our achievements can never be undone, which may give some consolation to those who desire quasi-immortality.

Brooke Alan Trisel, ‘Human Extinction and the Value of Our Efforts’, The Philosophical Forum, vol. 35, no. 3 (Fall, 2004), p. 390

Eliezer Yudkowsky

The Spanish flu of 1918 killed 25-50 million people. World War II killed 60 million people; 10^7 is the order of the largest catastrophes in humanity’s written history. Substantially larger numbers, such as 500 million deaths, and especially qualitatively different scenarios such as the extinction of the entire human species, seem to trigger a different mode of thinking—enter into a ‘separate magisterium’. People who would never dream of hurting a child hear of an existential risk, and say, ‘Well, maybe the human species doesn’t really deserve to survive.’

Eliezer Yudkowsky, ‘Cognitive Biases Potentially Affecting Judgement of Global Risks’, in Nick Bostrom and Milan M. Ćirković (eds.), Global Catastrophic Risks, Oxford, 2008, p. 114

Bruce Tonn

A simple thought experiment suggests that humans are earth-life’s best bet. In this experiment there are three key factors: the probability that humans can avoid extinction and transcend oblivion; the probability that new intelligent life would re-evolve if humans became extinct; and the probability that a newly evolved intelligent species could avoid its own extinction and transcend oblivion, assuming there is enough time to do so. To favour extinction of humans, the product of the second and third probabilities must be greater than the first probability.

Bruce Tonn, ‘Futures Sustainability’, Futures, vol. 39, no. 9 (November, 2007), p. 1100
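Tonn’s comparison reduces to a single inequality; the probabilities below are illustrative placeholders, not estimates from the paper.

```python
# Tonn's three probabilities, with illustrative placeholder values.
p_humans_transcend = 0.10       # humans avoid extinction and transcend oblivion
p_reevolution = 0.05            # intelligent life re-evolves after human extinction
p_successor_transcends = 0.10   # that successor species itself transcends oblivion

# Human extinction is "favoured" only if the successor route is more likely.
favours_extinction = p_reevolution * p_successor_transcends > p_humans_transcend
print(favours_extinction)       # False for these placeholder values
```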

Martin Rees

The stupendous time spans of the evolutionary past are not part of common culture–except among some creationists and fundamentalists. But most educated people, even if they are fully aware that our emergence took billions of years, somehow think we humans are the culmination of the evolutionary tree. That is not so. Our Sun is less than half way through its life. It is slowly brightening, but Earth will remain habitable for another billion years. However, even in that cosmic perspective—extending far into the future as well as into the past—the twenty-first century may be a defining moment. It is the first in our planet’s history where one species—ours—has Earth’s future in its hands and could jeopardise not only itself but also life’s immense potential.

Martin Rees, ‘Foreword’, in Nick Bostrom and Milan M. Ćirković (eds.), Global Catastrophic Risks, Oxford, 2008, p. xi

Martin Rees

The first aquatic creatures crawled onto dry land in the Silurian era, more than three hundred million years ago. They may have been unprepossessing brutes, but had they been clobbered, the evolution of land-based fauna would have been jeopardised. Likewise, the post-human potential is so immense that not even the most misanthropic amongst us would countenance its being foreclosed by human actions.

Martin Rees, Our Final Hour: A Scientist’s Warning: How Terror, Error and Environmental Disaster Threaten Humankind’s Future in this Century—On Earth and Beyond, New York, 2003, p. 183

Noam Chomsky

There are two ways for Washington to respond to the threats engendered by its actions and startling proclamations. One way is to try to alleviate the threats by paying some attention to legitimate grievances, and by agreeing to become a civilized member of a world community, with some respect for world order and its institutions. The other way is to construct even more awesome engines of destruction and domination, so that any perceived challenge, however remote, can be crushed–provoking new and greater challenges. That way poses serious dangers to the people of the US and the world, and may, very possibly, lead to extinction of the species–not an idle speculation.

Noam Chomsky, ‘Deep Concerns’, ZNet, March 20, 2003