Computers routinely do mathematics that no unaided human can manage, outperform world champions in checkers and grand masters in chess, speak and understand English and other languages, write presentable short stories and musical compositions, learn from their mistakes, and competently pilot ships, airplanes, and spacecraft. Their abilities steadily improve. They’re getting smaller, faster, and cheaper. Each year, the tide of scientific advance laps a little further ashore on the island of human intellectual uniqueness with its embattled castaways. If, at so early a stage in our technological evolution, we have been able to go so far in creating intelligence out of silicon and metal, what will be possible in the following decades and centuries? What happens when smart machines are able to manufacture smarter machines?
Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space, New York, 1994, pp. 29-30
In the littered field of discredited self-congratulatory chauvinisms, there is only one that seems to hold up, one sense in which we are special: Due to our own actions or inactions, and the misuse of our technology, we live at an extraordinary moment, for the Earth at least—the first time that a species has become able to wipe itself out. But this is also, we may note, the first time that a species has become able to journey to the planets and the stars. The two times, brought about by the same technology, coincide—a few centuries in the history of a 4.5-billion-year-old planet. If you were somehow dropped down on the Earth randomly at any moment in the past (or future), the chance of arriving at this critical moment would be less than 1 in 10 million. Our leverage on the future is high just now.
Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space, New York, 1994, p. 305
It might be a familiar progression, transpiring on many worlds—a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, both on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.
Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space, New York, 1994, pp. 305-306
Some have argued that the difference between the deaths of several hundred million people in a nuclear war (as has been thought until recently to be a reasonable upper limit) and the death of every person on Earth (as now seems possible) is only a matter of one order of magnitude. For me, the difference is considerably greater. Restricting our attention only to those who die as a consequence of the war conceals its full impact.
If we are required to calibrate extinction in numerical terms, I would be sure to include the number of people in future generations who would not be born. A nuclear war imperils all of our descendants, for as long as there will be humans. Even if the population remains static, with an average lifetime of the order of 100 years, over a typical time period for the biological evolution of a successful species (roughly ten million years), we are talking about some 500 trillion people yet to come. By this criterion, the stakes are one million times greater for extinction than for the more modest nuclear wars that kill “only” hundreds of millions of people.
There are many other possible measures of the potential loss—including culture and science, the evolutionary history of the planet, and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise.
Carl Sagan, ‘Nuclear War and Climatic Catastrophe: Some Policy Implications’, Foreign Affairs, vol. 62, no. 2 (Winter 1983), p. 275
Humans—who enslave, castrate, experiment on, and fillet other animals—have had an understandable penchant for pretending that animals do not feel pain. On whether we should grant some modicum of rights to other animals, the philosopher Jeremy Bentham stressed that the question was not how smart they are, but how much torment they can feel. […] From all criteria available to us—the recognizable agony in the cries of wounded animals, for example, including those who usually utter hardly a sound—this question seems moot. The limbic system in the human brain, known to be responsible for much of the richness of our emotional life, is prominent throughout the mammals. The same drugs that alleviate suffering in humans mitigate the cries and other signs of pain in many other animals. It is unseemly of us, who often behave so unfeelingly toward other animals, to contend that only humans can suffer.
Carl Sagan & Ann Druyan, Shadows of Forgotten Ancestors, New York, 1992, pp. 371-372
It seems to me what is called for is an exquisite balance between two conflicting needs: the most skeptical scrutiny of all hypotheses that are served up to us and at the same time a great openness to new ideas. If you are only skeptical, then no new ideas make it through to you. On the other hand, if you are open to the point of gullibility and have not an ounce of skeptical sense in you, then you cannot distinguish useful ideas from the worthless ones. If all ideas have equal validity then you are lost, because then, it seems to me, no ideas have any validity at all.
Carl Sagan, ‘The Burden of Skepticism’, 1987
I try not to think with my gut. If I’m serious about understanding the world, thinking with anything besides my brain, as tempting as that might be, is likely to get me into trouble. Really, it’s okay to reserve judgment until the evidence is in.
Carl Sagan, The Demon-Haunted World: Science as a Candle in the Dark, London, 1996, p. 170
Unexpected discoveries are useful for calibrating pre-existing ideas. G. W. F. Hegel has had a very powerful imprint on professional philosophy of the nineteenth and early twentieth centuries and a profound influence on the future of the world because Karl Marx took him very seriously (although sympathetic critics have argued that Marx’s arguments would have been more compelling had he never heard of Hegel). In 1799 or 1800 Hegel confidently stated, using presumably the full armamentarium of philosophy available to him, that no new celestial objects could exist within the solar system. One year later, the asteroid Ceres was discovered. Hegel then seems to have returned to pursuits less amenable to disproof.
Carl Sagan, Broca’s Brain: Reflections on the Romance of Science, New York, 1979, p. 235