Category Archives: John Broome

John Broome

If a catastrophe should really dominate our thinking, it will not be because of the people it kills. There will be other harms, of course. But the effect that seems the most potentially harmful is the huge number of people whose existence might be prevented by a catastrophe. If we become extinct within the next few thousand years, that will prevent the existence of tens of trillions of people, as a very conservative estimate. If those nonexistences are bad, then this is a consideration that might dominate our calculations of expected utility.

John Broome, ‘A Small Chance of Disaster’, European Review, vol. 21, no. S1 (July, 2013), p. 830
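
A back-of-the-envelope sketch of the arithmetic behind this point. Every number below is an invented placeholder, not a figure from Broome's paper; the sketch only shows why a small probability multiplied by tens of trillions of prevented existences can swamp everything else in an expected-utility calculation.

    # Hypothetical numbers only: a small extinction probability multiplied by
    # "tens of trillions" of prevented existences dominates the calculation.
    p_extinction = 0.001              # assumed probability of extinction (placeholder)
    prevented_lives = 10 ** 13        # "tens of trillions", the conservative estimate
    deaths_in_catastrophe = 10 ** 9   # lives the catastrophe itself kills (placeholder)

    expected_prevented = p_extinction * prevented_lives     # 1e10 lives in expectation
    expected_deaths = p_extinction * deaths_in_catastrophe  # 1e6 lives in expectation
    print(expected_prevented / expected_deaths)             # 10000.0: the nonexistences dominate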

John Broome

Total and average utilitarianism are very different theories, and where they differ most is over extinction. If global warming extinguishes humanity, according to total utilitarianism, that would be an inconceivably bad disaster. The loss would be all the future wellbeing of all the people who would otherwise have lived. On the other hand, according to at least some versions of average utilitarianism, extinction might not be a very bad thing at all; it might not much affect the average wellbeing of the people who do live. So the difference between these theories makes a vast difference to the attitude we should take to global warming. According to total utilitarianism, although the chance of extinction is slight, the harm extinction would do is so enormous that it may well be the dominant consideration when we think about global warming. According to average utilitarianism, the chance of extinction may well be negligible.

John Broome, Counting the Cost of Global Warming, Cambridge, 1992, p. 121
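
A toy computation of the contrast Broome draws; the populations and wellbeing levels are invented for illustration, not taken from the book.

    # Toy numbers: 100 people alive now, 10,000 who would otherwise live later,
    # everyone at wellbeing 80 (all placeholder values).
    current = [80] * 100
    future = [80] * 10_000

    def total(*gens):
        return sum(sum(g) for g in gens)

    def average(*gens):
        people = [w for g in gens for w in g]
        return sum(people) / len(people)

    # Loss from extinction = value(no extinction) - value(extinction).
    print(total(current, future) - total(current))      # 800000: an enormous loss
    print(average(current, future) - average(current))  # 0.0: the average is untouched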

John Broome

[D]espite what our intuition tells us, changes in the world’s population are not generally neutral. They are either a good thing or a bad thing. But it is uncertain even what form a correct theory of the value of population would take. In the area of population, we are radically uncertain. We do not know what value to set on changes in the world’s population. If the population shrinks as a result of climate change, we do not know how to evaluate that change. Yet we have reason to think that changes in population may be one of the most morally significant effects of climate change. The small chance of catastrophe may be a major component in the expected value of harm caused by climate change, and the loss of population may be a major component of the badness of catastrophe.

How should we cope with this new, radical sort of uncertainty? Uncertainty was the subject of chapter 7. That chapter came up with a definitive answer: we should apply expected value theory. Is that not the right answer now? Sadly it is not, because our new sort of uncertainty is particularly intractable. In most cases of uncertainty about value, expected value theory simply cannot be applied.

When an event leads to uncertain results, expected value theory requires us first to assign a value to each of the possible results it may lead to. Then it requires us to calculate the weighted average value of the results, weighted by their probabilities. This gives us the event’s expected value, which we should use in our decision-making.

Now we are uncertain about how to value the results of an event, rather than about what the results will be. To keep things simple, let us set aside the ordinary sort of uncertainty by assuming that we know for sure what the results of the event will be. For instance, suppose we know that a catastrophe will have the effect of halving the world’s population. Our problem is that various different moral theories of value evaluate this effect differently. How might we try to apply expected value theory to this catastrophe?

We can start by evaluating the effect according to each of the different theories of value separately; there is no difficulty in principle there. We next need to assign probabilities to each of the theories; no doubt that will be difficult, but let us assume we can do it somehow. We then encounter the fundamental difficulty. Each different theory will value the change in population according to its own units of value, and those units may be incomparable with one another. Consequently, we cannot form a weighted average of them.

For example, one theory of value is total utilitarianism. This theory values the collapse of population as the loss of the total well-being that will result from it. Its unit of value is well-being. Another theory is average utilitarianism. It values the collapse of population as the change of average well-being that will result from it. Its unit of value is well-being per person. We cannot take a sensible average of some amount of well-being and some amount of well-being per person. It would be like trying to take an average of a distance, whose unit is kilometers, and a speed, whose unit is kilometers per hour. Most theories of value will be incomparable in this way. Expected value theory is therefore rarely able to help with uncertainty about value.

So we face a particularly intractable problem of uncertainty, which prevents us from working out what we should do. Yet we have to act; climate change will not wait while we sort ourselves out. What should we do, then, seeing as we do not know what we should do? This too is a question for moral philosophy.

Even the question is paradoxical: it is asking for an answer while at the same time acknowledging that no one knows the answer. How to pose the question correctly but unparadoxically is itself a problem for moral philosophy.

John Broome, Climate Matters: Ethics in a Warming World, New York, 2012
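
The two steps of the argument can be put as a small sketch. The outcome values, probabilities, and theory-by-theory losses below are all invented for illustration; the point is only that step 1 is well defined while step 2 is not.

    # Step 1: ordinary uncertainty. Expected value is the probability-weighted
    # average of the values of the possible results (invented numbers).
    outcomes = [(-1000.0, 0.01), (-10.0, 0.99)]   # (value, probability) pairs
    print(sum(v * p for v, p in outcomes))        # -19.9

    # Step 2: uncertainty about theories of value. Suppose we know a catastrophe
    # halves the population; the theories still disagree about what that is worth.
    loss_by_total_util = 4e12     # units: wellbeing (placeholder)
    loss_by_average_util = 0.0    # units: wellbeing PER PERSON (placeholder)
    # A probability-weighted average across the theories, e.g.
    #   0.5 * loss_by_total_util + 0.5 * loss_by_average_util,
    # adds wellbeing to wellbeing-per-person -- as meaningless as averaging
    # kilometers with kilometers per hour.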

John Broome

A few extra people now means some extra people in each generation through the future. There does not appear to be a stabilizing mechanism in human demography that, after some change, returns the population to what it would have been had the change not occurred.

John Broome, ‘Should We Value Population?’, The Journal of Political Philosophy, vol. 13, no. 4 (December, 2005), pp. 402-403
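
A minimal sketch of the demographic claim, under an assumed constant-fertility model; the model and all numbers are illustrative assumptions, not Broome's.

    # With constant fertility, a bump of a few extra people now propagates
    # through every later generation; nothing pulls the paths back together.
    def path(initial, fertility, generations):
        pops = [initial]
        for _ in range(generations):
            pops.append(pops[-1] * fertility)
        return pops

    without_bump = path(1_000_000, 0.98, 5)
    with_bump = path(1_000_005, 0.98, 5)   # a few extra people now
    print([round(b - a, 1) for a, b in zip(without_bump, with_bump)])
    # [5, 4.9, 4.8, 4.7, 4.6, 4.5]: some extra people in each generation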

John Broome

Prichard seems to have thought […] that the normativity of morality cannot be explained at all. But that does not follow. Even if there is no instrumental explanation of its normativity, there may be an explanation of some other sort. It would truly be unsatisfactory if there was no explanation at all. It would be a bad blow to philosophy to find there are inexplicable facts.

John Broome, ‘Reply to Southwood, Kearns and Star, and Cullity’, Ethics, vol. 119, no. 1 (October, 2008), p. 98

John Broome

Global warming is disconcerting in one respect. It seems inevitable that its consequences will be large, just because of the unprecedented size and speed of the temperature changes. Yet it is very hard to know just what these large consequences will be.

John Broome, Counting the Cost of Global Warming: A Report to the Economic and Social Research Council on Research, Cambridge, 1992, p. 9

John Broome

Epicurus’s hedonism actually implies that death normally harms you. Epicurus thinks it implies the opposite, but he is mistaken. He is right that there is no time when death harms you, but it does not follow that death does not harm you. It may harm you, even though it harms you at no time.

John Broome, ‘What Is Your Life Worth?’, Dædalus, vol. 137, no. 1 (January, 2008), p. 52

John Broome

[On Parfit’s view], the boundaries within lives are like the boundaries between lives. So we do not regard people as the morally significant units. This only means that if we are concerned with distribution at all, we shall be concerned with distribution between what are the morally significant units—namely, person-segments or whatever these divisions of a person are. So certainly the fact that a person has suffered more in the past will not make us give extra weight to relieving her suffering now. But if she is suffering more now, we may give extra weight to it. We may be concerned to equalize the distribution of good between person-segments. So all this argument does is remind us that we have changed the units of distribution. It does not suggest that we should be less interested in distribution between them.

John Broome, ‘Utilitarian Metaphysics?’, in Jon Elster and John Roemer (eds.), Interpersonal Comparisons of Well-Being, Cambridge, 1991, p. 94

John Broome

[Pain] is a bad thing in itself. It does not matter who experiences it, or where it comes in a life, or where in the course of a painful episode. Pain is bad; it should not happen. There should be as little pain as possible in the world, however it is distributed across people and across time.

John Broome, ‘More Pain or Less?’, Analysis, vol. 56, no. 2 (April, 1996), p. 117

John Broome

Until the 1970s, utilitarianism held a dominant position in the practical moral philosophy of the English-speaking world. Since that time, it has had a serious rival in contractualism, an ethical theory that was relaunched into modern thinking in 1971 by John Rawls’s Theory of Justice. There were even reports of utilitarianism’s imminent death. But utilitarianism is now in a vigorous and healthy state. It is responding to familiar objections. It is facing up to new problems such as the ethics of population. It has revitalized its foundation with new arguments. It has radically changed its conception of human wellbeing. It remains a credible moral theory.

John Broome, ‘Modern Utilitarianism’, in Peter Newman (ed.), The New Palgrave Dictionary of Economics and the Law, London, 1998, p. 656

John Broome

[W]e have no reason to trust anyone’s intuitions about very large numbers, however excellent their philosophy. Even the best philosophers cannot get an intuitive grasp of, say, tens of billions of people. That is no criticism; these numbers are beyond intuition. But these philosophers ought not to think their intuition can tell them the truth about such large numbers of people.

For very large numbers, we have to rely on theory, not intuition. When people first built bridges, they managed without much theory. They could judge a log by eye, relying on their intuition. Their intuitions were reliable, being built on long experience with handling wood and stone. But when people started spanning broad rivers with steel and concrete, their intuition failed them, and they had to resort to engineering theory and careful calculations. The cables that support suspension bridges are unintuitively slender.

Our moral intuitions are formed and polished in our homely interactions with the few people we have to deal with in ordinary life. But nowadays the scale of our societies and the power of our technologies raise moral problems that involve huge numbers of people. […] No doubt our homely intuitive morality gives us a starting point, but we have to project our morality beyond the homely to the vast new arenas. To do this properly, we have to engage all the care and accuracy we can, and develop a moral theory.

Indeed, we are more dependent on theory than engineers are, because moral conclusions cannot be tested in the way engineers’ conclusions are tested. If an engineer gets her calculations wrong, her mistake will be revealed when the bridge falls down. But a mistake in moral theory is never revealed like that. If we do something wrong, we do not later see the error made manifest; we can only know it is an error by means of theory too. Moreover, our mistakes can be far more damaging and kill far more people than the collapse of a bridge. Mistakes in allocating healthcare resources may do great harm to millions. So we have to be exceptionally careful in developing our moral theory.

John Broome, Weighing Lives, Oxford, 2004, pp. 56-57

John Broome

The Pareto principle is, I think, untrue. It is an ill-begotten hybrid. It tries to link individual preferences with general good. But one should either link individual preferences with what should come about, as the democratic principle does, or individual good with general good, as the principle of general good does. The hybrid is not viable.

John Broome, Weighing Goods, Oxford, 1991, p. 159