Paul Christiano on cause prioritization

Paul Christiano is a graduate student in computer science at UC Berkeley. His academic research interests include algorithms and quantum computing. Outside academia, he has written about various topics of interest to effective altruists, with a focus on the far future.  Christiano holds a BA in mathematics from MIT and has represented the United States at the International Mathematical Olympiad. He is a Research Associate at the Machine Intelligence Research Institute and a Research Advisor at 80,000 Hours.


Pablo: To get us started, could you explain what you mean by ‘cause prioritization’, and briefly discuss the various types of cause prioritization research that are currently being conducted?

Paul: I mean research that helps determine which broad areas of investment are likely to have the largest impact on the things we ultimately care about. Of course a huge amount of research bears on this question, but I’m most interested in research that addresses its distinctive characteristics. In particular, I’m most interested in:

  1. Research that draws on what is known about different areas in order to actually make these comparisons. I think GiveWell Labs and the Copenhagen Consensus Center are the two most salient examples of this work, though they have quite different approaches. I understand that the Centre for Effective Altruism (CEA) is beginning to invest in this area as well. I think this is an area where people will be able to get a lot of traction (and have already done pretty well for the amount of investment I’m aware of) and I think it will probably go a very long way towards facilitating issue-agnostic giving.
  2. Research that aims to understand and compare the long-term impacts of the short-term changes which our investments can directly bring about. For example, research that clarifies and compares the long-term impact of poverty alleviation, technological progress, or environmental preservation, and how important that long-term impact is. This is an area where answers are much harder to come by, but even slight improvements in our understanding would have significant importance for a very broad range of decisions. It appears that high-quality work in this area is pretty rare, though it’s a bit hard to tell if this is due to very little investment or if this is merely evidence that making progress on these problems is too difficult. I tend to lean towards the former, because (a) we see very little public discussion of process and failed attempts for high-quality research on these issues, which you should expect to see even if they are quite challenging, and (b) this is not an area that I expect to receive a lot of investment except by cause-agnostic altruists who are looking quite far ahead. I think the most convincing example to date is Nick Bostrom’s astronomical waste argument and Nick Beckstead’s more extensive discussion of the importance of the far future, which seem to take a small but reasonably robust step towards improving our understanding of what to do.

There are certainly other relevant research areas, but they tend to be less interesting as cause prioritization per se. For example, there is a lot of work that tries to better understand the impact of particular interventions. I think this is comparably important to (1) or (2), but it currently receives quite a lot more attention and it’s not clear that a cause-agnostic philanthropist would want to change how this is being done. More tangentially, efforts to improve forecasts more broadly have significant relevance for philanthropic investment, though they are even more important in other domains, so prima facie it would be a bit surprising if these efforts ought to be a priority by virtue of their impact on improving philanthropic decision-making.

Pablo: In public talks and private conversation, you have argued that instead of supporting any of the object-level interventions that look most promising on the evidence currently available, we should on the current margin invest in research on understanding which of those opportunities are most effective.  Could you give us an outline of this argument?

Paul: It seems very likely to me that more research will lead to a much clearer picture of the relative merits of different opportunities, so I suspect in the future we will be much better equipped to pick winners. I would not be at all surprised if supporting my best guess charity ten years from now was several times more impactful than supporting my best guess charity now.

If you are this optimistic about learning more, then it is generally better to donate to your best guess charity in a decade, rather than donating to your current best guess. But if you think there is room for more funding to help accelerate that learning process, then that might be an even better idea. I think this is the case at the moment: the learning process is mostly driven by people doing prioritization research and exploratory philanthropy, and total investment in that area is not very large.

Of course, giving to object-level interventions may be an important part of learning more, and so I would be hesitant to say that we should avoid investment in object-level problems. However, I think that investment should really be focused on learning and exploring (in a way that can help other people make these decisions as well, not just the individual donor) rather than on direct positive impact. So for example I’m not very interested in scaling up successful global health interventions.

The most salient motivation to do good now, rather than learning or waiting, is a discount rate that is much steeper than market rates of return.

For example, you might give now if you thought your philanthropic investments would earn very high effective rates of return. I think this is unlikely for the kinds of object-level investments most philanthropists consider–I think most of these investments compound roughly in line with the global growth rate (which is smaller than market rates of return).

You might also have a high discount rate if you thought that the future was likely to have much worse philanthropic opportunities; but as far as I can tell a philanthropist today has just as many problems to solve as a philanthropist 20 years ago, and frankly I can see a lot of possible problems on the horizon for a philanthropist to invest in, so I don’t find this compelling.

Sometimes “movement-building” is offered as an example of an activity with very high rates of return. At the moment I am somewhat skeptical of these claims, and my suspicion is that it is more important for the “effective altruism” movement to have a fundamentally good product and for us to generally have our act together than for it to grow more rapidly, and I think one could also give a strong justification for prioritization research even if you were primarily interested in movement-building. But that is a much longer discussion.

Pablo: I think it would be interesting to examine more closely the object-level causes supported by EAs or proto-EAs in the past (over, say, the last decade), and use that examination to inform our estimates about the degree to which the value of future EA-supported causes will exceed that of causes that EAs support today.  Off the top of my head, the EAs I can think of who have donation records long enough to draw meaningful conclusions all have in the past supported causes that they would now regard as being significantly worse than those they currently favour.  So this would provide further evidence for one of the premises in your argument: that cause prioritization research can uncover interventions of high impact relative to our current best guesses.

The other premise in your argument, as I understand it, is that the value of the interventions we should expect cause prioritization research to uncover is high relative to the opportunity cost of current spending. Can you elaborate on the considerations that are, in your opinion, relevant for assessing this premise?

Paul: Sorry, this is going to be a bit of a long and technical response. I see three compelling reasons to prefer giving today to giving in the future. But altogether they don’t seem to be a big deal compared to how much more we would expect to know in the future. Again, I think that using giving as an opportunity to learn stands out as an exception here–because in that case we can say with much more confidence that we will need to learn more at some point, and so the investment today is not much of a lost cause.

  1. The actual good you do in the world compounds over time, so it is better to do good sooner than later.
  2. There are problems today that won’t exist in the future, so money in the future may be substantially less valuable than money today.
  3. In the future there will be a larger pool of “smart money” that finds the best charitable opportunities, so there will be fewer opportunities to do good.

Regarding (1), I think that the vast majority of charitable activities people engage in simply do not compound that quickly. To illustrate, you might consider the case of a cash transfer to a poor family. Initially such a cash transfer earns a very high rate of return, but over time the positive impact diffuses over a broader and broader group of people. As it diffuses to a broader group, the returns approach the general rate of growth in the world, which is substantially smaller than the interest rate. Most other forms of good experience a very similar pattern. So if this were the only reason to give sooner, then I think that you would actually be better served by saving and earning prevailing interest rates for an extra year, and then donating a year later–even if you didn’t expect to learn anything new.

A mistake I sometimes see people make is using the initial rates of return on an investment to judge its urgency. But those returns last for a brief period before spreading out into the broader world, so you should really think of the investment as giving you a fixed multiplier on your dollar before the impact spreads out and earns long-term returns that go like growth rates. It doesn’t matter whether that multiplier accrues instantaneously or over a period of a few years during which you enjoy excess returns. In either case the magnitude of the multiplier is not relevant to the urgency of giving; what matters is just whether the multiplier is going up or down.
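To make the compounding comparison concrete, here is a minimal sketch (my illustration, not the interview’s; the 5x multiplier, 7% interest rate, 3% growth rate, and 30-year horizon are all assumed numbers) of why a fixed multiplier does not by itself make giving urgent:

```python
# Illustrative numbers only -- assumptions, not figures from the interview.
INTEREST_RATE = 0.07  # market rate of return while the money sits in savings
GROWTH_RATE = 0.03    # rate at which the diffused impact compounds (global growth)
MULTIPLIER = 5.0      # fixed initial "multiplier on your dollar" from the intervention
HORIZON = 30          # years over which accumulated impact is measured

def impact(amount, years_compounding):
    """Impact of a gift: one-time fixed multiplier, then growth-rate compounding."""
    return amount * MULTIPLIER * (1 + GROWTH_RATE) ** years_compounding

give_now = impact(1.0, HORIZON)
give_in_a_year = impact(1 + INTEREST_RATE, HORIZON - 1)  # save a year, then give

print(f"give now:       {give_now:.3f}")
print(f"give in a year: {give_in_a_year:.3f}")
print(f"ratio:          {give_in_a_year / give_now:.3f}")
# The ratio works out to (1 + INTEREST_RATE) / (1 + GROWTH_RATE) no matter what
# MULTIPLIER is, so the size of the multiplier tells you nothing about urgency;
# only a multiplier that is itself rising or falling over time would.
```

Under these assumptions, waiting wins whenever the interest rate exceeds the growth rate, and the multiplier cancels out of the comparison entirely, which is the point made above.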

A category of good which is plausibly exceptional here is creating additional resources that will flexibly pursue good opportunities in the future. I’m aware that some folks around CEA assign very high rates of return, in excess of 30% / year, to investment in movement-building and outreach. I think this is an epistemic error, but that would probably be a longer discussion so it might be easier to restrict attention to object-level interventions vs. focusing on learning.

Regarding (2), I don’t really see the evidence for this position. From my perspective the problems the world faces today seem more important–in particular, they have more significant long-term consequences–than the problems the world faced 200 years ago. It looks to me like this trend is likely to continue, and there is a good chance that further technological development will continue to introduce problems with an unprecedented potential impact on the future. So with respect to object-level work I’d prefer to address the problems of today rather than the problems of 200 years ago, and I think I’d probably be even happier addressing the problems we will face in 50 years.

Regarding (3), I do see this as a fairly compelling reason to move sooner rather than later. I think the question is one of magnitudes: how long do you expect it will take before the pool of “smart money” is twice as large? 10 times larger? I think it’s very easy to overestimate the extent to which this group is growing. It is only at extremely exceptional points in history that this pool can grow 20% per year faster than economic growth. For example, if you are starting from a baseline of 0.1% of total philanthropic spending, that share can only go up 20% per year for 40 years or so before you get to 100% of spending. On the flip side, I think it is pretty easy to look around at what is going on locally and mistakenly conclude that the world must be changing pretty rapidly.
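As a rough check on the “40 years or so” figure (assuming, as in the example above, a 0.1% starting share growing 20% per year faster than total spending):

```python
import math

baseline_share = 0.001   # "smart money" starts at 0.1% of all philanthropic spending
relative_growth = 0.20   # grows 20% per year faster than spending overall

# Years of sustained 20%/year relative growth before the share hits 100%.
years = math.log(1 / baseline_share) / math.log(1 + relative_growth)
print(f"{years:.1f} years")  # about 37.9 -- i.e. "40 years or so"
```

So growth at that relative rate can only be sustained for a few decades before it hits the ceiling, which is why it marks an exceptional period rather than a steady state.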

I think most of what we are seeing isn’t a changing balance between smart money and ordinary folk, it’s continuously increasing sophistication on the part of donors collectively–this is a process that can go on for a very long time. In this case, it’s not so clear why you would want to give earlier, when you are one of many unsophisticated donors, rather than giving later, when you are one of many sophisticated donors, even if you were only learning as fast as everyone else in the world. What drove the discount rate earlier was the belief that other donors were getting sophisticated faster than we were, so that our relative importance was shrinking. And that no longer seems to apply when you look at it as a community increasing in sophistication.

So overall, I see the reasons for urgency in giving to be relatively weak, and I think the question of whether to give or save would be ambiguous (setting aside psychological motivations, the desire to learn more, and social effects of giving now) even if we weren’t learning more.

Pablo: Recently, a few EAs have questioned whether charities vary in cost-effectiveness to the degree that is usually claimed within the EA community. Brian Tomasik, for instance, argues that charities differ by at most 10 to 100 times (and much less so within a given field). Do you think that arguments of this sort could weaken the case for supporting research into cause prioritization, or change the type of cause prioritization research that EAs should support?

Paul: I think there are two conceptually distinct issues here which should be discussed separately, at least in this context.

One is the observation that a small group looking for good buys may not have as large an influence as it seems, if it will just end up slightly crowding out a much larger pool of thoughtful money. The bar for “thoughtful” isn’t that high: the money just needs to be sensitive to diminishing returns in the area that is funded. There are two big reasons why this is not so troubling:

  • Money that is smart enough to be sensitive to diminishing marginal returns–and moreover which is sufficiently cause-agnostic to move between different fields on the basis of efficiency considerations–is also likely to be smart enough to respond to significant changes in the evidence and arguments for a particular intervention. So I think doing research publicly and contributing to a stock of public knowledge about different causes is not subject to such severe problems.
  • The number of possible giving opportunities is really quite large compared to the number of existing charitable organizations. If you are looking for the best opportunities, in the long term you are probably going to be contributing to the existence of new organizations, working in these areas, which would not otherwise exist. This is especially true if we expect to use early investigations to help direct our focus in later investigations. This is very closely related to the last point.

This issue is most severe when we consider trying to pursue idiosyncratic interests, like an unusually large degree of concern for the far future. So this consideration does make me a bit less enthusiastic about that, which is something I’ve written about before. Nevertheless, I think in that space there are many possible opportunities which are simply not going to get any support from people who aren’t thinking about the far future, so there still seems to be a lot of good to do by improving our understanding.

A second issue is that broad social improvements will tend to have a positive effect on society’s ability to resolve many different problems. So if there is any exceptionally impactful thing for society to do, then that will also multiply the impact of many different interventions. I don’t think this consideration says too much about the desirability of prioritization: quite large differences are very consistent with this observation, these differences can be substantially compounded by uncertainty about whether the indirect effects of an intervention are good or bad, and there is substantial variation even in very broad measures of the impact of different interventions. This consideration does suggest that you should pay more attention to very high-impact interventions even if the long-term significance of that impact is at first ambiguous.

Pablo: Finally, what do you think is the most effective way to promote cause prioritization research?  If an effective altruist reading this interview is persuaded by your arguments, what should this person do?

Paul: One conclusion is that it would be premature to settle on an area that currently looks particularly attractive and simply scale up the best-looking program in that area. For example, I would be hesitant to support an intervention in global health (or indeed in most areas) unless I thought that supporting that intervention was a cost-effective way to improve our understanding of global health more broadly. That could be because executing the intervention would provide useful information and understanding that could be publicly shared, or because supporting it would help strengthen the involvement of EAs in the space and so help EAs in particular improve their understanding. One could say the same thing about more speculative causes: investments that don’t provide much feedback or help us understand the space better are probably not at the top of my priority list.

Relatedly, I think that global health receives a lot of attention because it is a particularly straightforward area to do good in; I think that’s quite important if you want your dollar to do as much good directly as possible, but that it is much less important (and important in a different way) if you are paying appropriate attention to the value of learning and information.

Another takeaway is that it may be worth actively supporting this research, either by supporting organizations that do it or by giving on the basis of early research. I think Good Ventures and GiveWell Labs are currently the most credible efforts in this space (largely by virtue of having done much more research here than any other comparably EA-aligned organization), and so providing support for them to scale up that research is probably the most straightforward way to directly support cause prioritization. There are some concerns about GiveWell Labs capturing only half of marginal funding, or about substituting for Good Ventures’ funding; that would again be a longer and much more complicated discussion. My view would be that those issues are worth thinking about but probably not deal-breakers.

I hear that CEA may be looking to invest more in this area going forward, and so supporting CEA is also a possible approach. To date they have not spent much time in this area, so it is difficult to predict what the output will look like; at the same time, without funding it is hard for them to build the kind of track record that would make the output easier to predict. To the extent that this kind of chicken-and-egg problem is a substantial impediment to trying new things faster, and you have confidence in CEA as an organization, providing funding to help plausible-looking experiments get going might be quite cost-effective.

A final takeaway is that the balance between money and human capital looks quite different for philanthropic research than for many object-level interventions. If an EA is interested in scaling up proven interventions, it’s very likely that their comparative advantage is elsewhere and they are better served by earning money and distributing it to charities doing the work they are most excited about. But if you think that increasing philanthropic capacity is very important, it becomes more plausible that the best use of time for a motivated EA is to work directly on related problems. That might mean working for an “EA” organization, working within the philanthropy sector more broadly, or pursuing a career somewhere else entirely. Once we are talking about applying human capital rather than money, ability and enthusiasm for a particular project become a very large consideration (though the kinds of considerations we’ve discussed in this interview can be another important input).

Crossposted to the Effective Altruism Blog