My beliefs

My friend Brian Tomasik recently made available a table summarizing his beliefs on various issues. I thought recording my own credences on these propositions would be a fun and potentially instructive exercise, and decided to make my answers public. To prevent myself from being influenced by Brian’s responses, I copied the original table, pasted it into an Excel spreadsheet, deleted the column with his answers, and reordered the rows by sorting them alphabetically, so that their order no longer matched his. Only after recording all my responses did I allow myself to look at Brian’s, and I managed to resist the temptation to make any changes ex post facto. Overall, I was pleasantly surprised at the degree to which we agree, and not very surprised at the areas where we disagree. I also suspect that a few of our disagreements (e.g. on compatibilism about free will) are merely verbal. Below, I comment on the propositions on which we disagree the most.
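For anyone who would rather script this step than do it by hand, here is a minimal sketch of the same blinding procedure in Python. The file name beliefs.csv and the column labels are hypothetical placeholders; adjust them to match your own export of the table.

```python
# Minimal sketch of the blinding procedure described above.
# "beliefs.csv" and the column names are hypothetical placeholders.
import pandas as pd

table = pd.read_csv("beliefs.csv")            # columns: Belief, Brian
blinded = table.drop(columns=["Brian"])       # delete the column with his answers
blinded = blinded.sort_values("Belief")       # reorder the rows alphabetically
blinded["Mine"] = ""                          # empty column for my own credences
blinded.to_csv("beliefs_blinded.csv", index=False)
```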

If you think you might want to participate in this exercise, you can make a duplicate of this spreadsheet and record your answers there before reading any further.

Update: See also Michael Dickens’ responses in the comments section.

Update 2: There is now a new version of the table below, which reflects my beliefs as of November 2020.

| Belief | Brian | Pablo |
|---|---|---|
| “Aesthetic value: objective or subjective?” Answer: subjective | 99.50% | 99.90% |
| Artificial general intelligence (AGI) is possible in principle | 99% | 95% |
| Compatibilism on free will | 98% | 5% |
| “Abstract objects: Platonism or nominalism?” Answer: nominalism | 97% | 90% |
| Moral anti-realism | 96% | 20% |
| Humans will eventually build human-level AGI conditional on no other major intervening disruptions to civilization as we know it | 90% | 90% |
| We live in at least a Level I multiverse | 85% | 60% |
| Type-A physicalism regarding consciousness | 75% | 25% |
| Eternalism on philosophy of time | 75% | 98% |
| Earth will eventually be controlled by a singleton of some sort | 72% | 70% |
| Human-inspired colonization of space will cause net suffering if it happens | 71% | 1% |
| Many worlds interpretation of quantum mechanics (or close kin) | 70% | 70% |
| Soft AGI takeoff | 70% | 55% |
| By at least 10 years before human-level AGI is built, debate about AGI risk will be as mainstream as global warming is in 2015 | 67% | 70% |
| Human-controlled AGI would result in less expected suffering than uncontrolled, as judged by the beliefs I would hold if I thought about the problem for another 10 years | 65% | 5% |
| A government will build the first human-level AGI, assuming humans build one at all | 62% | 60% |
| Climate change will cause net suffering | 60% | 50% |
| By 2100, if biological humans still exist, most of them will regard factory farming as a great evil of the past | 60% | 10% |
| The effective-altruism movement, all things considered, reduces rather than increases suffering in the far future | 60% | 40% |
| Electing more liberal politicians reduces net suffering in the far future | 57% | 60% |
| Faster technological innovation increases net suffering in the far future | 55% | 55% |
| “Science: scientific realism or scientific anti-realism?” Answer: realism | 60% | 90% |
| At bottom, physics is digital | 50% | |
| Cognitive closure of some philosophical problems | 50% | 80% |
| Rare Earth explanation of Fermi Paradox | 50% | 25% |
| Crop cultivation prevents net suffering | 50% | 50% |
| Conditional on a government building the first human-level AGI, it will be the USA (rather than China, etc.) | 50% | 65% |
| Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments) | 50% | 10% |
| Faster economic growth will cause net suffering in the far future | 43% | 55% |
| Whole brain emulation will come before de novo AGI, assuming both are possible to build | 42% | 65% |
| Modal realism | 40% | 1% |
| The multiverse is finite | 40% | 70% |
| A world government will develop before human-level AGI | 35% | 15% |
| Wild-animal suffering will be a mainstream moral issue by 2100, conditional on biological humans still existing | 15% | 8% |
| Humans will go extinct within millions of years for some reason other than AGI | 10% | 10% |
| A design very close to CEV will be implemented in humanity’s AGI, conditional on AGI being built (excluding other value-learning approaches and other machine-ethics proposals) | 5% | 10% |
Values

| Value system | Brian (moral weight) | Pablo (moral weight) |
|---|---|---|
| Negative utilitarianism focused on extreme suffering | 90% | 1% |
| Ethical pluralism for other values (happiness, love, friendship, knowledge, accomplishment, diversity, paperclips, and other things that agents care about) | 10% | 0.01% |

| The kind of suffering that matters most is… | Brian (moral weight) | Pablo (moral weight) |
|---|---|---|
| hedonic experience | 70% | 99% |
| preference frustration | 30% | 1% |

Comments

Compatibilism on free will

If by ‘free will’ we mean what most people mean by that expression, then I’m almost certain that we don’t have free will, and that free will is incompatible with determinism. Brian appears to believe otherwise because he thinks that the meaning of ‘free will’ should ultimately be governed by instrumental considerations.  I think this approach fosters muddled thinking and facilitates intellectual dishonesty.

Moral anti-realism

This is one in a series of very strong disagreements about questions in metaethics. I think some things in the world (pleasant states of consciousness) objectively have value, regardless of whether anyone desires them or has any other positive attitude towards them. This value, however, is realized in conscious experience and as such exists in the natural world, rather than being sui generis, as most moral realists believe. So the main arguments against moral realism, such as Mackie’s argument from queerness, do not apply to the view that I myself defend.

Type-A physicalism regarding consciousness

I’m somewhat persuaded by David Chalmers’ arguments for dualism, though I discount the apparent force of this evidence given the good track record of physicalist explanations in most other domains. About half of my probability mass for physicalism is concentrated on type-B physicalism, hence my relatively low credence in the type of physicalism that Brian favors.

Eternalism on philosophy of time

I assume Brian just meant belief in the static, or tenseless, theory of time (what McTaggart called the ‘B theory’), rather than some subtler view about temporal becoming. If so, I think the arguments against the dynamic or tensed theory are roughly as strong as the arguments against free will: both of these views are primarily supported by introspection (“it seems time flows”, “it seems I’m free”) and opposed by naturalism, and the latter is much better supported by the evidence.

Human-inspired colonization of space will cause net suffering if it happens

I’m puzzled by Brian’s answer here. I think it’s quite unlikely that the future will contain a preponderance of suffering over happiness, since it will be optimized by agents who strongly prefer the latter and will have driven most other sentient lifeforms extinct. But maybe Brian meant that colonization will cause a surplus of suffering relative to the amount present before colonization.  I think this is virtually certain; I’d give it a 99% chance.

Human-controlled AGI would result in less expected suffering than uncontrolled, as judged by the beliefs I would hold if I thought about the problem for another 10 years

Uncontrolled AGI will likely result in no suffering at all, whereas human-controlled AGI will result in some suffering and much more happiness. Brian thinks the goal systems that an uncontrolled AGI might create will resemble paradigmatic sentient beings closely enough to themselves count as sentient, but I do not share his radically subjectivist stance concerning the nature of suffering (on which something counts as suffering roughly if, and to the degree that, we decide to care about it).

By 2100, if biological humans still exist, most of them will regard factory farming as a great evil of the past

I slightly misread the sentence, which I took to state that most future people will regard factory farming as the great evil of the past. Given the high number of alternative candidates (from the perspective of a non-enlightened, common-sense human morality), I thought it unlikely that our descendants would single out factory farming as the single greatest evil in human history. But it’s much more likely (~60%) that they will regard it as a great evil, especially if mass production of in vitro meat displaces factory farms (and hence removes the main cognitive barrier to widespread recognition that all sentient beings matter morally, and matter equally).

“Science: scientific realism or scientific anti-realism?” Answer: realism

I think the best explanation of the extraordinary success of science is that it describes the world as it really is; on anti-realism, this success is a mystery. So I think realism is much more likely.

The effective-altruism movement, all things considered, reduces rather than increases suffering in the far future

I think the EA movement somewhat increases expected far-future suffering: by raising the probability that such a future will exist at all, it adds more expected suffering than it subtracts by reducing the suffering of far-future sentients conditional on their existence.

(Note that I believe the EA movement decreases the overall probability of net suffering in the far future. That is, EA will cause, in expectation, more suffering, but even more happiness.)
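To make the structure of these two claims concrete, here is a toy calculation; all the numbers below are illustrative assumptions of mine, not estimates from this post.

```python
# Toy illustration only: every number here is made up.
# P = probability that a far future exists; S, H = the suffering and
# happiness that future would contain, conditional on existing.
P0, S0, H0 = 0.40, 10, 100   # without the EA movement
P1, S1, H1 = 0.50, 9, 100    # with EA: future likelier, conditional suffering lower

print("expected suffering:", P0 * S0, "->", P1 * S1)                # 4.0 -> 4.5
print("expected net value:", P0 * (H0 - S0), "->", P1 * (H1 - S1))  # 36.0 -> 45.5
```

Expected suffering rises because the existence effect outweighs the conditional reduction, yet expected net value rises even more, which is exactly the combination of claims made above.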

Cognitive closure of some philosophical problems

Reflection about our evolutionary origins makes it quite likely that we lack the cognitive machinery necessary for solving at least some such problems. We are probably among the stupidest intelligent species possible, since we were the first to fill the niche and we haven’t evolved much since. (But see Greg Egan for an opposing perspective.)

Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments)

I think extinction before colonization is quite likely: I’d put the risk of extinction at ~20% this century alone, and colonizing the entire galaxy would require surviving for many more.
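As a back-of-the-envelope illustration (the assumption of a constant per-century risk is mine, and clearly unrealistic), the chance of surviving $n$ centuries at a 20% per-century extinction risk is

$$\Pr(\text{survival}) = 0.8^{\,n}, \qquad 0.8^{5} \approx 0.33, \qquad 0.8^{10} \approx 0.11,$$

so even a modest per-century risk compounds quickly over the timescales galactic colonization would require.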

Rare Earth explanation of Fermi Paradox

Very low confidence in my answer here; Brian is probably closer to the truth than I am.

Modal realism

Upon reflection, I think I was overconfident on this one; 5–10% seems like a more reasonable estimate. The main reason to believe modal realism is that it would provide an answer to the ultimate question of existence. However, the existence of nothing would have been an even more natural state of affairs than the existence of everything; since that possibility manifestly does not obtain, whatever support considerations of naturalness lend to modal realism is undermined, which reduces its probability (see Grover 1998).

Negative utilitarianism focused on extreme suffering

I see no reason to deviate from symmetry: suffering of a given intensity is as bad as happiness of that intensity is good; and suffering twice as intense is (only) twice as bad. Brian thinks there isn’t a natural intensity matching between happiness and suffering, but I disagree (and trust my judgment here).
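One way to make the disagreement precise (this formalization is mine, not Brian’s or the original table’s): on the symmetric view I endorse, the value of a world is linear in both happiness and suffering, whereas a negative utilitarianism focused on extreme suffering in effect applies a convex weight to suffering, so that the most intense episodes dominate:

$$V_{\text{sym}} = \sum_i h_i - \sum_j s_j, \qquad V_{\text{NU}} = -\sum_j w(s_j), \quad w \text{ convex, e.g. } w(s) = s^{2},$$

where $h_i$ and $s_j$ are the intensities of individual episodes of happiness and suffering. Under $V_{\text{sym}}$, doubling an episode’s intensity exactly doubles its disvalue; under $w(s) = s^{2}$ it quadruples it, which is what allows extreme suffering to swamp everything else.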

Ethical pluralism for other values (happiness, love, friendship, knowledge, accomplishment, diversity, paperclips, and other things that agents care about)

I just cannot see how anything but experience could have value. As I view things, the choice is between some form of experientialism and nihilism, tertium non datur.

The kind of suffering that matters most is… preference frustration

Again, I have difficulty understanding how merely satisfying a preference, in the absence of any conscious experience of satisfaction, could matter at all morally. Preference theories are also beset by a number of problems; for example, it’s unclear how they can deal with preferences whose objects are spatially or temporally very remote from the subject, without either restricting the class of relevant preferences arbitrarily or implying gross absurdities.

With thanks to Brian Tomasik, Peter McIntyre, and others for valuable discussion.