What I believe

My friend Brian Tomasik recently made available a table summarizing his beliefs on various issues. I thought this would be a fun and potentially instructive exercise, so I decided to do the same. To prevent myself from being influenced by Brian’s responses, I copied the original table onto an Excel spreadsheet, deleted the column with his answers, and randomized the rows (by sorting them in alphabetical order). Only after recording all my responses did I allow myself to look at Brian’s, and I managed to resist the temptation to make any changes ex post facto. Overall, I was pleasantly surprised at the degree to which we agree, and not very surprised at the areas where we disagree. I also suspect that a few of our disagreements (e.g. on compatibilism about free will) are merely verbal. Below, I comment on the propositions on which we disagree the most.

If you think you might want to participate in this exercise, you can make a duplicate of this spreadsheet and record your answers there before reading any further.

| Belief | Brian | Pablo |
| --- | --- | --- |
| “Aesthetic value: objective or subjective?” Answer: subjective | 99.50% | 99.90% |
| Artificial general intelligence (AGI) is possible in principle | 99% | 95% |
| Compatibilism on free will | 98% | 5% |
| “Abstract objects: Platonism or nominalism?” Answer: nominalism | 97% | 90% |
| Moral anti-realism | 96% | 20% |
| Humans will eventually build human-level AGI conditional on no other major intervening disruptions to civilization as we know it | 90% | 90% |
| We live in at least a Level I multiverse | 85% | 60% |
| Type-A physicalism regarding consciousness | 75% | 25% |
| Eternalism on philosophy of time | 75% | 98% |
| Earth will eventually be controlled by a singleton of some sort | 72% | 70% |
| Human-inspired colonization of space will cause net suffering if it happens | 71% | 1% |
| Many worlds interpretation of quantum mechanics (or close kin) | 70% | 70% |
| Soft AGI takeoff | 70% | 55% |
| By at least 10 years before human-level AGI is built, debate about AGI risk will be as mainstream as global warming is in 2015 | 67% | 70% |
| Human-controlled AGI would result in less expected suffering than uncontrolled, as judged by the beliefs I would hold if I thought about the problem for another 10 years | 65% | 5% |
| A government will build the first human-level AGI, assuming humans build one at all | 62% | 60% |
| Climate change will cause net suffering | 60% | 50% |
| By 2100, if biological humans still exist, most of them will regard factory farming as a great evil of the past | 60% | 10% |
| The effective-altruism movement, all things considered, reduces rather than increases suffering in the far future | 60% | 40% |
| Electing more liberal politicians reduces net suffering in the far future | 57% | 60% |
| Faster technological innovation increases net suffering in the far future | 55% | 55% |
| “Science: scientific realism or scientific anti-realism?” Answer: realism | 60% | 90% |
| At bottom, physics is digital | 50% | |
| Cognitive closure of some philosophical problems | 50% | 80% |
| Rare Earth explanation of Fermi Paradox | 50% | 25% |
| Crop cultivation prevents net suffering | 50% | 50% |
| Conditional on a government building the first human-level AGI, it will be the USA (rather than China, etc.) | 50% | 65% |
| Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments) | 50% | 10% |
| Faster economic growth will cause net suffering in the far future | 43% | 55% |
| Whole brain emulation will come before de novo AGI, assuming both are possible to build | 42% | 65% |
| Modal realism | 40% | 1% |
| The multiverse is finite | 40% | 70% |
| A world government will develop before human-level AGI | 35% | 15% |
| Wild-animal suffering will be a mainstream moral issue by 2100, conditional on biological humans still existing | 15% | 8% |
| Humans will go extinct within millions of years for some reason other than AGI | 10% | 10% |
| A design very close to CEV will be implemented in humanity’s AGI, conditional on AGI being built (excluding other value-learning approaches and other machine-ethics proposals) | 5% | 10% |
Values
| Value system | Brian’s moral weight | Pablo’s moral weight |
| --- | --- | --- |
| Negative utilitarianism focused on extreme suffering | 90% | 1% |
| Ethical pluralism for other values (happiness, love, friendship, knowledge, accomplishment, diversity, paperclips, and other things that agents care about) | 10% | 0.01% |

| The kind of suffering that matters most is… | Brian’s moral weight | Pablo’s moral weight |
| --- | --- | --- |
| hedonic experience | 70% | 99% |
| preference frustration | 30% | 1% |

Comments

Compatibilism on free will

If by ‘free will’ we mean what most people mean by that expression, then I’m almost certain that we don’t have free will, and that free will is incompatible with determinism. Brian appears to believe otherwise because he thinks that the meaning of ‘free will’ should ultimately be governed by instrumental considerations.  I think this approach fosters muddled thinking and facilitates intellectual dishonesty.

Moral anti-realism

This is one in a series of very strong disagreements about questions in metaethics. I think some things in the world (namely, pleasant states of consciousness) objectively have value, regardless of whether anyone desires them or has any other positive attitude towards them. This value, however, is realized in conscious experience and as such exists in the natural world, rather than being sui generis, as most moral realists believe. So the main arguments against moral realism, such as Mackie’s argument from queerness, do not apply to the view that I myself defend.

Type-A physicalism regarding consciousness

I’m somewhat persuaded by David Chalmers’ arguments for dualism, though I discount the apparent force of this evidence by the good track-record of physicalist explanations in most other domains. About half of my probability mass for physicalism is concentrated on type-B physicalism, hence my relatively low credence on the type of physicalism that Brian favors.

Eternalism on philosophy of time

I assume Brian just meant belief in the static, or tenseless, theory of time (what McTaggart called the ‘B theory’), rather than some subtler view about temporal becoming. If so, I think the arguments against the dynamic or tensed theory are roughly as strong as the arguments against free will: both of these views are primarily supported by introspection (“it seems time flows”, “it seems I’m free”) and opposed by naturalism, and the latter is much better supported by the evidence.

Human-inspired colonization of space will cause net suffering if it happens

I’m puzzled by Brian’s answer here. I think it’s quite unlikely that the future will contain a preponderance of suffering over happiness, since it will be optimized by agents who strongly prefer the latter and will have driven most other sentient lifeforms extinct. But maybe Brian meant that colonization will cause a surplus of suffering relative to the amount present before colonization.  I think this is virtually certain; I’d give it a 99% chance.

Human-controlled AGI would result in less expected suffering than uncontrolled, as judged by the beliefs I would hold if I thought about the problem for another 10 years

Uncontrolled AGI will likely result in no suffering at all, whereas human-controlled AGI will result in some suffering and much more happiness. Brian thinks the goal systems that uncontrolled AGI might create will resemble paradigmatic sentient beings closely enough to themselves count as sentient, but I do not share his radically subjectivist stance concerning the nature of suffering (on which something counts as suffering roughly if, and to the degree that, we decide to care about it).

By 2100, if biological humans still exist, most of them will regard factory farming as a great evil of the past

I slightly misread the sentence, which I took to state that most future people will regard factory farming as *the* great evil of the past. Given the high number of alternative candidates (from a non-enlightened, common-sense human morality), I thought it unlikely that our descendants would single out factory farming as the single greatest evil in human history. But it’s much more likely (~60%) that they will regard it as *a* great evil, especially if mass production of in vitro meat displaces factory farms (and hence removes the main cognitive barrier to widespread recognition that all sentient beings matter morally, and matter equally).

“Science: scientific realism or scientific anti-realism?” Answer: realism

I think the best explanation of the extraordinary success of science is that it describes the world as it really is; on anti-realism, this success is a mystery. So I think realism is much more likely.

The effective-altruism movement, all things considered, reduces rather than increases suffering in the far future

I think the EA movement somewhat increases the probability of far-future suffering by increasing the probability that such a future will exist at all, to a greater degree than it reduces the suffering of far-future sentients conditional on their existence.

(Note that I believe the EA movement decreases the overall probability of net suffering in the far future. In other words, EA will likely cause more suffering, but even more happiness.)

Cognitive closure of some philosophical problems

Reflection about our evolutionary origins makes it quite likely that we lack the cognitive machinery necessary for solving at least some such problems. We are probably among the stupidest intelligent species possible, since we were the first to fill the niche and we haven’t evolved much since. (But see Greg Egan for an opposing perspective.)

Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments)

I think extinction before colonization is quite likely; ~20% this century, and we have a long way to go.

Rare Earth explanation of Fermi Paradox

Very low confidence here; Brian is probably closer to the truth than I am.

Modal realism

Upon reflection, I think I was overconfident on this one; 5-10% seems like a more reasonable estimate. The main reason to believe modal realism is that it would provide an answer to the ultimate question of existence. However, the existence of nothing would have been an even more natural state of affairs than the existence of everything. Since this possibility manifestly does not obtain, that reduces the probability of modal realism (see Grover 1998).

Negative utilitarianism focused on extreme suffering

I see no reason to deviate from symmetry: suffering of a given intensity is as bad as happiness of that intensity is good; and suffering twice as intense is (only) twice as bad. Brian thinks there isn’t a natural intensity matching between happiness and suffering, but I disagree (and trust my judgment here).

Ethical pluralism for other values (happiness, love, friendship, knowledge, accomplishment, diversity, paperclips, and other things that agents care about)

I just cannot see how anything but experience could have value. As I view things, the choice is between some form of experientialism and nihilism, tertium non datur.

The kind of suffering that matters most is… preference frustration

Again, I have difficulty understanding how merely satisfying a preference, in the absence of any conscious experience of satisfaction, could matter at all morally. Preference theories are also beset by a number of problems; for example, it’s unclear how they can deal with preferences whose objects are spatially or temporally very removed from the subject, without either restricting the class of relevant preferences arbitrarily or implying gross absurdities.

With thanks to Brian Tomasik, Peter McIntyre, and others for valuable discussion.

  • This is great, Pablo! Honestly, it was more densely insightful than almost any single blog post I’ve read in a while.

    I too was amazed by some of the close or even exact matches on numbers that we haven’t discussed before.

    > Human-inspired colonization of space will cause net suffering if it happens

    Sorry for the unclear wording. I changed it to: “Human-inspired colonization of space will cause more suffering than it prevents if it happens”. Since I usually only think about the suffering side of things, “net suffering” meant something different to my mind than to yours. ;)

    > I think extinction before colonization is quite likely; ~20% this century, and we have a long way to go.

    In another answer, you agreed the chance of non-AGI extinction was ~10%. Does this mean you think not all AGIs would colonize the galaxy? Or by “Earth-originating intelligence” did you mean “human-inspired intelligence” specifically?

    > Brian thinks the goal systems that uncontrolled AGI might create will resemble paradigmatic sentient beings enough to count themselves as sentient, but I do not share his radically subjectivist views about the nature of suffering

    Uncontrolled AGIs might also run simulations of aliens to learn about life in the cosmos. But maybe the numerosity of simulated aliens would be pretty low and wouldn’t compare with suffering in human-inspired sims?

    > About half of my probability mass for physicalism is concentrated on type-B physicalism, hence my relatively low credence on the type of physicalism that Brian favors.

    Do you have thoughts on the argument that type-B physicalism is actually just property dualism (http://reducing-suffering.org/hard-problem-consciousness/#Type-B_physicalism_is_disguised_property_dualism)?

    • Thanks, Brian, for the comment and for the nice words.

      > I too was amazed by some of the close or even exact matches on numbers that we haven’t discussed before.

      Yeah. They say people shouldn’t trust utilitarians, so some readers may suspect I peeked at your answers beforehand or adjusted my ratings afterwards, despite my having denied that I did either. But I have an irrational, non-utilitarian aversion to dishonesty, so at least those in possession of independent evidence that I do have this trait will trust me here. ;-)

      > Do you have thoughts on the argument that type-B physicalism is actually just property dualism[?]

      I wouldn’t say that type-B physicalism is just property dualism, but I agree with your underlying point: the rationale for rejecting dualism is lost on type-B physicalism, so insofar as there are reasons for physicalism one should embrace type-A physicalism instead. Thank you for making me see this point: I read the essay by Chalmers you quote from, but it seems I failed to grasp this insight. I won’t modify the original post because I don’t want to confuse readers, but my current credence in type-A physicalism has moved up and is now around 40% (a residual ~10% credence in type-B physicalism seems justified by the fact that many smart people believe in it and that I don’t trust my own judgment much here).

        • > but my current credence in type-A physicalism has moved up and is now around 40%

        Cool beans. :)

        One other thing I noticed is that my answers were sorted in descending order of probability, so you might have implicitly guessed some of my answers based on the sort order. If we ask other people to give answers, we should randomize the order.

        • I did randomize the order. :-) I have updated the post to make this clear.
