My beliefs, updated

Back in 2015, I published a post listing my beliefs on various propositions. This post updates that list to reflect what I currently believe. The table below also has a new column indicating the resilience of each belief: how unlikely my credence would be to change if I thought more about the topic, so "high" resilience marks a credence I expect to remain stable on further reflection.

Note that, although the credences stated in the 2015 post are outdated, the substantive comments included there still largely reflect my current thinking. Accordingly, you may still want to check out that post if you are curious about why I hold these beliefs to the degree that I do.

| Proposition | Credence | Resilience |
| --- | --- | --- |
| “Aesthetic value: objective or subjective?” Answer: subjective | 100% | high |
| Artificial general intelligence (AGI) is possible in principle | 95% | highish |
| Compatibilism on free will | 10% | highish |
| “Abstract objects: Platonism or nominalism?” Answer: nominalism | 95% | highish |
| Moral anti-realism | 30% | medium |
| Humans will eventually build human-level AGI conditional on no other major intervening disruptions to civilization as we know it | 85% | highish |
| We live in at least a Level I multiverse | 30% | low |
| Type-A physicalism regarding consciousness | 10% | medium |
| Eternalism on philosophy of time | 95% | medium |
| Earth will eventually be controlled by a singleton of some sort | 60% | medium |
| Human-inspired colonization of space will cause net suffering if it happens | 10% | highish |
| Many worlds interpretation of quantum mechanics (or close kin) | 55% | lowish |
| Soft AGI takeoff | 60% | lowish |
| By at least 10 years before human-level AGI is built, debate about AGI risk will be as mainstream as global warming is in 2015 | 55% | medium |
| Human-controlled AGI would result in less expected suffering than uncontrolled, as judged by the beliefs I would hold if I thought about the problem for another 10 years | 5% | highish |
| A government will build the first human-level AGI, assuming humans build one at all | 35% | lowish |
| Climate change will cause net suffering | 55% | lowish |
| By 2100, if biological humans still exist, most of them will regard factory farming as a great evil of the past | 70% | medium |
| The effective-altruism movement, all things considered, reduces rather than increases suffering in the far future [NB: I think EA very likely reduces net suffering in expectation; see comments here.] | 35% | medium |
| Electing more liberal politicians reduces net suffering in the far future | 55% | lowish |
| Faster technological innovation increases net suffering in the far future | 50% | medium |
| “Science: scientific realism or scientific anti-realism?” Answer: realism | 95% | highish |
| At bottom, physics is digital | 15% | low |
| Cognitive closure of some philosophical problems | 80% | medium |
| Rare Earth explanation of Fermi Paradox | 10% | lowish |
| Crop cultivation prevents net suffering | 50% | medium |
| Conditional on a government building the first human-level AGI, it will be the USA (rather than China, etc.) | 45% | medium |
| Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments) | 15% | medium |
| Faster economic growth will cause net suffering in the far future | 50% | medium |
| Whole brain emulation will come before de novo AGI, assuming both are possible to build | 40% | medium |
| Modal realism | 20% | lowish |
| The multiverse is finite | 70% | low |
| A world government will develop before human-level AGI | 10% | highish |
| Wild-animal suffering will be a mainstream moral issue by 2100, conditional on biological humans still existing | 15% | medium |
| Humans will go extinct within millions of years for some reason other than AGI | 40% | medium |
| A design very close to CEV will be implemented in humanity’s AGI, conditional on AGI being built (excluding other value-learning approaches and other machine-ethics proposals) | 5% | medium |
| Negative utilitarianism focused on extreme suffering | 1% | highish |
| Ethical pluralism for other values (happiness, love, friendship, knowledge, accomplishment, diversity, paperclips, and other things that agents care about) | 0% | high |
| Hedonic experience is most valuable | 95% | high |
| Preference frustration is most valuable | 3% | high |
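
For readers who like to track how such credences drift across updates, here is a minimal sketch of one way to keep a table like this as structured data; the Python class and field names are illustrative assumptions of mine, not anything from the 2015 post.

```python
# Illustrative sketch: a credence table as structured data, so that
# successive updates can be diffed programmatically. The class and
# field names are hypothetical choices, not taken from the post.
from dataclasses import dataclass

@dataclass
class Belief:
    proposition: str
    credence: float   # probability in [0, 1]
    resilience: str   # "low", "lowish", "medium", "highish", or "high"

beliefs = [
    Belief("Artificial general intelligence (AGI) is possible in principle", 0.95, "highish"),
    Belief("Many worlds interpretation of quantum mechanics (or close kin)", 0.55, "lowish"),
    Belief("The multiverse is finite", 0.70, "low"),
]

# List the least resilient beliefs: the credences most likely to move
# if I thought more about the topic.
for b in beliefs:
    if b.resilience in ("low", "lowish"):
        print(f"{b.credence:.0%}  {b.proposition}")
```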

Update (24 March 2022): Jacy Reese Anthis has posted a similar list.