In an ideal math paper, the abstract and intro make clear the assumptions and conclusions. So a reader who trusts the authors to avoid error can ignore the rest of the paper for the purpose of updating their beliefs. In non-ideal math papers, in contrast, readers are forced to dig deep, as key assumptions are scattered throughout the paper.
Robin Hanson, Review of ‘Semi-informative priors over AI timelines’, December 9, 2020
Most people who play commodity markets… lose their stake and quit within a year. Such markets are dominated by the minority who have managed to play and not go broke. If you believe otherwise, and know of some market where the prices are obviously wrong, I challenge you to ‘put your money where your mouth is’ and take some of that free money you believe is there for the taking. It’s easy to bad-mouth the stupid public before you have tried to beat them.
Robin Hanson, ‘Could Gambling Save Science? Encouraging an Honest Consensus’, Social Epistemology, vol. 9, no. 1 (1995), p. 22
Consider Julian Simon, a population and natural resource optimist, who found that he could not compete for either popular or academic attention with best-selling doomsayers like Paul Ehrlich. In 1980 Simon challenged Ehrlich to bet on whether the price of five basic metals, corrected for inflation, would rise or fall over the next decade. Ehrlich accepted, and Simon won, as would almost anyone who bet in the same way in the last two centuries. This win brought Simon publicity, but mostly in the form of high-profile editorials saying ‘Yeah he won this one, but I challenge him to bet on a more meaningful indicator such as …’. In fact, however, not only won’t Ehrlich bet again, although his predictions remain unchanged, but also none of these editorial writers will actually put their money where their mouths are! In addition, the papers that published these editorials won’t publish letters from Simon accepting their challenges.
Robin Hanson, ‘Could Gambling Save Science? Encouraging an Honest Consensus’, Social Epistemology, vol. 9, no. 1 (1995), p. 8
One promising approach to institutional reform is to try to acknowledge people’s need to show off, but to divert their efforts away from wasteful activities and toward those with bigger benefits and positive externalities. For example, as long as students must show off by learning something at school, we’d rather they learned something useful (like how to handle personal finances) instead of something less useful (like Latin). As long as scholars have a need to impress people with their expertise on some topic, engineering is a more practical domain than the history of poetry. And scholars who show off via intellectual innovation seem more useful than scholars who show off via their command of some static intellectual tradition.
Kevin Simler & Robin Hanson, The Elephant in the Brain: Hidden Motives in Everyday Life, Oxford, 2018, p. 312
The line between cynicism and misanthropy—between thinking ill of human motives and thinking ill of humans—is often blurry. So we want readers to understand that although we may often be skeptical of human motives, we love human beings. (Indeed, many of our best friends are human!)
Kevin Simler & Robin Hanson, The Elephant in the Brain: Hidden Motives in Everyday Life, Oxford, 2018, p. 13
Shortly after his 23rd birthday, Kevin was diagnosed with Crohn’s disease. For a while he was extremely reluctant to talk about it (except among family and close friends), a reluctance he rationalized by telling himself that he’s simply a “private person” who doesn’t like sharing private medical details with the world. Later he started following a very strict diet to treat his disease—a diet that eliminated processed foods and refined carbohydrates. Eating so healthy quickly became a point of pride, and suddenly Kevin found himself perfectly happy to share his diagnosis, since it also gave him an opportunity to brag about his diet. Being a “private person” about medical details went right out of the window—and now, look, here he is sharing his diagnosis (and diet!) with perfect strangers in this book.
Kevin Simler & Robin Hanson, The Elephant in the Brain: Hidden Motives in Everyday Life, Oxford, 2018, p. 104
An em might be fooled not only by deceptive inputs about its environment, but also by misleading information about its copy history. If many copies were made of an em and then only a few selected according to some criteria, then knowing about such selection criteria is valuable information to those selected ems. For example, imagine that someone created 10 000 copies of an em, exposed each copy to different arguments in favor of committing some act of sabotage, and then allowed only the most persuaded copy to continue. This strategy might in effect persuade this em to commit the sabotage. However, if the em knew this fact about its copy history, that could convince this remaining copy to greatly reduce its willingness to commit the sabotage.
Robin Hanson, The Age of Em: Work, Love, and Life when Robots Rule the Earth, Oxford, 2016, p. 112
Today, we take far more effort to study the past than the future, even though we can’t change the past.
Robin Hanson, The Age of Em: Work, Love, and Life when Robots Rule the Earth, Oxford, 2016, p. 31
Clearly, Eliezer should seriously consider devoting himself more to writing fiction. But it is not clear to me how this helps us overcome biases any more than any fictional moral dilemma. Since people are inconsistent but reluctant to admit that fact, their moral beliefs can be influenced by which moral dilemmas they consider in what order, especially when written by a good writer. I expect Eliezer chose his dilemmas in order to move readers toward his preferred moral beliefs, but why should I expect those are better moral beliefs than those of all the other authors of fictional moral dilemmas? If I’m going to read a literature that might influence my moral beliefs, I’d rather read professional philosophers and other academics making more explicit arguments. In general, I better trust explicit academic argument over implicit fictional “argument.”
Robin Hanson, comment to Eliezer Yudkowsky, ‘Epilogue: Atonement’, LessWrong, February 6, 2009
In each case where X is commonly said to be about Y, but really X is more about Z, many are well aware of this but say we are better off pretending X is about Y. You may be called a cynic to say so, but if honesty is important to you, join me in calling a spade a spade.
Robin Hanson, ‘Politics isn’t about Policy’, Overcoming Bias, September 21, 2008
[A]s a blog author, while I realize that blog posts can be part of a balanced intellectual diet, I worry that I tempt readers to fill their intellectual diet with too much of the fashionably new, relative to the old and intellectually nutritious. Until you reach the state of the art, and are ready to be at the very forefront of advancing human knowledge, most of what you should read to get to that forefront isn’t today’s news, or even today’s blogger musings. Read classic books and articles, textbooks, review articles. Then maybe read focused publications (including perhaps some blog posts) on your chosen focus topic(s).
Robin Hanson, ‘Read a Classic’, Overcoming Bias, June 28, 2010
If you want outsiders to believe you, then you don’t get to choose their rationality standard. The question is what should rational outsiders believe, given the evidence available to them, and their limited attention. Ask yourself carefully: if most contrarians are wrong, why should they believe your cause is different?
Robin Hanson, ‘Contrarian Excuses’, Overcoming Bias, November 15, 2009
The future is not the realization of our hopes and dreams, a warning to mend our ways, an adventure to inspire us, nor a romance to touch our hearts. The future is just another place in spacetime.
Robin Hanson, ‘The Rapacious Hardscrapple Frontier’, in Damien Broderick (ed.), Year Million: Science at the Far Edge of Knowledge, New York, 2008, p. 168
Without some basis for believing that the process that produced your prior was substantially better at tracking truth than the process that produced other peoples’ priors, you appear to have no basis for believing that beliefs based on your prior are more accurate than beliefs based on other peoples’ priors.
Robin Hanson, ‘Uncommon Priors Require Origin Disputes’, Theory and Decision, vol. 61, no. 4 (December, 2006), p. 326
When large regions of one’s data are suspect and for that reason given less credence, even complex curves will tend to look simpler as they are interpolated across such suspect regions. In general, the more error one expects in one’s intuitions (one’s data, in the curve-fitting context), the more one prefers simpler moral principles (one’s curves) that are less context-dependent. This might, but need not, tip the balance of reflective equilibrium so much that we adopt very simple and general moral principles, such as utilitarianism. This might not be appealing, but if we really distrust some broad set of our moral intuitions, this may be the best that we can do.
Robin Hanson, ‘Why Health is not Special: Errors in Evolved Bioethics Intuitions’, Social Philosophy & Policy, vol. 19, no. 2 (Summer, 2002), p. 179
Apparently, beliefs are like clothes. In a harsh environment, we choose our clothes mainly to be functional, i.e., to keep us safe and comfortable. But when the weather is mild, we choose our clothes mainly for their appearance, i.e., to show our figure, our creativity, and our allegiances. Similarly, when the stakes are high we may mainly want accurate beliefs to help us make good decisions. But when a belief has few direct personal consequences, we in effect mainly care about the image it helps us to project.
Robin Hanson, ‘Enhancing Our Truth Orientation’, in Nick Bostrom & Julian Savulescu (eds.), Human Enhancement, Oxford, 2009, p. 358