When Lyndon Johnson came to believe in something, […] he came to believe in it totally, with absolute conviction, regardless of previous beliefs, or of the facts in the matter, came to believe in it so absolutely that, George Reedy says, “I believe that he acted out of pure motives regardless of their origins. He had a remarkable capacity to convince himself that he held the principles he should hold at any given time, and there was something charming about the air of injured innocence with which he would treat anyone who brought forth evidence that he had held other views in the past. It was not an act…. He had a fantastic capacity to persuade himself that the ‘truth’ which was convenient for the present was the truth and anything that conflicted with it was the prevarication of enemies. He literally willed what was in his mind to become reality.”
Robert Caro, Master of the Senate: The Years of Lyndon Johnson, New York, 2002, ch. 37
[W]henever we heard an unflattering portrait of our own side our first question to ourselves was not “Is this true?” but “What are they trying to hide about themselves by accusing us of this?” Once this mental defense system had been perfected, few criticisms could hit home.
Markus Wolf, Man Without a Face: The Autobiography of Communism’s Greatest Spymaster, New York, 1999, p. 40
The modular view is really, really different from the view of the mind that many really, really smart people seem to have of it. Many people, in particular philosophers, think of the mind as unitary. For this reason, they worry a lot about contradictions within the mind. And, really, they can get themselves into a complete tizzy about this. In Self and Deception: A Cross-Cultural Philosophical Enquiry, a whole bunch of philosophers worry a lot about this problem, so much so that you can almost sense them collectively wringing their hands. In one chapter dramatically called “On the Very Possibility of Self-Deception,” the author discusses two subsystems, which he denotes S1 and S2, in the brain of a person. What if S1 believes one thing, but S2 believes another? This can’t possibly be. Why? Because “the person cannot, of course, be both S1 and S2.”
I love this, especially the “of course.”
Robert Kurzban, Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind, Princeton, 2012, p. 67
The observant reader may feel at this point that structured procrastination requires a certain amount of self-deception, because one is in effect constantly perpetrating a pyramid scheme on oneself. Exactly. One needs to be able to recognize and commit oneself to tasks with inflated importance and unreal deadlines, while making oneself feel that these tasks are important and urgent. This is not a problem, because virtually all procrastinators have excellent self-deception skills. And what could be more noble than using one character flaw to offset the negative effects of another?
John Perry, The Art of Procrastination: A Guide to Effective Dawdling, Dallying, Lollygagging, and Postponing, New York, 2012, p. 7
The evidence is clear and overwhelming that both the detection of deception and often its propagation have been major forces favoring the evolution of intelligence. It is perhaps ironic that dishonesty has often been the file against which intellectual tools for truth have been sharpened.
Robert Trivers, The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life, New York, 2011, p. 5
Lyndon Johnson was a master of self-justification. According to his biographer Robert Caro, when Johnson came to believe in something, he would believe in it “totally, with absolute conviction, regardless of previous beliefs, or of the facts in the matter.” George Reedy, one of Johnson’s aides, said that he “had a remarkable capacity to convince himself that he held the principles he should hold at any given time, and there was something charming about the air of injured innocence with which he would treat anyone who brought forth evidence that he had held other views in the past. It was not an act… He had a fantastic capacity to persuade himself that the ‘truth’ which was convenient for the present was the truth and anything that conflicted with it was the prevarication of enemies. He literally willed what was in his mind to become reality.” Although Johnson’s supporters found this to be a rather charming aspect of the man’s character, it might well have been one of the major reasons that Johnson could not extricate the country from the quagmire of Vietnam. A president who justifies his actions only to the public might be induced to change them. A president who has justified his actions to himself, believing that he has the truth, becomes impervious to self-correction.
Carol Tavris & Elliot Aronson, Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts, Orlando, Florida, 2007, p. 7
No one thinks that they make bad jokes, but everyone knows some people that do, so there’s an obvious disconnect. Some people consistently make bad jokes, and don’t realize it. You might be one of these.
Tynan, Superhuman Social Skills: A Guide to Being Likeable, Winning Friends, and Building Your Social Circle, 2015
To justify a policy to which one is attached on self-interested or ideological grounds, one can shop around for a causal or statistical model just as one can shop around for a principle. Once it has been found, one can reverse the sequence and present the policy as the conclusion. This process can occur anywhere on the continuum between deception and self-deception (or wishful thinking), usually no doubt closer to the latter.
Jon Elster, Securities Against Misrule: Juries, Assemblies, Elections, Cambridge, 2013, p. 5
Intense focusing on a task can make people effectively blind, even to stimuli that normally attract attention. The most dramatic demonstration was offered by Christopher Chabris and Daniel Simons in their book The Invisible Gorilla. They constructed a short film of two teams passing basketballs, one team wearing white shirts, the other wearing black. The viewers of the film are instructed to count the number of passes made by the white team, ignoring the black players. This task is difficult and completely absorbing. Halfway through the video, a woman wearing a gorilla suit appears, crosses the court, thumps her chest, and moves on. The gorilla is in view for 9 seconds. Many thousands of people have seen the video, and about half of them do not notice anything unusual. It is the counting task—and especially the instruction to ignore one of the teams—that causes the blindness. No one who watches the video without that task would miss the gorilla. Seeing and orienting are automatic functions of System 1, but they depend on the allocation of some attention to the relevant stimulus. The authors note that the most remarkable observation of their study is that people find its results very surprising. Indeed, the viewers who fail to see the gorilla are initially sure that it was not there—they cannot imagine missing such a striking event. The gorilla study illustrates two important facts about our minds: we can be blind to the obvious, and we are also blind to our blindness.
Daniel Kahneman, Thinking, Fast and Slow, New York, 2011, pp. 23-24
Researchers have spent a great deal of time looking at the link between people’s scores on these types of questionnaires and happiness. The findings are as consistent as they are worrying – high scores tend to be associated with feeling unhappy and unsatisfied with life. Of course, this is not the case with every single materialist and so, if you did get a high score, you might be one of the happy-go-lucky people who buck the trend. (However, before assuming this, do bear in mind that research also suggests that whenever we are confronted with negative results from tests, we are exceptionally good at convincing ourselves that we are an exception to the rule.)
Richard Wiseman, 59 Seconds: Think a Little, Change a Lot, London, 2009, pp. 25-26
There are many different techniques for collecting, interpreting, and analyzing facts, and different techniques often lead to different conclusions, which is why scientists disagree about the dangers of global warming, the benefits of supply-side economics, and the wisdom of low-carbohydrate diets. Good scientists deal with this complication by choosing the techniques they consider most appropriate and then accepting the conclusions that these techniques produce, regardless of what those conclusions might be. But bad scientists take advantage of this complication by choosing techniques that are especially likely to produce the conclusions they favour, thus allowing them to reach favoured conclusions by way of supportive facts. Decades of research suggests that when it comes to collecting and analyzing facts about ourselves and our experiences, most of us have the equivalent of an advanced degree in Really Bad Science.
Daniel Gilbert, Stumbling on Happiness, New York, 2005, p. 164
It seems easy enough to “prove” the impossibility of self-deception, but self-deception is nonetheless a pervasive psychological phenomenon, and therefore there must be something wrong with the proof.
John Searle, The Rediscovery of the Mind, Cambridge, Massachusetts, 1992, p. 147
Are the conclusions true? Before I address this issue, I want to observe that it is not clear that they are always intended to be true, that is, to correspond to the actual world. Rather, they sometimes represent a form of science fiction—an analysis of the action and interaction of ideally rational agents, who have never existed and never will. The analysis of ever-more-refined forms of strategic equilibria, for instance, is hardly motivated by a desire to explain or predict the behaviour of actual individuals. Rather, the motivation seems to be an aesthetic one. Two of the most accomplished equilibria theorists, Reinhard Selten and Ariel Rubinstein, have made it quite clear that they do not believe their models have anything to say about the real world. When addressing the workings of the latter, they use some variety of behavioural economics or bounded rationality. To cite another example, social choice theory—the axiomatic study of voting mechanisms—became at one point so mathematically convoluted and so obviously irrelevant to the study of actual politics that one of the most prominent journals in economics, Econometrica, imposed a moratorium on articles in this area.
An interesting question in the psychology and sociology of science is how many secret practitioners there are of economic science fiction—hiding either from themselves or from others the fact that this is indeed what they are practicing. Inventing ingenious mathematical models is a well-paid activity, but except for the likes of Selten and Rubinstein payment will be forthcoming only if the activity can also be claimed to be relevant; hence the incentive for either self-deception or deception. To raise this question might seem out of bounds for academic discourse, but I do not see why it should be. Beyond a certain point, academic norms of politeness ought to be discarded.
Jon Elster, Explaining Social Behavior: More Nuts and Bolts for the Social Sciences, Cambridge, 2007, p. 461