Title: The Righteous Mind: Why Good People Are Divided by Politics and Religion
Author: Jonathan Haidt
Scope: 4 stars
Readability: 4 stars
My personal rating: 5 stars
See more on my book rating system.
If you enjoy this summary, please support the author by buying the book.
Topic of Book
Haidt explores the role that biology and morality play in systems of thought, such as politics and religion.
- Humans are moral animals. We all want to believe that we are moral. More importantly, we want others to believe that we are moral so that we are accepted by a larger group.
- The human mind is based on moral reasoning, not rationality.
- Because common moral values pull a group together, humans evolved moral reasoning to enable cooperation within groups that are competing with other groups to survive.
- Our brain is like an elephant (intuition) carrying a rider (conscious reasoning). The elephant is in control, and the rider functions as a press secretary whose job is to invent rational arguments to justify the elephant’s actions.
- We are all self-righteous hypocrites.
Important Quotes from Book
The human mind is designed to “do” morality, just as it’s designed to do language, sexuality, music.
Our righteous minds made it possible for human beings—but no other animals—to produce large cooperative groups, tribes, and nations without the glue of kinship. But at the same time, our righteous minds guarantee that our cooperative groups will always be cursed by moralistic strife.
The first principle: Intuitions come first, strategic reasoning second. Moral intuitions arise automatically and almost instantaneously, long before moral reasoning has a chance to get started, and those first intuitions tend to drive our later reasoning. If you think that moral reasoning is something we do to figure out the truth, you’ll be constantly frustrated by how foolish, biased, and illogical people become when they disagree with you. But if you think about moral reasoning as a skill we humans evolved to further our social agendas—to justify our own actions and to defend the teams we belong to—then things will make a lot more sense. Keep your eye on the intuitions, and don’t take people’s moral arguments at face value. They’re mostly post hoc constructions made up on the fly, crafted to advance one or more strategic objectives.
The central metaphor of these four chapters is that the mind is divided, like a rider on an elephant, and the rider’s job is to serve the elephant. The rider is our conscious reasoning—the stream of words and images of which we are fully aware. The elephant is the other 99 percent of mental processes—the ones that occur outside of awareness but that actually govern most of our behavior.
The second principle: There’s more to morality than harm and fairness. The central metaphor of these four chapters is that the righteous mind is like a tongue with six taste receptors. Secular Western moralities are like cuisines that try to activate just one or two of these receptors—either concerns about harm and suffering, or concerns about fairness and injustice. But people have so many other powerful moral intuitions, such as those related to liberty, loyalty, authority, and sanctity.
The third principle: Morality binds and blinds. The central metaphor of these four chapters is that human beings are 90 percent chimp and 10 percent bee. Human nature was produced by natural selection working at two levels simultaneously. Individuals compete with individuals within every group, and we are the descendants of primates who excelled at that competition. This gives us the ugly side of our nature, the one that is usually featured in books about our evolutionary origins. We are indeed selfish hypocrites so skilled at putting on a show of virtue that we fool even ourselves.
But human nature was also shaped as groups competed with other groups.
Once you see our righteous minds as primate minds with a hivish overlay, you get a whole new perspective on morality, politics, and religion. I’ll show that our “higher nature” allows us to be profoundly altruistic, but that altruism is mostly aimed at members of our own groups. I’ll show that religion is (probably) an evolutionary adaptation for binding groups together and helping them to create communities with a shared morality.
We are all self-righteous hypocrites.
Western philosophy has been worshipping reason and distrusting the passions for thousands of years. There’s a direct line running from Plato through Immanuel Kant to Lawrence Kohlberg. I’ll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. Morality binds and blinds.
We do moral reasoning not to reconstruct the actual reasons why we ourselves came to a judgment; we reason to find the best possible reasons why somebody else ought to join us in our judgment.
Automatic processes run the human mind, just as they have been running animal minds for 500 million years, so they’re very good at what they do, like software that has been improved through thousands of product cycles. When human beings evolved the capacity for language and reasoning at some point in the last million years, the brain did not rewire itself to hand over the reins to a new and inexperienced charioteer. Rather, the rider (language-based reasoning) evolved because it did something useful for the elephant.
The rider can do several useful things. It can see further into the future (because we can examine alternative scenarios in our heads) and therefore it can help the elephant make better decisions in the present. It can learn new skills and master new technologies, which can be deployed to help the elephant reach its goals and sidestep disasters. And, most important, the rider acts as the spokesman for the elephant, even though it doesn’t necessarily know what the elephant is really thinking. The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. Once human beings developed language and began to use it to gossip about each other, it became extremely valuable for elephants to carry around on their backs a full-time public relations firm.
The social intuitionist model. Intuitions come first and reasoning is usually produced after a judgment is made, in order to influence other people. But as a discussion progresses, the reasons given by other people sometimes change our intuitions and judgments.
The social intuitionist model offers an explanation of why moral and political arguments are so frustrating: because moral reasons are the tail wagged by the intuitive dog. A dog’s tail wags to communicate. You can’t make a dog happy by forcibly wagging its tail. And you can’t change people’s minds by utterly refuting their arguments.
When does the elephant listen to reason? The main way that we change our minds on moral issues is by interacting with other people. We are terrible at seeking evidence that challenges our own beliefs, but other people do us this favor, just as we are quite good at finding errors in other people’s beliefs. When discussions are hostile, the odds of change are slight. The elephant leans away from the opponent, and the rider works frantically to rebut the opponent’s charges.
But if there is affection, admiration, or a desire to please the other person, then the elephant leans toward that person and the rider tries to find the truth in the other person’s arguments. The elephant may not often change its direction in response to objections from its own rider, but it is easily steered by the mere presence of friendly elephants (that’s the social persuasion link in the social intuitionist model) or by good arguments given to it by the riders of those friendly elephants (that’s the reasoned persuasion link).
There are even times when we change our minds on our own, with no help from other people.
In other words, under normal circumstances the rider takes its cue from the elephant, just as a lawyer takes instructions from a client.
But if you force the two to sit around and chat for a few minutes, the elephant actually opens up to advice from the rider and arguments from outside sources. Intuitions come first, and under normal circumstances they cause us to engage in socially strategic reasoning, but there are ways to make the relationship more of a two-way street.
The most important principle for designing an ethical society is to make sure that everyone’s reputation is on the line all the time, so that bad behavior will always bring bad consequences.
Human beings are the world champions of cooperation beyond kinship, and we do it in large part by creating systems of formal and informal accountability. We’re really good at holding others accountable for their actions, and we’re really skilled at navigating through a world in which others hold us accountable for our own.
Accountability increases exploratory thought only when three conditions apply: (1) decision makers learn before forming any opinion that they will be accountable to an audience, (2) the audience’s views are unknown, and (3) they believe the audience is well informed and interested in accuracy.
When all three conditions apply, people do their darnedest to figure out the truth, because that’s what the audience wants to hear. But the rest of the time—which is almost all of the time—accountability pressures simply increase confirmatory thought. People are trying harder to look right than to be right.
Tetlock concludes that conscious reasoning is carried out largely for the purpose of persuasion, rather than discovery. But Tetlock adds that we are also trying to persuade ourselves. We want to believe the things we are about to say to others.
That’s one of the rider’s main jobs: to be the full-time in-house press secretary for the elephant.
People care about their groups, whether those be racial, regional, religious, or political. The political scientist Don Kinder summarizes the findings like this: “In matters of public opinion, citizens seem to be asking themselves not ‘What’s in it for me?’ but rather ‘What’s in it for my group?’ ” Political opinions function as “badges of social membership.”
Most of the bizarre and depressing research findings make perfect sense once you see reasoning as having evolved not to help us find truth but to help us engage in arguments, persuasion, and manipulation in the context of discussions with other people.
My goal is to show you that morality is the key to understanding humanity.
Human beings are conditional hive creatures. We have the ability (under special conditions) to transcend self-interest and lose ourselves (temporarily and ecstatically) in something larger than ourselves. That ability is what I’m calling the hive switch. The hive switch, I propose, is a group-related adaptation that can only be explained by a theory of between-group selection.
The hive switch may be more of a slider switch than an on-off switch, and with a few institutional changes you can create environments that will nudge everyone’s sliders a bit closer to the hive position. For example:
• Increase similarity, not diversity. To make a human hive, you want to make everyone feel like a family. So don’t call attention to racial and ethnic differences; make them less relevant by ramping up similarity and celebrating the group’s shared values and common identity.
• Create healthy competition among teams, not individuals…
Studies show that intergroup competition increases love of the in-group far more than it increases dislike of the out-group… But pitting individuals against each other in a competition for scarce resources (such as bonuses) will destroy hivishness, trust, and morale.
Happiness comes from between. It comes from getting the right relationships between yourself and others, yourself and your work, and yourself and something larger than yourself.
This book explained why people are divided by politics and religion. The answer is not, as Manichaeans would have it, because some people are good and others are evil. Instead, the explanation is that our minds were designed for groupish righteousness. We are deeply intuitive creatures whose gut feelings drive our strategic reasoning. This makes it difficult—but not impossible—to connect with those who live in other matrices, which are often built on different configurations of the available moral foundations.