In some parts of the Brazilian Amazon, indigenous groups still practice infanticide. Children are killed for being disabled, for being twins, or for being born to single mothers. This is undoubtedly a piece of cultural technology that existed to optimize resource distribution under harsh conditions.
Infanticide can be practiced legally because these tribes aren’t bound by Brazilian law. Under Brazilian legislation, indigenous tribes are subject to the country’s laws in proportion to how much they interact with the state; remote Amazonian groups are effectively exempt from all of them.
Reformers, led mostly by disabled indigenous people who escaped infanticide themselves, along with evangelicals, are trying to change this. They are pushing for a law that would outlaw infanticide, register pregnancies and birth outcomes, and punish people who fail to report infanticide.
Now I know that I have in the past written about using the outside view in cases like these. Historically, outsiders deciding they know what is best for indigenous people has not ended particularly well. In general, this argues for avoiding meddling in cases like this. Despite that, if I lived in Brazil, I would support this law.
When thinking about public policies, it’s important to think about the precedents they set. Opposing a policy like this, even when you have very good reasons, sends a message to the vast majority of the population, a population that views infanticide as wrong (and not just wrong, but a special evil). It says: “we don’t care about what is right or wrong, we’re moral relativists who think anything goes if it’s someone’s culture.”
There are several things to unpack here. First, there are the direct effects on the credibility of the people defending infanticide. When you’re advocating for something that most people view as clearly wrong, something so beyond the pale that you have no realistic chance of ever convincing anyone, you’re going to see some resistance to the next issue you take up, even if it isn’t beyond the pale. If the same academics defending infanticide turn around and try to convince people to accept human rights for trans people, they’ll find themselves with limited credibility.
Critically, this doesn’t happen with a cause where it’s actually possible to convince people that you are standing up for what is right. Gay rights campaigners haven’t been cut out of the general cultural conversation. On the contrary, they’ve been able to parlay some of their success and credibility from being ahead of the curve to help in related issues, like trans rights.
There’s no (non-apocalyptic) future where the people of Brazil eventually wake up okay with infanticide and laud the campaigners who stood up for it. But the people of Brazil are likely to wake up in the near future and decide they can’t ever trust the morals of academics who advocated for infanticide.
Second, it’s worth thinking about how people’s experience of justice colours their view of the government. When the government permits what is (to many) a great evil, people lose faith in the government’s ability to be just. This inhibits the government’s traditional role as solver of collective action problems.
We can actually see this manifest several ways in current North American politics, on both the right and the left.
On the left, there are many people who are justifiably mistrustful of the government, because of its historical or ongoing discrimination against them or people who look like them. This is why the government can credibly lock up white granola-crowd parents for failing to treat their children with medically approved medicines, but can’t when the parents are indigenous. It’s also why many people of colour don’t feel comfortable going to the police when they see or experience violence.
In both cases, historical injustices hamstring the government’s ability to achieve outcomes that it might otherwise be able to achieve if it had more credibly delivered justice in the past.
On the right, I suspect that some amount of skepticism of government comes from legalized abortion. The right is notoriously mistrustful of the government and I wonder if this is because it cannot believe that a government that permits abortion can do anything good. Here this hurts the government’s ability to pursue the sort of redistributive policies that would help the worst off.
In the case of abortion, the very real and pressing need for some women to access it is enough for me to view it as net positive, despite its negative effect on some people’s ability to trust the government to solve coordination problems.
Discrimination causes harm on its own and isn’t even justified on its own “merits”. Its effect on people’s perceptions of justice is just another reason it should be fought against.
In the case of Brazil, we’re faced with an act that is negative (infanticide) with several plausible alternatives (e.g. adoption) that allow the cultural purpose to be served without undermining justice. While the historical record of these types of interventions in indigenous cultures should give us pause, this is counterbalanced by the real harm done to people’s faith in justice as long as infanticide is allowed to continue. Given this, I think the correct and utilitarian thing to do is to support the reformers’ effort to outlaw infanticide.
The Righteous Mind follows an argument structure I learned in high school debate club. It tells you what it’s going to tell you, it tells you it, then it reminds you what it told you. This made it a really easy read and a welcome break from The Origins of Totalitarianism, the other book I’ve been reading. Practically the very first part of The Righteous Mind proper (after the foreword) is an introduction to its first metaphor.
Imagine an elephant and a rider. They have travelled together since their birth and move as one. The elephant doesn’t say much (it’s an elephant), but the rider is very vocal – for example, she’s quick to apologize and explain away any damage the elephant might do. A casual observer might think the rider is in charge, because she is so much cleverer and more talkative, but that casual observer would be wrong. The rider is the press secretary for the elephant. She explains its action, but it is much bigger and stronger than her. It’s the one who is ultimately calling the shots. Sometimes she might convince it one way or the other, but in general, she’s buffeted along by it, stuck riding wherever it goes.
She wouldn’t agree with that last part though. She doesn’t want to admit that she’s not in charge, so she hides the fact that she’s mainly a press secretary even from herself. As soon as the elephant begins to move, she is already inventing a reason why it was her idea all along.
This is how Haidt views human cognition and decision making. In common terms, the elephant is our unconscious mind and the rider our consciousness. In Kahneman’s terms, the elephant is our System 1 and the rider our System 2. We may make some decisions consciously, but many of them are made below the level of our thinking.
Haidt illustrates this with an amusing anecdote. His wife asks him why he didn’t finish some dishes he’d been doing and he immediately weaves a story of their crying baby and barking incontinent dog preventing him. Only because he had his book draft open on his computer did he realize that these were lies… or rather, a creative and overly flattering version of the truth.
The baby did indeed cry and the dog did indeed bark, but neither of these prevented him from doing the dishes. The cacophony happened well before that. He’d been distracted by something else, something less sympathetic. But his rider, his “internal press secretary”, immediately came up with an excuse and told it, without any conscious input or intent to deceive.
We all tell these sorts of flattering lies reflexively. They take the form of slight, harmless embellishments to make our stories more flattering or interesting, or our apologies more sympathetic.
The key insight here isn’t that we’re all compulsive liars. It’s that the “I” that we like to think exists to run our life doesn’t, really. Sometimes we make decisions, especially ones the elephant doesn’t think it can handle (high stakes apologies, anyone?), but normally decisions happen before we even think about them. From Haidt’s perspective, “I” is really “we”: the elephant and its rider. And we need to be careful to give the elephant its due, even though it’s quiet.
Haidt devotes a lot of pages to an impassioned criticism of moral rationalism, the belief that morality is best understood and attained by thinking very hard about it. He explicitly mentions that to make this more engaging, he wraps it up in his own story of entering the field of moral psychology.
He starts his journey with Kohlberg, who published a famous account of the stages of moral reasoning, stages that culminate in rationally building a model of justice. This paradigm took the world of moral psychology by storm and reinforced the view (dating in Western civilization to the time of the Greeks) that right thought had to precede right action.
Haidt was initially enamoured with Kohlberg’s taxonomy. But reading ethnographies and doing research in other countries began to make him suspect things weren’t as simple as Kohlberg thought. Haidt and others found that moral intuitions and responses to dilemmas differed by country. In particular, WEIRD people (people from countries that were Western, Educated, Industrialized, Rich, and Democratic, and most especially the most educated people in those countries) were very much able to tamp down feelings of disgust in moral problems, in a way that seemed far from universal.
For example, if asked if it was wrong for a family to eat their dog if it was killed by a car (and the alternative was burying it), students would say something along the lines of “well, I wouldn’t, but it’s gross, not wrong”. Participants recruited at a nearby McDonalds gave a rather different answer: “of course it’s wrong, why are you even asking”. WEIRD students at prestigious universities may have been working towards a rational, justice-focused explanation for morality, but Haidt found no evidence that this process (or even a focus on “justice”) was as universal as Kohlberg claimed.
That’s not to say that WEIRD students had no disgust response. In fact, trying to activate it gave even more interesting results. When asked to justify answers where disgust overpowered students’ sense of “well, as long as no one was hurt” (e.g. consensual adult sibling incest with no chance of children), Haidt observed that people would throw up a variety of weak excuses, often before they had a chance to think the problem through. When confronted with the weakness of their arguments, they’d go speechless.
This made Haidt suspect that two entirely separate processes were going on. There was a fast one for deciding and a slower one for explaining. Furthermore, the slower process was often left holding the bag for the faster one. Intuitions would provide an answer, then the subject would have to explain it, no matter how logically indefensible it was.
Haidt began to believe that Kohlberg had only keyed in on the second, slower process, “the talking of the rider” in metaphor-speak. From this point of view, Kohlberg wasn’t measuring moral sophistication. He was instead measuring how fluidly people could explain their often less than logical moral intuitions.
There were two final nails in the coffin of ethical rationalism for Haidt. First, he learned of a type of brain injury that separated people from their moral intuitions (or, as the rationalists might call them, “passions”). Contrary to the rationalist expectation, these people’s lives went to hell: they alienated everyone they knew, got fired from their jobs, and in general proved the unsuitability of pure reason for making many types of decisions.
Second, he saw research that suggested that in practical measures (like missing library books), moral philosophers were no more moral than other philosophy professors.
Abandoning rationalism brought Haidt to a sentimentalist approach to ethics. In this view, ethics stems from feelings about how the world ought to be. These feelings are innate, but not immutable. Haidt describes people as “prewired”, not “hardwired”. You might be “prewired” to have a strong loyalty foundation, but a series of betrayals and letdowns early in life might convince you that loyalty is just a lie, told to control idealists.
Haidt also believes that our elephants are uniquely susceptible to being convinced by other people in face to face discussion. He views the mechanism here as empathy at least as much as logic. People that we trust and respect can point out our weak arguments, with our respect for them and positive feelings towards them being the main motive force for us listening to these criticisms. The metaphor with elephants kind of breaks down here, but this does seem to better describe the world as it is, so I’ll allow it.
Because of this, Haidt would admit that rationalism does have some purpose in moral reasoning, but he thinks it is ancillary and mainly used to convince other people. I’m not sure how testable making evolutionary conclusions about this is, but it does seem plausible for there to be selection pressure to make us really good at explaining ourselves and convincing others of our point of view.
As Haidt took this into account and began to survey peoples’ moral instincts, he saw that the ways in which responses differed by country and class were actually highly repeatable and seemed to gesture at underlying categories of people. After analyzing many, many survey responses, he and his collaborators came up with five (later six) moral “modules” that people have. Each moral module looks for violations of a specific class of ethical rules.
Haidt likens these modules to our taste-buds. The six moral tastes are the central metaphor of the second section of the book.
Not everyone has these taste-buds/modules in equal proportion. Looking at commonalities among respondents, Haidt found that the WEIRDer someone was, the less likely they were to have certain modules. Conservatives tended to have all modules in a fairly equal proportion, liberals tended to be lacking three. Libertarians were lacking a whopping four, which might explain why everyone tends to believe they’re the worst.
The six moral foundations are:
Care/Harm

This is the moral foundation that makes us care about suffering and pain in others. Haidt speculates that it originally evolved in order to ensure that children (which are an enormous investment of resources for mammals and doubly so for us) got properly cared for. It was originally triggered only by the suffering or distress of our own children, but can now be triggered by anyone being hurt, as well as cute cat videos or baby seals.
An expanding set of triggers seems to be a common theme for these. I’ve personally speculated that this would perhaps be observed if the brain was wired for minimizing negative predictive error (i.e. not mistaking a scene in which there is a lion for a scene without a lion), rather than positive predictive error (i.e. not mistaking a scene without a lion for a scene with a lion). If you minimize positive predictive error, you’ll never be frightened by a shadow, but you might get eaten by a lion.
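As a toy illustration of that asymmetry (my own sketch, with made-up costs, not anything from the book): a detector that minimizes expected cost will fire on very weak evidence whenever a miss is far costlier than a false alarm.

```python
# Sketch of the asymmetric-error argument. With hypothetical costs,
# a cost-minimizing detector alarms on weak evidence whenever misses
# are far more expensive than false alarms.

def should_alarm(p_lion, cost_miss, cost_false_alarm):
    """Alarm when the expected cost of staying quiet exceeds the
    expected cost of a (possibly needless) alarm."""
    return p_lion * cost_miss > (1 - p_lion) * cost_false_alarm

# Hypothetical numbers: a missed lion is 1000x worse than a needless scare.
print(should_alarm(p_lion=0.01, cost_miss=1000, cost_false_alarm=1))  # True
# With symmetric costs, a 1% chance of a lion isn't worth jumping at.
print(should_alarm(p_lion=0.01, cost_miss=1, cost_false_alarm=1))     # False
```

Under this framing, being "frightened by shadows" is the rational price of never being eaten.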
Fairness/Cheating

This is the moral foundation that makes us want everyone to do their fair share and makes us want to punish tax evaders or welfare cheats (depending on our political orientation). The evolutionary story given for this one is that it evolved to allow us to reap the benefits of two-way partnerships; it was an incentive against defecting.
Loyalty/Betrayal

This is the foundation that makes us rally around our politicians, community leaders, and sports teams, as well as the foundation that makes some people care more about people from their country than people in general. Haidt’s evolutionary explanation for this one is that it was supposed to ensure coherent groups.
Authority/Subversion

This is the moral foundation that makes people obey their boss without talking back or avoid calling their parents by their first names. It supposedly evolved to allow us to forge beneficial relationships within hierarchies. Basically, it may have once been very useful to have people believe and obey their elders without question (e.g. when the elders say “don’t drink that water, it’s poisoned”, no one does, and this story can be passed down and keep people safe, without someone having to die every few years to prove that the water is indeed poisoned).
Sanctity/Degradation

This is the moral foundation that makes people on the right leery of pre-marital sex and people on the left leery of “chemicals”. It shows up whenever we view our bodies as more than just our bodies and the world as more than just a collection of things, as well as whenever we feel that something makes us “spiritually” dirty.
The very plausible explanation for this one is that it evolved in response to the omnivore’s dilemma: how do we balance the desire for novel food sources with the risk they might poison us? We do it by avoiding anything that looks diseased or rotted. This became a moral foundation as we slowly began applying it to stuff beyond food – like other people. Historically, the sanctity moral framework was probably responsible for the despised status of lepers.
Liberty/Oppression

This moral foundation is always in tension with Authority/Subversion. It’s the foundation that makes us want to band together against and cast down anyone who is aggrandizing themselves or using their power to mistreat another.
Haidt suggests that this evolved to allow us to band together against “alpha males” and check their power. In his original surveys, it was part of Fairness/Cheating, but he found that separating it gave him much more resolving power between liberals and conservatives.
Of these six foundations, Haidt found that libertarians only had an appreciable amount of Liberty/Oppression and Fairness/Cheating and of these two, Liberty/Oppression was by far the stronger. While the other foundations did exist, they were mostly inactive and only showed up under extreme duress. For liberals, he found that they had Care/Harm, Liberty/Oppression, and Fairness/Cheating (in that order).
Conservatives in Haidt’s survey had all six moral foundations, like I said above. Care/Harm was their strongest foundation, but by having appreciable amounts of Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation, they would occasionally overrule Care/Harm in favour of one or another of these foundations.
Haidt uses these moral foundations to give an account of the “improbable” coalition between libertarians and social conservatives that closely matches the best ones to come out of political science. Basically, liberals and libertarians are descended (ideologically, if not filially) from those who embraced the enlightenment and the liberty it brought. About a hundred years ago (depending on the chronology and the country), the descendants of the enlightenment had a great schism, with some continuing to view the government as the most important threat to liberty (libertarians) and others viewing corporations as the more pressing threat (liberals). Liberals took over many auspices of the government and have been trying to use it to guarantee their version of liberty (with mixed results and many reversals) ever since.
Conservatives do not support this project of remaking society from the top down via the government. They believe that liberals want to change too many things, too quickly. Conservatives aren’t opposed to the government qua government. In fact, they’d be very congenial to a government that shared their values. But they are very hostile to a liberal, activist government (which is rightly or wrongly how conservatives view the governments of most western nations) and so team up with libertarians in the hopes of dismantling it.
It is an attraction to murder and treason which hides behind such perverted tolerance, for in a moment it can switch to a decision to liquidate not only all actual criminals but all who are “racially” predestined to commit certain crimes. Such changes take place whenever the legal and political machine is not separated from society so that social standards can penetrate into it and become political and legal rules. The seeming broad-mindedness that equates crime and vice, if allowed to establish its own code of law, will invariably prove more cruel and inhuman than laws, no matter how severe, which respect and recognize man’s independent responsibility for his behavior.
That said, it is possible for inconvenient or dangerous things to be true and their inconvenience or danger has no bearing on their truth. If Haidt saw his writings being used to justify or promote violence, he’d have a moral responsibility to decry the perpetrators. Accepting that sort of moral responsibility is, I believe, part of the responsibility that scientists who deal with sensitive topics must accept. I do not believe that this responsibility precludes publishing. I firmly believe that only right information can lead to right action, so I am on the whole grateful for Haidt’s taxonomy.
The similarities between liberals and libertarians extend beyond ethics. Both have more openness to experience and less of a threat response than conservatives. This explains why socially, liberals and libertarians have much more in common than liberals and conservatives.
The third and final section of The Righteous Mind further focuses on political tribes. Its central metaphor is that humans are “90% chimp, 10% bee”. Its central purpose is to show how humans might have been subject to group selection and how our groupishness is important to our morality.
Haidt claims that group selection is heresy in evolutionary biology (beyond hive insects). I don’t have the evolutionary biology background to say if this is true or not, although this does match how I’ve seen it talked about online among scientifically literate authors, so I’m inclined to believe him.
Haidt walks through the arguments against group selection and shows how they are largely sensible. It is indeed ridiculous to believe that genes for altruism could be preserved in most cases. Imagine a gene that would make a deer more likely to sacrifice itself for the good of the herd if that seemed the only way to protect the herd’s young. This gene might help more deer in the herd attain adulthood, but it would also lead to any deer carrying it having fewer children. There’s certainly an advantage to the herd if some members have this gene, but there’s no advantage to the carriers and a lot of advantage to every deer in the herd that doesn’t carry it. Free-riders will outcompete sacrificers and the selfless gene will get culled from the herd.
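The free-rider argument can be sketched as a toy replicator model (my own illustration, with made-up fitness numbers, not anything from the book): carriers of a “sacrifice” gene pay a personal cost while the benefit is shared by the whole herd, so their frequency declines generation after generation even as everyone enjoys the benefit.

```python
# Toy replicator dynamics for the free-rider problem. Carriers pay a
# personal fitness cost c; the group benefit b*p is shared by carriers
# and free-riders alike, so carriers are always at a relative disadvantage.

def next_freq(p, b=0.5, c=0.1):
    group_benefit = b * p              # everyone in the herd enjoys this
    w_carrier = 1 + group_benefit - c  # carriers also pay the cost
    w_free_rider = 1 + group_benefit
    mean_w = p * w_carrier + (1 - p) * w_free_rider
    return p * w_carrier / mean_w      # carrier frequency next generation

p = 0.5
for _ in range(200):
    p = next_freq(p)
print(round(p, 4))  # the sacrifice gene is steadily culled toward zero
```

No matter how large the shared benefit `b` is, any positive personal cost `c` makes carriers strictly less fit than their neighbours, which is exactly the within-group logic Haidt summarizes.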
But humans aren’t deer. We can be selfish, yes, but we often aren’t and the ways we aren’t can’t be simply explained by greedy reciprocal altruism. If you’ve ever taken some time out of your day to help a lost tourist, congratulations, you’ve been altruistic without expecting anything in return. That people regularly do take time out of their days to help lost tourists suggests there might be something going on beyond reciprocal altruism.
Humans, unlike deer, have the resources and ability to punish free riders. We expect everyone to pitch in and might exile anyone who doesn’t. When humans began to form larger and larger societies, it makes sense that the societies who could better coordinate selfless behaviour would do better than those that couldn’t. And this isn’t just in terms of military cohesion (as the evolutionary biologist Lesley Newson had to point out to Haidt). A whole bunch of little selfless acts – sharing food, babysitting, teaching – can make a society more efficient than its neighbours at “turning resources into offspring”.
A human within the framework of society is much more capable than a human outside of it. I am only able to write this and share it widely because a whole bunch of people did the grunt work of making the laptop I’m typing it on, growing the food I eat, maintaining our communication lines, etc. If I was stuck with only my own resources, I’d be carving this into the sand (or more likely, already eaten by wolves).
Therefore, it isn’t unreasonable to expect that the more successful and interdependent a society could become, the more it would be able to outcompete, whether directly or indirectly, its nearby rivals, and so increase the proportion of its conditionally selfless genes in the human gene pool.
Conditional selflessness is a better description of the sorts of altruism we see in humans. It’s not purely reciprocal as Dawkins might claim, but it isn’t boundless either. It’s mostly reserved for people we view as similar to us. This doesn’t need to mean racially or religiously. In my experience, a bond as simple as doing the same sport is enough to get people to readily volunteer their time for projects like digging out and repairing a cracked foundation.
The switch from selfishness to selflessly helping out our teams is called “the hive switch” by Haidt. He devotes a lot of time to exploring how we can flip it and the benefits of flipping it. I agree with him that many of the happiest and most profound moments of anyone’s life come when the switch has been activated and they’re working as part of a team.
The last few chapters are an exploration of how individualism can undermine the hive switch and several mistakes liberals make in their zeal to overturn all hierarchies. Haidt believes that societies have both social capital (the bonds of trust between people) and moral capital (the society’s ability to bind people to collective values) and worries that liberal individualism can undermine these to the point where people will be overall worse off. I’ll talk more about moral capital later in the review.
II – On Shaky Foundations
Anyone who reads The Righteous Mind might quickly realize that I left a lot of the book out of my review. There was a whole bunch of supporting evidence about how liberals and conservatives “really are” or how they differ that I have deliberately omitted.
You may have heard that psychology is currently in the midst of a “replication crisis”. Much (I’d crudely estimate somewhere between 25% and 50%) of the supporting evidence in this book has been a victim of this crisis.
Here’s an incomplete list of claims that didn’t replicate:
IAT tests show that we can have unconscious prejudices that affect how we make social and political judgements (1, 2, 3 critiques/failed replications). Used to buttress the elephant/rider theory of moral decisions.
Disgusting smells can make us more judgemental (failed replication source). Used as evidence that moral reasoning can sometimes be explained by external factors and is much less rational than we’d like to believe.
Babies prefer a nice puppet over a mean one, even when pre-verbal and probably lacking the context to understand what is going on (failed replication source). Used as further proof for how we are “prewired” for certain moral instincts.
People from Asian societies are better able to do relative geometry and less able to do absolute geometry than westerners (failed replication source). This was used to make the individualistic morality of westerners seem inherent.
The “Lady Macbeth Effect” showed a strong relationship between physical and moral feelings of “cleanliness” (failed replication source). Used to further strengthen the elephant/rider analogy.
The proper attitude with which to view psychology studies these days is extreme scepticism. There are a series of bad incentives (it’s harder and less prestigious to publish negative findings; publishing is necessary to advance in your career) that have led scientists in psychology (and other fields) to inadvertently and advertently publish false results. In any field in which you expect true discoveries to be rare (and I think “interesting and counter-intuitive things about the human brain” fits that bill), you shouldn’t allow any individual study to influence you very much. For a full breakdown of how this can happen even when scientists check for statistical significance, I recommend reading “Why Most Published Research Findings Are False” (Ioannidis 2005).
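Ioannidis’s core point is simple arithmetic: if true hypotheses are rare, then even with the conventional significance threshold and decent statistical power, a large share of published positive findings will be false. A quick sketch (the 10% prior on true hypotheses is my own illustrative assumption, not a figure from the paper):

```python
# Illustrative arithmetic behind the Ioannidis argument. Even a field
# that honestly applies alpha = 0.05 with 80% power produces many false
# positives among its "findings" when true hypotheses are rare.

def false_discovery_rate(prior_true, alpha=0.05, power=0.80):
    true_positives = prior_true * power          # real effects, detected
    false_positives = (1 - prior_true) * alpha   # null effects, "detected"
    return false_positives / (true_positives + false_positives)

# Assumed prior: only 10% of tested hypotheses are actually true.
print(round(false_discovery_rate(0.10), 3))  # 0.36: over a third of positive findings are false
```

The lower the base rate of true hypotheses in a field, the worse this gets, which is why “interesting and counter-intuitive” results deserve extra scepticism.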
Moral foundations theory appears to have escaped the replication crisis mostly unscathed (as has Tversky and Kahneman’s work on heuristics, something that made me more comfortable including the elephant/rider analogy). I think this is because moral foundations theory is primarily a descriptive theory. It grew out of a large volume of survey responses and represents clusters in those responses. It makes little in the way of concrete predictions about the world. It’s possible to quibble with the way Haidt and his collaborators drew the category boundaries. But given the sheer volume of responses they received – and the fact that they based their results not just on WEIRD individuals – it’s hard to believe that they haven’t come up with a reasonable clustering of the possibility space of human values.
I will say that stripped of much of its ancillary evidence, Haidt’s attack on rationalism lost a lot of its lustre. It’s one thing to believe morality is mostly unconscious when you think that washing your hands or smelling trash can change how moral you act. It’s quite another when you know those studies were fatally flawed. The replication crisis fueled my inability to truly believe Haidt’s critique of rationality. This disbelief in turn became one of the two driving forces in my reaction to this book.
Haidt’s moral relativism around patriarchal cultures was the other.
III – Less and Less WEIRD
It’s good that Haidt looked at a variety of cultures. This is a thing few psychologists do. There’s historically been an alarming tendency to run studies on western undergraduate students, then declare “this is how people are”. This would be fine if western undergraduates were representative of people more generally, but I think that assumption was on shaky foundations even before moral foundations theory showed that morally, at least, it was entirely false.
Haidt even did some of this field work himself. He visited South America and India to run studies. In fact, he mentioned that this field work was one of the key things that made him question the validity of western individualistic morality and wary of morality that didn’t include the sanctity, loyalty, and authority foundations.
His willingness to get outside of his bubble and to learn from others is laudable.
There is one key way in which Haidt never left his bubble, a way which makes me inherently suspicious of all of his defences of the sanctity, authority, and loyalty moral foundations. Here’s him recounting his trip to India. Can you spot the fatal omission?
I was told to be stricter with my servants, and to stop thanking them for serving me. I watched people bathe in and cook with visibly polluted water that was held to be sacred. In short, I was immersed in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine.
It only took a few weeks for my dissonance to disappear, not because I was a natural anthropologist but because the normal human capacity for empathy kicked in. I liked these people who were hosting me, helping me, and teaching me. Wherever I went, people were kind to me. And when you’re grateful to people, it’s easier to adopt their perspective. My elephant leaned toward them, which made my rider search for moral arguments in their defense. Rather than automatically rejecting the men as sexist oppressors and pitying the women, children, and servants as helpless victims, I began to see a moral world in which families, not individuals, are the basic unit of society, and the members of each extended family (including its servants) are intensely interdependent. In this world, equality and personal autonomy were not sacred values. Honoring elders, gods, and guests, protecting subordinates, and fulfilling one’s role-based duties were more important.
Haidt tried out other moral systems, sure, but he tried them out from the top. Lois McMaster Bujold once had a character quip: “egalitarians adjust to aristocracies just fine, as long as they get to be the aristocrats”. I would suggest that liberals likewise find the authority framework all fine and dandy, as long as they have the authority.
Would Haidt have been able to find anything worth salvaging in the authority framework if he’d instead been a female researcher, who found herself ignored, denigrated, and sexually harassed on her research trip abroad?
It’s frustrating to watch Haidt lecture liberals on their “deficient” moral framework while simultaneously failing to grapple with the fact that he is remarkably privileged. “Can’t you see how this other society knows some moral truths [like men holding authority over women] that we’ve lost” is much less convincing when the author of the sentence stands to lose absolutely nothing in the bargain. It’s easy to lecture others on the hard sacrifices society “must” make – and far harder to look for sacrifices that will mainly affect you personally.
It is in this regard that I found myself wondering if this might have been a more interesting book if it had been written by a woman. If the hypothetical female author were to defend the authority framework, she’d actually have to defend it, instead of hand-waving the defence with a request that we respect and understand all ethical frameworks. And if this hypothetical author found it indefensible, we would have been treated to an exploration of what to do if one of our fundamental ethical frameworks was flawed and had to be discarded. That would be an interesting conversation!
Not only that, but perhaps a female author would have fully explored the observation that women and children’s role in societal altruism was just as important as that of men (as child-rearing is a more reliable way to demonstrate and cash in on groupishness than battle), instead of relegating it to a brief note at the end of the chapter on group selection. This perspective is genuinely new to me and I wanted to see it developed further.
Ultimately, Haidt’s defences of Authority/Subversion, Loyalty/Betrayal, and Sanctity/Degradation fell flat in the face of my Care/Harm and Liberty/Oppression focused moral compass. Scott Alexander once wrote about the need for “a solution to the time-limitedness of enlightenment that works from within the temporal perspective”. By the same token, I think Haidt fails to deliver a defence of conservatism or anything it stands for that works from within the liberal Care/Harm perspective. Insofar as his book was meant to bridge inferential gaps and political divides, this makes it a failure.
That’s a shame, because arguments that bridge this divide do exist. I’ve read some of them.
IV – What if Liberals are Wrong?
There is a principle called “Chesterton’s Fence”, which comes from the famed Catholic conservative and author G.K. Chesterton. It goes like this: if you see a fence blocking the road and cannot see the reason for it to be there, should you remove it? Chesterton said “no!”, resoundingly. He suggested you should first understand the purpose of the fence. Only then may you safely remove it.
There is a strain of careful conservatism that holds Chesterton’s fence as its dearest parable. Haidt makes brief mention of this strain of thought, but doesn’t expound on it successfully. I think it is this thought and this thought only that can offer Care/Harm focused liberals like myself a window into the redeeming features of the conservative moral frameworks.
Here’s what the argument looks like:
Many years ago, western nations had a unified moral framework. This framework supported people in making long-term decisions and acting in a pro-social manner. Many people want to act differently than they would if left to their own devices, and this framework helped them do that.
Liberals began to dismantle this system in the sixties. They saw hierarchies and people being unable to do the things they wanted to do, so they tried to take down the whole edifice without first checking if any of it was doing anything important.
Here’s the thing. All of these trends affect well educated and well-off liberals the least. We’re safe from crime in good neighbourhoods. We overwhelmingly wait until stable partnerships to have children. We can afford therapists and pills to help us with any mental health issues we might have; rehab to help us kick any drug habits we pick up.
Throwing off the old moral matrix has been an unalloyed good for privileged white liberals. We get to have our cake and eat it too – we have fun, take risks, but know that we have a safety net waiting to catch us should we fall.
The conservative appeal to tradition points out that our good time might be at the expense of the poor. It asks us if our hedonistic pleasures are worth a complete breakdown in stability for people with fewer advantages than us. It asks us to consider sacrificing some of these pleasures so that they might be better off. I know many liberals who might find the sacrifice of some of their freedom to be a moral necessity, if framed this way.
But even here, social conservatism has the seeds of its own undoing. I can agree that children do best when brought up by loving and committed parents who give them a lot of stability (moving around in childhood is inarguably bad for many kids). Given this, the social conservative opposition to gay marriage (despite all evidence that it doesn’t mess kids up) is baffling. The sensible position would have been “how can we use this to make marriage cool again?”, not “how long can we delay this?”.
This is a running pattern with social conservatism. It conserves blindly, without giving thought to what is even worth preserving. If liberals have some things wrong, that doesn’t automatically mean that the opposite is correct. It’s disturbingly easy for people on both sides of an issue to be wrong.
I’m sure Haidt would point out that this is why we have the other frameworks. But because of who I am, I’m personally much more inclined to do things in the other direction – throw out most of the past, then re-implement whatever we find to be useful but now lacking.
V – What if Liberals Listened?
In Berkeley, California, its environs, and assorted corners of the Internet, there exists a community that calls themselves “Rationalists”. This moniker is despite the fact that they agree with Haidt as to the futility of rationalism. Epistemically, they tend to be empiricists. Ethically, non-cognitivist utilitarians. Because they are largely Americans, they tend to be politically disengaged, but if you held them at gunpoint and demanded they give you a political affiliation, they would probably either say “liberal” or “libertarian”.
The rationalist community has semi-public events that mimic many of the best parts of religious events, normally based around the solstices (although I also attended a secular Seder when I visited last year).
The rationalist community has managed to do the sort of thing Haidt despaired of: create a strong community with communal morality in a secular, non-authoritarian framework. There are communal norms (although they aren’t very normal; polyamory and vegetarianism or veganism are very common). People tend to think very hard before having children and take care to ensure that any children they have will have a good extended support structure. People live in group houses, which combats atomisation.
This is also a community that is very generous. Many of the early adherents of Effective Altruism were drawn from the rationalist community. It’s likely that rationalists donate to charity in amounts more similar to Mormons than atheists (with the added benefit of almost all of this money going to saving lives, rather than proselytizing).
No community is perfect. This is a community made up of people. It has its fair share of foibles and megalomanias, bad actors and jerks. But it represents something of a counterpoint to Haidt’s arguments about the “deficiency” of a limited framework morality.
Furthermore, its altruism isn’t limited in scope, the way Haidt believes all communal altruism must necessarily be. Rationalists encourage each other to give to causes like malaria eradication (which mainly helps people in Africa), or AI risk (which mainly helps future people). Because there are few cost effective local opportunities to do good (for North Americans), this global focus allows for more lives to be saved or improved per dollar spent.
All of this is, I think, the natural result of thoughtful people throwing away most cultural traditions and vestiges of traditionalist morality, then seeing what breaks and fixing those things in particular. It’s an example of what I wished for at the end of the last section, applied to the real world.
VI – Is or Ought?
I hate to bring up the Hegelian dialectic, but I feel like this book fits neatly into it. We had the thesis: “morality stems from rationality” that was so popular in western political thought. Now we have the antithesis: “morality and rationality are separate horses, with rationality subordinate – and this is right and proper”.
I can’t wait for someone other than Haidt to write a synthesis; a view that rejects rationalism as the basis of human morality but grapples with the fact that we yearn for perfection.
Haidt, in the words of Joseph Heath, thinks that moral discourse is “essentially confabulatory”, consisting only of made up stories that justify our moral impulses. There may be many ways in which this is true, but it doesn’t account for the fact that some people read Peter Singer’s essay “Famine, Affluence, and Morality” and go donate much of their money to the global poor. It doesn’t account for all those who have listened to the Sermon on the Mount and then abandoned their possessions to live a monastic life.
I don’t care whether you believe in The Absolute, or God, or Allah, or The Cycle of Rebirth, or the World Soul, or The Truth, or nothing at all. You probably have felt that very human yearning to be better. To do better. You’ve probably believed that there is a Good and it can perhaps be comprehended and reached. Maybe this is the last vestiges of my atrophied sanctity foundation talking, but there’s something base about believing that morality is solely a happy accident of how we evolved.
The is/ought fallacy occurs when we take what “is” and decide it is what “ought” to be. If you observe that murder is part of the natural order and conclude that it is therefore moral, you have committed this fallacy.
Haidt has observed the instincts that build towards human morality. His contributions to this field have helped make many things clear and make many conflicts more understandable. But in deciding that these natural tastes are the be-all and end-all of human morality, by putting them ahead of reason, religion, and every philosophical tradition, he has committed this fundamental error.
At the start of The Righteous Mind, Haidt approvingly mentions those scientists who once thought that ethics could be taken away from philosophers and studied instead by scientists.
But science can only ever tell us what is, never what ought to be. As a book about science, The Righteous Mind is a success. But as a work on ethics, as an expression of how we ought to behave, it is an abysmal failure.
In this area, the philosophers deserve to keep their monopoly a little longer.
The nagging question that both halves of Utilitarianism: For and Against left me with is: “can utilitarianism exist without veering off into total assessment?”
Total assessment is the direct comparison of all the consequences of different actions. It is not so much a prediction that an individual can make as it is the province of an omniscient god. If you cannot perfectly predict all of the future, you cannot perform a total assessment. It’s conceptually useful – whenever a utilitarian is backed into a corner, they can fall back on total assessment as their decision-making tool – but it’s practically useless.
Absent total assessment, utilitarians kind of have to make their best guess and go with it. Even my beloved precedent utilitarianism isn’t much help here; precedent utilitarianism focuses on a class of consequences that traditional utilitarianism can miss. It does little to help an individual figure out all of the consequences of their actions.
If the outcomes of actions are hard to guess, or if the guessing is prohibitively time-consuming, what is the utilitarian to do? One appealing option is a distinctly utilitarian virtue ethics. This virtue ethics would define a good life as one lived with the virtues that cause you to make optimific decisions.
I think it is possible for such a system to maintain a distinctly utilitarian character and thereby avoid Williams’ prediction that utilitarianism must, if accepted, “usher itself from the scene.”
The first distinct characteristic of a utilitarian virtue ethics would be its heterogeneity. Classical virtue ethics holds that there are a set of virtues that can cause one to live a good life. The utilitarian would instead seek to cultivate the virtues that would cause her to act in an optimific way. These would necessarily be individualized; it may very well be optimific for an ambitious and clever utilitarian to cultivate greed and drive while acquiring a fortune, then cultivate charity while giving it away (see Bill Gates).
There is the obvious danger here that cultivating temporarily anti-utilitarian virtues could lead to permanent values drift. The best countermeasure against this would be a varied community of utilitarians, who would cultivate a variety of virtues and help bind each other to the shared utilitarian cause, helping whenever expediency threatens to pull one away from it.
Next, a utilitarian virtue ethics would treat no virtue as sacred. Honesty, charity, kindness, and bravery – all of these must be conditional on the best outcome. Because the best outcome is hard to determine, they might be good rules of thumb, but the utilitarian must always be prepared to break a moral rule if there is more utility to be had.
Third, the utilitarian would seek to avoid cognitive biases and learn to make decisions quickly. Avoiding cognitive biases increases the chance that rules of thumb will be broken out of genuine utilitarian concern, rather than thinly veiled self-interest. Learning to make decisions quickly helps avoid wasting time pondering “what is the right thing to do?”
While the traditional virtue ethicist might read the works of the great classical philosophers to better understand virtue, a utilitarian virtue ethicist would focus on learning Fermi estimation, Bayesian statistics, and the works of Daniel Kahneman.
The easiest ways for a utilitarian to fail to treat the world as it really is are by ignoring the things they cannot measure, or by ignoring truths they find personally uncomfortable. We did not evolve for clear thinking and there is always the risk that we will get ourselves turned around, substituting what is best for us for what is best for the world.
One hang-up I have with this idea is that I just described a bunch of my friends in the rationality and effective altruism communities. How likely is it that this is merely self-serving, instead of the natural endpoint of all of the utilitarian philosophy I’ve been reading?
On one hand, this is a community of utilitarians who are similar to me, so convergence in outputs given the same inputs is more or less expected.
On the other, this could be a classic example of seeing the world as I wish it were, rather than as it is. “Go hang out with people you already like, doing the things you were already going to do” isn’t much of an ethical ask. Given that the world is in a dire state, utilitarians have reason to be sceptical of any conclusion that their ethical system won’t require much from them.
There could be other problems with this proposal, but I’m not sure that I’m the type of person who could see them. For now, this represents my best attempt to reconcile my utilitarian ethics with the realities of the modern world. But I will be careful. Ease is ever seductive.
The author is one Sir Bernard Williams. According to his Wikipedia page, he was a particularly humanistic philosopher in the old Greek mode. He was skeptical of attempts to build an analytical foundation for moral philosophy and of his own prowess in arguments. It seems that he had something pithy or cutting to say about everything, which made him notably cautious of pithy or clever answers. He’s also described as a proto-feminist, although you wouldn’t know it from his writing.
Williams didn’t write his essay out of a rationalist desire to disprove utilitarianism with pure reason (a concept he seemed every bit as sceptical of as Smart was). Instead, Williams wrote this essay because he agrees with Smart that utilitarianism is a “distinctive way of looking at human action and morality”. It’s just that unlike Smart, Williams finds the specific distinctive perspective of utilitarianism often horrible.
Smart anticipated this sort of reaction to his essay. He himself despaired of finding a single ethical system that could please anyone, or even please a single person in all their varied moods.
One of the very first things I noticed in Williams’ essay was the challenge of attacking utilitarianism on its own terms. To convince a principled utilitarian that utilitarianism is a poor choice of ethical system, it is almost always necessary to appeal to the consequences of utilitarianism. This forces any critic to frame their arguments a certain way, a way which might feel unnatural. Or repugnant.
Williams begins his essay proper with (appropriately) a discussion of consequences. He points out that it is difficult to hold actions as valuable purely by their consequences, because this forces us to draw arbitrary lines in time and declare the state of the world at that time the “consequences”. After all, consequences continue to unfold forever (or at least, until the heat death of the universe). To have anything to talk about at all, Williams decides that it is not quite consequences that consequentialism cares about, but states of affairs.
Utilitarianism is the form of consequentialism that has happiness as its sole important value and seeks to bring about the state of affairs with the most happiness. I like how Williams unpicked the question-begging that utilitarianism commonly engages in. He essentially asks ‘why should happiness be the only thing we treat as intrinsically valuable?’ Williams mercifully didn’t drive this home, but I was still left with uncomfortable questions for myself.
Instead he moves on to his first deep observation. You see, if consequentialism were just about valuing certain states of affairs more than others, you could call deontology a form of consequentialism that held that duty was the only intrinsically valuable thing. But that can’t be right, because deontology is clearly different from consequentialism. The distinction, Williams suggests, is that consequentialists discount the possibility of actions holding any inherent moral weight. For a consequentialist, an action is right because it brings about a better state of affairs. For non-consequentialists, a state of affairs can be better – even if it contains less total happiness or integrity or whatever they care about than a counterfactual state of affairs given a different action – because the right action was taken.
A deontologist would say that it is right for someone to do their duty in a way that ends up publicly and spectacularly tragic, such that it turns a thousand people off of doing their own duty. A consequentialist who viewed duty as important for the general moral health of society – who, in Smart’s terminology, viewed acting from duty as good – would disagree.
Williams points out that this very emphasis on comparing states of affairs (so natural to me) is particularly consequentialist and utilitarian. That is to say, it is not particularly meaningful for a deontologist or a virtue ethicist to compare states of affairs. Deontologists have no duty to maximize the doing of duty; if you ask a deontologist to choose between a state of affairs that has one hundred people doing their duty and another that has a thousand, it’s not clear that either state is preferable from their point of view. Sure, deontologists think people should do their duty. But duty embodied in actions is the point, not some cosmic tally of duty.
Put as a moral statement, non-consequentialists lack any obligation to bring about more of what they see as morally desirable. A consequentialist may feel both fondness for and a moral imperative to bring about a universe where more people are happy. Non-consequentialists only have the fondness.
One deontologist of my acquaintance said that trying to maximize utility felt pointless – they viewed it as about as morally important as having a high score in a Tetris game. We ended up staring at each other in blank incomprehension.
In Williams’ view, rejection of consequentialism doesn’t necessarily lead to deontology, though. He sums it up simply as: “all that is involved… in the denial of consequentialism, is that with respect to some type of action, there are some situations in which that would be the right thing to do, even though the state of affairs produced by one’s doing that would be worse than some other state of affairs accessible to one.”
A deontologist will claim right actions must be taken no matter the consequences, but to be non-consequentialist, an ethical system merely has to claim that some actions are right despite a variety of more or less bad consequences that might arise from them.
Or, as I wrote angrily in the margins: “ok, so not necessarily deontology, just accepting sub-maximal global utility”. It is hard to explain to a non-utilitarian just how much this bugs me, but I’m not going to go all rationalist and claim that I have a good reason for this belief.
Williams then turns his attention to the ways in which he thinks utilitarianism’s insistence on quantifying and comparing everything is terrible. Williams believes that by refusing to categorically rule any action out (or worse, by specifically trying to come up with situations in which we might do horrific things), utilitarianism encourages people – even non-utilitarians who bump into utilitarian thought experiments – to think of things in utilitarian (that is to say, explicitly comparative) terms. It seems like Williams would prefer there to be actions that are clearly ruled out, not just less likely to be justified.
I get the impression of a man almost tearing out his hair because for him, there exist actions that are wrong under all circumstances and here we are, talking about circumstances in which we’d do them. There’s a kernel of truth here too. I think there can be a sort of bravado in accepting utilitarian conclusions. Yeah, I’m tough enough that I’d kill one to save one thousand. You wouldn’t? I guess you’re just soft and old-fashioned. For someone who cares as much about virtue as I think Williams does, this must be abhorrent.
I loved how Williams summed this up.
The demand… to think the unthinkable is not an unquestionable demand of rationality, set against a cowardly or inert refusal to follow out one’s moral thoughts. Rationality he sees as a demand not merely on him, but on the situations in and about which he has to think; unless the environment reveals minimum sanity, it is insanity to carry the decorum of sanity into it.
For all that I enjoyed the phrasing, I don’t see how this changes anything; there is nothing at all sane about the current world. A statistical life is valued at something like $7 million to $9 million, yet a life can be saved for less than $5,000. This planet contains some of the most wrenching poverty and lavish luxury imaginable, often in the very same cities. Where is the sanity? If Williams thinks sane situations are a reasonable precondition to sane action, then he should see no one on earth with a duty to act sanely.
The next topic Williams covers is responsibility. He starts with a discussion of agent interchangeability in utilitarianism. Williams believes that utilitarianism merely requires that someone do the right thing. This implies that to the utilitarian, there is no meaningful difference between me doing the utilitarian right action and you doing it, unless something about me doing it instead of you leads to a different outcome.
This utter lack of concern for who does what, as long as the right thing gets done, doesn’t actually absolve utilitarians of responsibility. Instead, it tends to increase it. Williams says that unlike adherents of many ethical systems, utilitarians have negative responsibilities; they are just as much responsible for the things they don’t do as for the things they do. If something has to be done and no one else will do it, then you have to.
This doesn’t strike me as that unique to utilitarianism. I was raised Catholic and can attest that Catholics (who are supposed to follow a form of virtue ethics) have a notion of negative responsibility too. Every mass, before receiving the Eucharist, Catholics ask God for forgiveness for their sins: in thoughts and words, in what they have done and in what they have failed to do.
Leaving aside whether the concept of negative responsibility is uniquely utilitarian or not, Williams does see problems with it. Negative responsibility makes so much of what we do dependent on the people around us. You may wish to spend your time quietly growing vegetables, but be unable to do so because you have a particular skill – perhaps even one that you don’t really enjoy doing – that the world desperately needs. Or you may wish never to take a life, yet be confronted with a run-away trolley that can only be diverted from hitting five people by pulling the lever that makes it hit one.
This didn’t really make sense to me as a criticism until I learned that Williams deeply cares about people living authentic lives. In both the cases above, authenticity played no role in the utilitarian calculus. You must do things, perhaps things you find abhorrent, because other people have set up the world such that terrible outcomes would happen if you didn’t.
It seems that Williams might consider it a tragedy for someone to feel compelled by their ethical system to do something inauthentic. I imagine he views this as about as much of a crying waste of human potential as I view the yearly deaths of 429,000 people due to malaria. For all my personal sympathy for him, I am less than sympathetic to a view that gives these the same weight (or treats inauthenticity as the greater tragedy).
Radical authenticity requires us to ignore society. Yes, utilitarianism plops us in the middle of a web of dependencies and a buffeting sea of choices that were not ours, while demanding we make the best out of it all. But our moral philosophies surely are among the things that push us towards an authentic life. Would Williams view it as any worse that someone was pulled from her authentic way of living because she would starve otherwise?
To me, there is a certain authenticity in following your ethical system wherever it leads. I find this authenticity beautiful, but not worthy of moral consideration, except insofar as it affects happiness. Williams finds this authenticity deeply important. But by rejecting consequentialism, he has no real way to argue for more of the qualities he desires, except perhaps as a matter of aesthetics.
It seems incredibly counter-productive to me to say to people – people in the midst of a society that relentlessly pulls them away from authenticity with impersonal market forces – that they should turn away from the one ethical system that holds a happier world as its desired outcome. A Kantian has her duty to duty, but as long as she does that, she cares not for the system. A virtue ethicist wishes to be virtuous and authentic, but outside of her little bubble of virtue, the terrors go on unabated. It’s only the utilitarian who holds a better society as an end in itself.
Maybe this is just me failing to grasp non-utilitarian epistemologies. It baffles me to hear “this thing is good and morally important, but it’s not like we think it’s morally important for there to be more of it; that goes too far!”. Is this a strawman? If someone could explain what Williams is getting at here in terms I can understand, I’d be most grateful.
I do think Williams misses one key thing when discussing the utilitarian response to negative responsibility. Actions should be assessed on the margin, not in isolation. That is to say, the marginal effect of someone becoming a doctor, or undertaking some other career generally considered benevolent is quite low if there are others also willing to do the job. A doctor might personally save hundreds, or even thousands of lives over her career, but her marginal impact will be saving something like 25 lives.
The reasons for this are manifold. First, when there are few doctors, they tend to concentrate on the most immediately life-threatening problems. As you add more and more doctors, they can help, but after a certain point, the supply of doctors will outstrip the demand for urgent life-saving attention. They can certainly help with other tasks, but they will each save fewer lives than the first few doctors.
Second, there is a somewhat fixed supply of doctors. Despite many, many people wishing they could be doctors, only so many can get spots in medical school. Even assuming that medical school admissions departments are perfectly competent at assessing future skill at being a doctor (and no one really believes they are), your decision to attend medical school (and your successful admission) doesn’t result in one extra doctor. It simply means that you were slightly better than the next best person (who would have been admitted if you weren’t).
Finally, when you become a doctor you don’t replace one of the worst already practising doctors. Instead, you replace a retiring doctor who is (for statistical purposes) about average for her cohort.
All of this is to say that utilitarians should judge actions on the margin, not in absolute terms. It isn’t that bad (from a utilitarian perspective) not to devote all your attention to the most effective direct work, because unless a certain project is very constrained by the number of people working on it, you shouldn’t expect to make much marginal difference. On the other hand, earning a lot of money and giving it to highly effective charities (or even a more modest commitment, like donating 10% of your income) is likely to do a huge amount of good, because most people don’t do this, so you’re replacing a person at a high paying job who was doing (from a utilitarian perspective) very little good.
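The average-versus-marginal distinction above can be made concrete with a toy model. Every number here is invented purely for illustration (the 25-lives figure in the text comes from career-impact research, not from this curve); the only point is that when each additional doctor saves slightly fewer lives than the last, the average impact of a doctor vastly overstates the impact of adding one more:

```python
# Toy diminishing-returns model (all numbers invented for illustration).
# Each additional doctor saves slightly fewer lives per year than the
# last, because the most urgent cases get treated first.

def lives_saved_by_nth_doctor(n: int, base: float = 100.0, decay: float = 0.999) -> float:
    """Hypothetical curve: the nth doctor saves base * decay**n lives per year."""
    return base * decay ** n

num_doctors = 10_000
total = sum(lives_saved_by_nth_doctor(n) for n in range(num_doctors))

average = total / num_doctors                       # naive per-doctor attribution
marginal = lives_saved_by_nth_doctor(num_doctors)   # what one more doctor adds

# average comes out around 10 lives/year; marginal around 0.005 lives/year.
print(f"average: {average:.2f} lives/year, marginal: {marginal:.4f} lives/year")
```

Under these made-up parameters the average doctor is credited with thousands of times the impact of the marginal one, which is the shape of the argument, even if the real-world numbers differ.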
Williams either isn’t familiar with this concept, or omitted it in the interest of time or space.
Williams’ next topic is remoter effects. A remoter effect is any effect that your actions have on the decision making of other people. For example, if you’re a politician and you lie horribly, are caught, and get re-elected by a large margin, a possible remoter effect is other politicians lying more often. With the concept of remoter effects, Williams is pointing at what I call second order utilitarianism.
Williams makes a valid point that many of the justifications from remoter effects that utilitarians make are very weak. For example, despite what some utilitarians claim, telling a white lie (or even telling any lie that is unpublicized) doesn’t meaningfully reduce the propensity of everyone in the world to tell the truth.
Williams thinks that many utilitarians get away with claiming remoter effects as justification because remoter effects tend to be used as a way to make utilitarianism give the common, respectable answers to ethical dilemmas. He thinks people would be much more skeptical of remoter effects if they were ever used to argue for positions that are uncommonly held.
This point about remoter effects was, I think, a necessary precursor to Williams’ next thought experiment. He asks us to imagine a society with two groups, A and B. There are many more members of A than B. Furthermore, members of A are disgusted by the presence (or even the thought of the presence) of members of group B. In this scenario, there has to exist some level of disgust and some ratio between A and B that makes the clear utilitarian best option relocating all members of group B to a different country.
With Williams’ recent reminder that most remoter effects are weaker than we like to think still ringing in my ears, I felt fairly trapped by this dilemma. There are clear remoter effects here: you may lose the ability to advocate against this sort of ethnic cleansing in other countries. Successful, minimally condemned ethnic cleansing could even encourage copy-cats. In the real world, these might both be valid rejoinders, but for the purposes of this thought experiment, it’s clear these could be nullified (e.g. if we assume few other societies like this one and a large direct utility gain).
The only way out that Williams sees fit to offer us is an obvious trap. What if we claimed that the feelings of group A were entirely irrational and that they should just learn to live with them? Then we wouldn’t be stuck advocating for what is essentially ethnic cleansing. But humans are not rational actors. If we were to ignore all such irrational feelings, then utilitarianism would no longer be a pragmatic ethical system that interacts with the world as it is. Instead, it would involve us interacting with the world as we wish it to be.
Furthermore, it is always a dangerous game to discount other people’s feelings as irrational. The problem with the word irrational (in the vernacular, not utilitarian sense) is that no one really agrees on what is irrational. I have an intuitive sense of what is obviously irrational. But so, alas, do you. These senses may align in some regions (e.g. we both may view it as irrational to be angry because of a belief that the government is controlled by alien lizard-people), but not necessarily in others. For example, you may view my atheism as deeply irrational. I obviously do not.
Williams continues this critique to point out that much of the discomfort that comes from considering – or actually doing – things the utilitarian way comes from our moral intuitions. While Smart and I are content to discount these feelings, Williams is horrified at the thought. To view discomfort from moral intuitions as something outside yourself, as an unpleasant and irrational emotion to be avoided, is – to Williams – akin to losing all sense of moral identity.
This strikes me as more of a problem for rationalist philosophers. If you believe that morality can be rationally determined via the correct application of pure reason, then moral intuitions must be key to that task. From a materialist point of view though, moral intuitions are evolutionary baggage, not signifiers of something deeper.
Still, Williams made me realize that this left me vulnerable to the question “what is the purpose of having morality at all if you discount the feelings that engender morality in most people?”, a question I’m at a loss to answer well. All I can say (tautologically) is “it would be bad if there was no morality”; I like morality and want it to keep existing, but I can’t ground it in pure reason or empiricism; no stone tablets have come from the world. Religions are replete with stone tablets and justifications for morality, but they come with metaphysical baggage that I don’t particularly want to carry. Besides, if there was a hell, utilitarians would have to destroy it.
I honestly feel like a lot of my disagreement with Williams comes from our differing positions on the intuitive/systematizing axis. Williams has an intuitive, fluid, and difficult to articulate sense of ethics that isn’t necessarily transferable or even explainable. I have a system that seems workable and like it will lead to better outcomes. But it’s a system and it does have weird, unintuitive corner cases.
Williams talks about how integrity is a key moral stance (I think motivated by his insistence on authenticity). I agree with him as to the instrumental utility of integrity (people won’t want to work with you or help you if you’re an ass or unreliable). But I can’t ascribe integrity some sort of quasi-metaphysical importance or treat it as a terminal value in itself.
In the section on integrity, Williams comes back to negative responsibility. I do really respect Williams’ ability to pepper his work with interesting philosophical observations. When talking about negative responsibility, he mentions that most moral systems acknowledge some difference between allowing an action to happen and causing it yourself.
Williams believes the moral difference between action and inaction is conceptually important, “but it is unclear, both in itself and in its moral applications, and the unclarities are of a kind which precisely cause it to give way when, in very difficult cases, weight has to be put on it”. I am jealous three times over at this line, first at the crystal-clear metaphor, second at the broadly applicable thought underlying the metaphor, and third at the precision of language with which Williams pulls it off.
(I found Williams a less consistent writer than Smart. Smart wrote his entire essay in a tone of affable explanation and managed to inject a shocking amount of simplicity into a complicated subject. Williams frequently confused me – which I feel comfortable blaming at least in part on our vastly different axioms – but he was capable of shockingly resonant turns of phrase.)
I doubt Williams would be comfortable coming down either way on inaction’s equivalence to action. To the great humanist, it must ultimately (I assume) come down to the individual humans and what they authentically believed. Williams here is scoffing at the very idea of trying to systematize this most slippery of distinctions.
For utilitarians, the absence or presence of a distinction is key to figuring out what they must do. Utilitarianism can imply “a boundless obligation… to improve the world”. How a utilitarian undertakes this general project (of utility maximization) will be a function of how she can affect the world, but it cannot, to Williams, ever be the only project anyone undertakes. If it were the only project, underlain by no other projects, then it would, in Williams’ words, be “vacuous”.
The utilitarian can argue that her general project will not be the only project, because most people aren’t utilitarian and therefore have their own projects going on. Of course, this only gets us so far. Does this imply that the utilitarian should not seek to convince too many others of her philosophy?
What does it even mean for the general utilitarian project to be vacuous? As best I can tell, what Williams means is that if everyone were utilitarian, we’d all care about maximally increasing the utility of the world, but either be clueless where to start or else constantly tripping over each other (imagine, if you can, millions of people going to sub-Saharan Africa to distribute bed nets, all at the same time). The first order projects that Williams believes must underlie a more general project are things like spending time with friends, or making your family happy. Williams also believes that it might be very difficult for anyone to be happy without some of these more personal projects.
I would suggest that what each utilitarian should do is what they are best suited for. But I’m not sure if this is coherent without some coordinating body (i.e. a god) ensuring that people are well distributed for all of the projects that need doing. I can also suppose that most people can’t go that far on willpower. That is to say, there are few people who are actually psychologically capable of working to improve the world in a way they don’t enjoy. I’m not sure I have the best answer here, but my current internal justification leans much more on the second answer than the first.
Which is another way of saying that I agree with Williams; I think utilitarianism would be self-defeating if it suggested that the only project anyone should undertake is improving the world generally. I think a salient difference between us is that he seems to think utilitarianism might imply that people should only work on improving the world generally, whereas I do not.
This discussion of projects leads to Williams talking about the hedonic paradox (the observation that you cannot become happy by seeking out pleasures), although Williams doesn’t reference it by name. Here Williams comes dangerously close to a very toxic interpretation of the hedonic paradox.
Williams believes that happiness comes from a variety of projects, not all of which are undertaken for the good of others or even because they’re particularly fun. He points out that few of these projects, if any, are the direct pursuit of happiness and that happiness seems to involve something beyond seeking it. This is all conceptually well and good, but I think it makes happiness seem too mysterious.
I wasted years of my life believing that the hedonic paradox meant that I couldn’t find happiness directly. I thought if I did the things I was supposed to do, even if they made me miserable, I’d find happiness eventually. Whenever I thought of rearranging my life to put my happiness first, I was reminded of the hedonic paradox and desisted. That was all bullshit. You can figure out what activities make you happy and do more of those and be happier.
There is a wide gulf between the hedonic paradox as originally framed (which is purely an observation about pleasures of the flesh) and the hedonic paradox as sometimes used by philosophers (which treats happiness as inherently fleeting and mysterious). I’ve seen plenty of evidence for the first, but absolutely none for the second. With his critique here, I think Williams is arguably shading into the second definition.
This has important implications for the utilitarian. We can agree that for many people, the way to most increase their happiness isn’t to get them blissed out on food, sex, and drugs, without this implying that we will have no opportunities to improve the general happiness. First, we can increase happiness by attacking the sources of misery. Second, we can set up robust institutions that are conducive to happiness. A utilitarian urban planner would perhaps give just as much thought to ensuring there are places where communities can meet and form as she would to ensuring that no one would be forced to live in squalor.
Here’s where Williams gets twisty though. He wanted us to come to the conclusion that a variety of personal projects are necessary for happiness so that he could remind us that utilitarianism’s concept of negative responsibility puts great pressure on an agent not to have her own personal projects beyond the maximization of global happiness. The argument here seems to be (not for the first time) that utilitarianism is self-defeating because it will make everyone miserable if everyone is a utilitarian.
Smart tried to short-circuit arguments like this by pointing out that he wasn’t attempting to “prove” anything about the superiority of utilitarianism, simply presenting it as an ethical system that might be more attractive if it was better understood. Faced with Williams’ point here, I believe that Smart would say that he doesn’t expect everyone to become utilitarian and that those who do become utilitarian (and stay utilitarian) are those most likely to have important personal projects that are generally beneficent.
I have the pleasure of reading the blogs and Facebook posts of many prominent (for certain unusual values of prominent) utilitarians. They all seem to be enjoying what they do. These are people who enjoy research, or organizing, or presenting, or thought experiments and have found ways to put these vocations to use in the general utilitarian project. Or people who find that they get along well with utilitarians and therefore steer their career to be surrounded by them. This is basically finding ikigai within the context of utilitarian responsibilities.
Saying that utilitarianism will never be popular outside of those suited for it means accepting we don’t have a universal ethical solution. This is, I think, very pragmatic. It also doesn’t rule out utilitarians looking for ways we can encourage people to be more utilitarian. To slightly modify a phrase that utilitarian animal rights activists use: the best utilitarianism is the type you can stick with; it’s better to be utilitarian 95% of the time than it is to be utilitarian 100% of the time – until you get burnt out and give it up forever.
I would also like to add a criticism of Williams’ complaint that utilitarian actions are overly determined by the actions of others. Namely, the status quo certainly isn’t perfect. If we are to reject action because it is not one of the projects we would most like to be doing, then we are tacitly endorsing the status quo. Moral decisions cannot be made in a vacuum and the terrain in which we must make moral decisions today is one marked by horrendous suffering, inequality, and unfairness.
The next two sections of Williams’ essay were the most difficult to parse, but also the most rewarding. They deal with the interplay between calculating utilities and utilitarianism and question the extent to which utilitarianism is practical outside of appealing to the idea of total utility. That is to say, they ask if the unique utilitarian ethical frame can, under practical conditions, have practical effects.
To get to the meat of Williams’ points, I had to wade through what at times felt like word games. All of the things he builds up to throughout these lengthy sections begin with a premise made up of two points that Williams thinks are implied by Smart’s essay.
1. All utilities should be assessed in terms of acts. If we’re talking about rules, governments, or dispositions, their utility stems from the acts they either engender or prevent.
2. To say that a rule (as an example) has any effect at all, we must say that it results in some change in acts. In Williams’ words: “the total utility effect of a rule’s obtaining must be cashable in terms of the effects of acts.”
Together, (1) and (2) make up what Williams calls the “act-adequacy” premise. If the premise is true, there must be no surplus source of utility outside of acts and, as Smart said, rule utilitarianism should (if it is truly concerned with optimific outcomes) collapse to act utilitarianism. This is all well and good when comparing systems as tools of total assessment (e.g. when we take the universe wide view that I criticized Smart for hiding in), but Williams is first interested in how this causes rule and act utilitarianism to relate to actions.
If you asked an act utilitarian and a rule utilitarian “what makes that action right?”, they would give different answers. The act utilitarian would say that it is right if it maximizes utility, but the rule utilitarian would say it is right if it is in accordance with rules that tend to maximize utility. Interestingly, if the act-adequacy premise is true, then both act and rule utilitarians would agree as to why certain rules or dispositions are desirable, namely, that actions that result from those rules or dispositions tend to maximize utility.
(Williams also points out that rules, especially formal rules, may derive utility from sources other than just actions following the rule. Other sources of utility include: explaining the rule, thinking about the rule, avoiding the rule, or even breaking the rule.)
But what do we do when actually faced with the actions that follow from a rule or disposition? Smart has already pointed out that we should praise or blame based on the utility of the praise/blame, not on the rightness or wrongness of the action we might be praising.
In Williams’ view, there are two problems with this. First, it is not a very open system. If you knew someone was praising or blaming you out of a desire to manipulate your future actions and not in direct relation to their actual opinion of your past actions, you might be less likely to accept that praise or blame. Therefore, it could very well be necessary for the utilitarian to hide why acts are being called good or bad (and therefore the reasons why they praise or blame).
The second problem is how this suggests utilitarians should stand with themselves. Williams acknowledges that utilitarians in general try not to cry over spilt milk (“[this] carries the characteristically utilitarian thought that anything you might want to cry over is, like milk, replaceable”), but argues that utilitarianism replaces the question of “did I do the right thing?” with “what is the right thing to do?” in a way that may not be conducive to virtuous thought.
(Would a utilitarian Judas have lived to old age contentedly, happy that he had played a role in humankind’s eternal salvation?)
The answer to “what is the right thing to do?” is of course (to the utilitarian) “that which has the best consequences”. Except “what is the right thing to do?” isn’t actually the right question to ask if you’re truly concerned with the best consequences. In that case, the question is “if asking this question is the right thing to do, what actions have the best consequences?”
Remember, Smart tried to claim that utilitarianism was to only be used for deliberative actions. But it is unclear which actions are the right ones to take as deliberative, especially a priori. Sometimes you will waste time deliberating, time that in the optimal case you would have spent on good works. Other times, you will jump into acting and do the wrong thing.
The difference between act (direct) and rule (indirect) utilitarianism therefore comes to a question of motivation vs. justification. Can a direct utilitarian use “the greatest total good” as a motivation if they do not know if even asking the question “what will lead to the greatest total good?” will lead to it? Can it only ever be a justification? The indirect utilitarian can be motivated by following a rule and justify her actions by claiming that generally followed, the rule leads to the greatest good, but it is unclear what recourse (to any direct motivation for a specific action) the direct utilitarian has.
Essentially, adopting act utilitarianism requires you to accept that because you have accepted act utilitarianism you will sometimes do the wrong thing. It might be that you think that you have a fairly good rule of thumb for deliberating, such that this is still the best of your options to take (and that would be my defense), but there is something deeply unsettling and somewhat paradoxical about this consequence.
Williams makes it clear that the bad outcomes here aren’t just loss of an agent’s time. This is similar in principle to how we calculate the total utility of promulgating a rule. We accept that the total effects of the promulgation must include the utility or disutility that stems from avoiding it or breaking it, in addition to the utility or disutility of following it. When looking at the costs of deliberation, we should also include the disutility that will sometimes come when we act deliberately in a way that is less optimific than we would have acted had we spontaneously acted in accordance with our disposition or moral intuitions.
This is all in the case where the act-adequacy premise is true. If it isn’t, the situation is more complex. What if some important utility of actions comes from the mood they’re done in, or in them being done spontaneously? Moods may be engineered, but it is exceedingly hard to engineer spontaneity. If the act-adequacy premise is false, then it may not hold that the (utilitarian) best world is one in which right acts are maximized. In the absence of the act-adequacy premise it is possible (although not necessarily likely) that the maximally happy world is one in which few people are motivated by utilitarian concerns.
Even if the act-adequacy premise holds, we may be unable to know if our actions are at all right or wrong (again complicating the question of motivation).
Williams presents a thought experiment to demonstrate this point. Imagine a utilitarian society that noticed its younger members were liable to stray from the path of utilitarianism. This society might set up a Truman Show-esque “reservation” of non-utilitarians, with the worst consequences of their non-utilitarian morality broadcasted for all to see. The youth wouldn’t stray and the utility of the society would be increased (for now, let’s beg the question of utilitarianism as a lived philosophy being optimific).
Here, the actions of the non-utilitarian holdouts would be right; on this both utilitarians (looking from a far enough remove) and the subjects themselves would agree. But this whole thing only works if the viewers think (incorrectly) that the actions they are seeing are wrong.
From the global utilitarian perspective, it might even be wrong for any of the holdouts to become utilitarian (even if utilitarianism was generally the best ethical system). If the number of viewers is large enough and the effect of one fewer irrational holdout is strong enough (this is a thought experiment, so we can fiddle around with the numbers such that this is indeed true), the conversion of a hold-out to utilitarianism would be really bad.
Basically, it seems possible for there to be a large difference between the correct action as chosen by the individual utilitarian with all the knowledge she has and the correct action as chosen from the perspective of an omniscient observer. From the “total assessment” perspective, it is even possible that it would be best that there be no utilitarians.
Williams points out that many of the qualities we value and derive happiness from (stubborn grit, loyalty, bravery, honour) are not well aligned with utilitarianism. When we talked about ethnic cleansing earlier, we acknowledged that utilitarianism cannot distinguish between preferences people have and the preferences people should have; both are equally valid. With all that said, there’s a risk of resolving the tension between non-utilitarian preferences and the joy these preferences can bring people by trying to shape the world not towards maximum happiness, but towards the happiness easiest to measure and most comfortable to utilitarians.
Utilitarianism could also lead to disutility because of the game theoretic consequences. On international projects or projects between large groups of people, sanctioning other actors must always be an option. Without sanctioning, the risk of defection is simply too high in many practical cases. But utilitarians are uniquely compelled to sanction (or else surrender).
If there is another group acting in an uncooperative or anti-utilitarian manner, the utilitarians must apply the least terrible sanction that will still be effective (as the utility of those they’re sanctioning still matters). The other group will of course know this and have every incentive to commit to making any conflict arising from the sanction so terrible as to make any sanctioning wrong from a utilitarian point of view. Utilitarians now must call the bluff (and risk horrible escalating conflict), or else abandon the endeavour.
This is in essence a prisoner’s dilemma. If the non-utilitarians carry on without being sanctioned, or if they change their behaviour in response to sanctions without escalation, everyone will be better off (than in the alternative). But if utilitarians call the bluff and find it was not a bluff, then the results could be catastrophic.
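The stand-off can be sketched as a toy payoff table. The numbers below are entirely invented (only their ordering matters) and the framing is mine, not Williams’; it just makes the structure of the dilemma concrete:

```python
# Toy payoff table for the sanctioning stand-off described above.
# All numbers are invented illustrations; only their relative ordering matters.
# Each entry: (utilitarians' utility, other group's utility).
outcomes = {
    ("sanction", "comply"):    (2, -1),     # sanction works without escalation
    ("sanction", "escalate"):  (-10, -10),  # bluff called, and it wasn't a bluff
    ("surrender", "carry on"): (-3, 3),     # defection goes unpunished
}

def total_utility(outcome):
    # The utilitarian scores each outcome by the summed utility of both groups.
    return sum(outcomes[outcome])

best = max(outcomes, key=total_utility)
print(best, total_utility(best))  # ('sanction', 'comply') 1
```

The trap is visible in the table: the outcome the utilitarian must aim for (sanction, comply) is only reachable by risking the catastrophic one (sanction, escalate), and the other group controls which of the two occurs.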
Williams seems to believe that utilitarians will never include an adequate fudge factor for the dangers of mutual defecting. He doesn’t suggest pacifism as an alternative, but he does believe that violent sanctioning should always be used at a threshold far beyond where he assesses the simple utilitarian one to lie.
This position might be more of a historical one, in reaction to the efficiency, order, and domination obsessed Soviet Communism (and its Western fellow travelers), who tended towards utilitarian justifications. All of the utilitarians I know are committed classical liberals (indeed, it sometimes seems to me that only utilitarians are classical liberals these days). It’s unclear if Williams’ criticism can be meaningfully applied to utilitarians who have internalized the severe detriments of escalating violence.
While it seems possible to produce a thought experiment where even such committed second order utilitarians would use the wrong amount of violence or sanction too early, this seems unlikely to come up in a practical context – especially considering that many of the groups most keen on using violence early and often these days aren’t in fact utilitarian. Instead it’s members of both the extreme left and right, who have independently – in an amusing case of horseshoe theory – adopted a morality based around defending their tribe at all costs. This sort of highly local morality is anathema to utilitarians.
Williams didn’t anticipate this shift. I can’t see why he shouldn’t have. Utilitarians are ever pragmatic and (should) understand that utilitarianism isn’t served by starting horrendous wars willy-nilly.
Then again, perhaps this is another harbinger of what Williams calls “utilitarianism ushering itself from the scene”. He believes that the practical problems of utilitarian ethics (from the perspective of an agent) will move utilitarianism more and more towards a system of total assessment. Here utilitarianism may demand certain things in the way of dispositions or virtues and certainly it will ask that the utility of the world be ever increased, but it will lose its distinctive character as a system that suggests actions be chosen in such a way as to maximize utility.
Williams calls this the transcendental viewpoint and pithily asks “if… utilitarianism has to vanish from making any distinctive mark in the world, being left only with the total assessment from the transcendental standpoint – then I leave it for discussion whether that shows that utilitarianism is unacceptable or merely that no one ought to accept it.”
This, I think, ignores the possibility that it might become easier in the future to calculate the utility of certain actions. The results of actions are inherently chaotic and difficult to judge, but then, so is the weather. Weather prediction has been made tractable by the application of vast computational power. Why not morality? Certainly, this can’t be impossible to envision. Iain M. Banks wrote a whole series of books about it!
Of course, if we wish to be utilitarian on a societal level, we must currently do so without the support of godlike AI. And societal decision making is what utilitarianism was invented for in the first place. Here it was attractive because it is minimally committed – it has no elaborate theological or philosophical commitments buttressing it, unlike contemporaneous systems (like Lockean natural law). There is something intuitive about the suggestion that a government should only be concerned for the welfare of the governed.
Sure, utilitarianism makes no demands on secondary principles, Williams writes, but it is extraordinarily demanding when it comes to empirical information. Utilitarianism requires clear, comprehensible, and non-cyclic preferences. For any glib rejoinders about mere implementation details, Williams has this to say:
[These problems are] seen in the light of a technical or practical difficulty and utilitarianism appeals to a frame of mind in which technical difficulty, even insuperable technical difficulty, is preferable to moral unclarity, no doubt because it is less alarming.
Williams suggests that the simplicity of utilitarianism isn’t a virtue, only indicative of “how little of the world’s luggage it is prepared to pick up”. By being immune to concerns of justice or fairness (except insofar as they are instrumentally useful to utilitarian ends), Williams believes that utilitarianism fails at many of the tasks that people desire from a government.
Personally, I’m not so sure a government commitment to fairness or justice is at all illuminating. There are currently at least two competing (and mutually exclusive) definitions of both fairness and justice in political discourse.
Should fairness be about giving everyone the same things? Or should it be about giving everyone the tools they need to have the same shot at meaningful (of course noting that meaningful is a societal construct) outcomes? Should justice mean taking into account mitigating factors and aiming for reconciliation? Or should it mean doing whatever is necessary to make recompense to the victim?
It is too easy to use fairness or justice as a sword without stopping to assess whom it is aimed at and what the consequences of that aim are (says the committed consequentialist). Fairness and justice are meaty topics that deserve better than to be thrown around as a platitudinous counterargument to utilitarianism.
A much better critique of utilitarian government can be made by imagining how such a government would respond to non-utilitarian concerns. Would it ignore them? Or would it seek to direct its citizens to have only non-utilitarian concerns? The latter idea seems practically impossible. The first raises important questions.
Imagine a government that is minimally responsive to non-utilitarian concerns. It primarily concerns itself with maximizing utility, but accepts the occasional non-utilitarian decision as the cost it must pay to remain in power (presume that the opposition is not utilitarian and would be very responsive to non-utilitarian concerns in a way that would reduce the global utility). This government must necessarily look very different to the utilitarian elite who understand what is going on and the masses who might be quite upset that the government feels obligated to ignore many of their dearly held concerns.
Could such an arrangement exist with a free media? With free elections? Democracies are notably less corrupt than autocracies, so there are significant advantages to having free elections and free media. But how, if those exist, does the utilitarian government propose to keep its secrets hidden from the population? And if the government was successful, how could it respect its citizens, so duped?
In addition to all that, there is the problem of calculating how to satisfy people’s preferences. Williams identifies three problems here:
How do you measure individual welfare?
To what extent is welfare comparative?
How do you develop the aggregate social preference given the answers to the preceding two questions?
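The third question – aggregation – is harder than it may look. A classic illustration of the difficulty (the Condorcet paradox; my example, not Williams’) is that pairwise majority voting over perfectly transitive individual preferences can produce a cyclic social preference:

```python
# Condorcet paradox: three voters, each with a perfectly transitive ranking
# (best to worst) over options A, B, C.
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Every individual preference is transitive, yet the pairwise majorities cycle:
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True  <- the social preference is cyclic
```

So even before measuring welfare, simply combining clean, consistent individual preferences into one social preference can fail: the aggregate prefers A to B, B to C, and C to A, and there is no coherent “what society wants” to maximize.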
Williams seems to suggest that a naïve utilitarian approach involves what I think is best summed up in a sick parody of Marx: from each according to how little they’ll miss it, to each according to how much they desire it. Surely there cannot be a worse incentive structure imaginable than the one naïve utilitarianism suggests?
When dealing with preferences, it is also the case that utilitarianism makes no distinction between fixing inequitable distributions that cause discontent or – as observed in America – convincing those affected by inequitable distributions not to feel discontent.
More problems arise around substitution or compensation. It may be more optimific for a roadway to be built one way than another and it may be more optimific for compensation to be offered to those who are affected, but it is unclear that the compensation will be at all worth it for those affected (to claim it would be, Williams declares, is “simply an extension of the dogma that every man has his price”). This is certainly hard for me to think about, even (or perhaps especially) because the common utilitarian response is a shrug – global utility must be maximized, after all.
Utilitarianism is about trade-offs. And some people have views which they hold to be beyond all trade-off. It is even possible for happiness to be buttressed by or rest entirely upon principles – principles that, when dearly and truly held, cannot be traded off against. Certainly, utilitarians can attempt to work around this – if such people are a minority, they will be happily trammelled by a utilitarian majority. But it is unclear what a utilitarian government could do in such a case where the majority of their population is “afflicted” with deeply held non-utilitarian principles.
Williams sums this up as:
Perhaps humanity is not yet domesticated enough to confine itself to preferences which utilitarianism can handle without contradiction. If so, perhaps utilitarianism should lope off from an unprepared mankind to deal with problems it finds more tractable – such as that presented by Smart… of a world which consists only of a solitary deluded sadist.
Finally, there’s the problem of people being terrible judges of what they want, or simply not understanding the effects of their preferences (as the Americans who rely on the ACA but want Obamacare to be repealed may find out). It is certainly hard to walk the line between respecting the preferences people would have if they were better informed or truly understood the consequences of their desires and the common (leftist?) fallacy of assuming that everyone who held all of the information you have must necessarily have the same beliefs as you.
All of this combines to make Williams view utilitarianism as dangerously irresponsible as a system of public decision making. It assumes that preferences exist, that the method of collecting them doesn’t fail to capture meaningful preferences, that these preferences would be vindicated if implemented, and that there’s a way to trade-off among all preferences.
To the potential utilitarian rejoinder that half a loaf is better than none, he points out that a partial version of utilitarianism is very vulnerable to the streetlight effect. It might be used where it can be and therefore act to legitimize – as “real” – concerns in the areas where it can be used and delegitimize those where it is unsuitable. This can easily lead to the McNamara fallacy; deliberate ignorance of everything that cannot be quantified:
The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.
— Daniel Yankelovich “Corporate Priorities: A continuing study of the new demands on business.” (1972)
This isn’t even to mention something that every serious student of economics knows: that when dealing with complicated, idealized systems, it is not necessarily the non-ideal system that is closest to the ideal (out of all possible non-ideal systems) that has the most benefits of the ideal. Economists call this the “theory of the second best”. Perhaps ethicists might call it “common sense” when applied to their domain?
Williams ultimately doubts that systematic thought is at all capable of dealing with the myriad complexities of political (and moral) life. He describes utilitarianism as “having too few thoughts and feelings to match the world as it really is”.
I disagree. Utilitarianism is hard, certainly. We do not agree on what happiness is, or how to determine which actions will most likely bring it, fine. Much of this comes from our messy inbuilt intuitions, intuitions that are not suited for the world as it now is. If utilitarianism is simple minded, surely every other moral system (or lack of system) must be as well.
In many ways, Williams did shake my faith in utilitarianism – making this an effective and worthwhile essay. He taught me to be fearful of eliminating from consideration all joys but those that the utilitarian can track. He drove me to question how one can advocate for any ethical system at all, denied the twin crutches of rationalism and theology. And he further shook my faith in individuals being able to do most aspects of the utilitarian moral calculus. I think I’ll have more to say on that last point in the future.
But by their actions you shall know the righteous. Utilitarians are currently at the forefront of global poverty reduction, disease eradication, animal suffering alleviation, and existential risk mitigation. What complexities of the world has every other ethical system missed to leave these critical tasks largely to utilitarians?
Williams gave me no answer to this. For all his beliefs that utilitarianism will have dire consequences when implemented, he has no proof to hand. And ultimately, consequences are what you need to convince a consequentialist.
Utilitarianism for and against is an interesting little book. It consists of back-to-back ~70 page essays, one in favour of utilitarianism and one opposed. As an overview, it’s hard to beat something like this. You don’t have to rely on one scholar to give you her (ostensibly fair and balanced) opinion; you get two articulate philosophers arguing their side as best they can. Fair and balanced is by necessity left as an exercise to the reader (honestly, it always is; here at least it’s explicit).
I’m going to cover the “for” side first. The “against” side will be in a later blog post. Both reviews are going to assume that you have some understanding of utilitarianism. If you don’t, go read my primer. Or be prepared to Google. I should also mention that I have no aspirations of being balanced myself. I’m a utilitarian; I had much more to disagree with on the “against” side than on the “for” side.
Professor J.J.C. Smart makes the arguments in favour of utilitarianism. According to his Wikipedia entry, he was known for “outsmarting” his opponents, that is to say, accepting the conclusions of their reductio ad absurdum arguments with nary a shrug. He was, I’ve gathered, not one for moral intuitions. His criticism of rule utilitarianism played a role in its decline and he was influential in raising the next crop of Australian utilitarians, among whom Peter Singer is counted. As near as I can tell, he was one of the more notable defenders of utilitarianism when this volume was published in 1971 (although much of his essay dates back a decade earlier).
Smart is emphatically not a rationalist (in the philosophical sense); he writes no “proof of utilitarianism” and denies that such a proof is even possible. Instead, Smart restricts himself to explaining how utilitarianism is an attractive ethical system for anyone possessed of general benevolence. Well, I’ll say “everyone”. The authors of this volume seem to be labouring under the delusion that only men have ethical dilemmas or the need for ethical systems. Neither one of them manages the ethicist’s coup of realizing that women might be viewed as full people at the remove of half a century from their time of writing (such a coup would perhaps have been strong evidence of the superiority of one philosophy over another).
A lot of Smart’s essay consists of showing how various different types of utilitarianism are all the same under the hood. I’ve termed these “collapses”, although “isomorphisms” might be a better term. There are six collapses in all.
The very first collapse put me to mind of the famous adage about ducks. If it walks like a duck, swims like a duck, and quacks like a duck, it is a duck. By the same token, if someone acts exactly how a utilitarian in their position and with their information would act, then it doesn’t matter if they are a utilitarian or not. From the point of view of an ethical system that cares only about consequences they may as well be.
The next collapse deals with rule utilitarianism and may have a lot to do with its philosophical collapse. Smart points out that if you are avoiding “rule worship”, then you will face a quandary when you could break a rule in such a way as to gain more utility. Rule utilitarians sometimes claim that you just need rules with lots of exceptions and special cases. Smart points out that if you carry this through to its logical conclusion, you really are only left with one rule, the meta-rule of “maximize expected utility”. In this way, rule utilitarianism collapses into act utilitarianism.
Next into the compactor is the difference between ideal and hedonic utilitarians. Briefly, ideal utilitarians hold that some states of mind are inherently valuable (in a utilitarian sense), even if they aren’t particularly pleasant from the inside. “Better Socrates dissatisfied than a fool satisfied” is the rallying cry of ideal utilitarians. Hedonic utilitarians have no terminal values beyond happiness; they would gladly let almost the entirety of the human race wirehead.
Smart claims that while these differences are philosophically large, they are practically much less meaningful. Here Smart introduces the idea of the fecundity of a pleasure. A doctor taking joy (or grim satisfaction) in saving a life is a much more fecund pleasure than a gambler’s excitement at a good throw, because it brings about greater joy once you take into account everyone around the actor. Many of the other pleasures (like writing or other intellectual pursuits) that ideal utilitarians value are similarly fecund. They either lead to abatement of suffering (the intellectual pursuits of scientists) or to many people’s pleasure (the labour of the poet). Taking into account fecundity, it was better for Smart to write this essay than to wirehead himself, because many other people – like me – get to enjoy his writing and have fun thinking over the thorny issues he raises.
Smart could have stood to examine at greater length just why ideal utilitarians value the things they do. I think there’s a decent case to be made that societies figure out ways to value certain (likely fecund) pleasures all on their own, no philosophers required. It is not, I think, that ideal utilitarians have stumbled onto certain higher pleasures that they should coax their societies into valuing. Instead, their societies have inculcated them with a set of valued activities, which, due to cultural evolution, happen to line up well with fecund pleasures. This is why it feels difficult to argue with the list of pleasures ideal utilitarians proffer; it’s not that they’ve stumbled onto deep philosophical truths via reason alone, it’s that we have the same inculcations they do.
Beyond simple fecundity though, there is the fact that the choice between Socrates dissatisfied and a fool satisfied rarely comes up. Smart has a great line about this:
But even the most avid television addict probably enjoys solving practical problems connected with his car, his furniture, or his garden. However unintellectual he might be, he would certainly resist the suggestion that he should, if it were possible, change places with a contented sheep, or even a happy and lively dog.
This boils down to: ‘ideal utilitarians assume they’re a lot better than everyone else, what with their “philosophical pursuits”, but most people don’t want purely mindless pleasures’. Combined, these ideas of fecundity and hidden depths point to a vanishingly small gap between ideal and hedonistic utilitarians, especially compared to the gap between utilitarians and practitioners of other ethical systems.
After dealing with questions of how highly we should weigh some pleasures, Smart turns to address the idea of some pleasures not counting at all. Take, for example, the pleasure that a sadist takes in torturing a victim. Should we count this pleasure in our utilitarian moral calculus? Smart says yes, for reasons that again boil down to “certain pleasures being viewed as bad are an artifact of culture; no pleasure is intrinsically bad.”
(Note however that this isn’t the same thing as Smart condoning the torture. He would say that the torture is wrong because the pleasure the sadist gains from it cannot make up for the distress of the victim. Given that no one has ever found a real live utility monster, this seems a safe position to take.)
In service of this, Smart presents a thought experiment. Imagine a barren universe inhabited by a single sentient being. This sentient being wrongly believes that there are many other inhabitants of the universe being gruesomely tortured and takes great pleasure in this thought. Would the universe be better if the being didn’t derive pleasure from her misapprehension?
The answer here for both Smart and me is no (although I suspect many might disagree with us). Smart reasons (almost tautologically) that since there is no one for this being to hurt, her predilection for torture can’t hurt anyone. We are rightfully wary of people who unselfconsciously enjoy the thought of innocents being tortured because of what it says about what their hobbies might be. But if they cannot hurt anyone, their obsession is literally harmless. This bleak world would not be better served by its single sentient inhabitant quailing at the thought of the imaginary torture.
Of course, there’s a wide gap between the inhabitant curled up in a ball mourning the torture she wrongly believes to be ongoing and her simple ambivalence to it. It seems plausible that many people could consider her ambivalence preferable, even if they did not wish her to be sad. But imagine then the difference being between her lonely and bored and her satisfied and happy (leaving aside for a moment the torture). It is clear here which is the better universe. Given a way to move from the universe with a single bored being to the one with a single fulfilled being, shouldn’t we take it, given that the shift most literally harms no one?
This brings us to the distinction between intrinsically bad pleasures and extrinsically bad pleasures – the flip side of the intrinsically more valuable states of mind of the ideal utilitarian. Intrinsically bad pleasures are pleasures that for some rationalist or metaphysical reason are just wrong. Their rightness or wrongness must of course be vulnerable to attacks on the underlying logic or theology, but I can hardly embark on a survey of common objections to all the common underpinnings; I haven’t the time. But many people have undertaken those critiques and many will in the future, making a belief in intrinsically bad pleasures a most unstable place to stand.
Extrinsically bad pleasures seem like a much safer proposition (and much more convenient to the utilitarian who wishes to keep their ethical system free of meta-physical or meta-ethical baggage). To say that a pleasure is extrinsically bad is simply to say that to enjoy it causes so much misery that it will practically never be moral to experience it. Similar to how I described ideal utilitarian values as heavily culturally influenced, I can’t help but feel that seeing some pleasures as intrinsically bad has to be the result of some cultural conditioning.
If we can accept that certain pleasures are not intrinsically good or ill, but that many pleasures thought of as intrinsically good or ill are thought so because of long cultural experience – positive or negative – with the consequences of seeking them out, then the position of utilitarians who believe that some pleasures cannot be counted in the plus column collapses into approximately the same position as that of those who hold that they can, even if neither accepts the other’s reasoning. The utilitarian who refuses to believe in intrinsically bad pleasures should still condemn most of the same actions as one who does, because she knows that these pleasures will be outweighed by the pains they inflict on others (like the pain of the torture victim overwhelming the joy of the torturer).
There is a further advantage to holding that pleasures cannot be intrinsically wrong. If we accept the post-modernist adage that knowledge is created culturally, we will remember to be skeptical of the universality of our knowledge. That is to say, if you hold a list of intrinsically bad pleasures, it will probably not be an exhaustive list and there may be pleasures whose ill-effects you overlook because you are culturally conditioned to overlook them. A more thoughtful utilitarian who doesn’t take the short-cut of deeming some pleasures intrinsically bad can catch these consequences and correctly advocate against these ultimately wrong actions.
The penultimate collapse is perhaps the least well supported by arguments. In a scant page, Smart addresses the differences between total and average happiness in a most unsatisfactory fashion. He asks which of two universes you might prefer: one with one million happy, healthy people, or one with twice as many people, equally happy and healthy. Both Smart and I feel drawn to the larger universe, but he has no arguments for people who prefer the smaller. Smart skips over the difficulties here with an airy statement of “often the best way to increase the average happiness is to increase the total happiness and vice versa”.
I’m not entirely sure this statement is true. How would one go about proving it?
Certainly, average happiness seems to miss out on the (to me) obvious good that you’d get if you could have twice as many happy people (which is clearly one case where they give different answers), but like Smart, I have trouble coming up with a persuasive argument why that is obviously good.
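The disagreement is easy to state with concrete numbers. As a minimal sketch (a variant of Smart’s two-universe example in which the larger population is made marginally less happy, so the two forms actually disagree; the happiness scores are mine, not the book’s):

```python
# Two hypothetical universes; per-person happiness scores are made up.
world_a = [10.0] * 1_000_000   # one million happy, healthy people
world_b = [9.5] * 2_000_000    # twice as many, marginally less happy

def total(world):
    return sum(world)

def average(world):
    return sum(world) / len(world)

# Total utilitarianism prefers the bigger universe; average
# utilitarianism prefers the smaller one.
assert total(world_b) > total(world_a)      # 19,000,000 > 10,000,000
assert average(world_a) > average(world_b)  # 10.0 > 9.5
```

In Smart’s original version, where both populations are equally happy, average utilitarianism is merely indifferent while total utilitarianism prefers the larger world; lowering the larger world’s happiness slightly is what turns indifference into outright disagreement.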
I do have one important thing of my own to say about the difference between average and total happiness. When I imagine a world with more people who are on average less happy than the people that currently exist (but collectively experience a greater total happiness) I feel an internal flinch.
Unfortunately for my moral intuitions, I feel the exact same flinch when I imagine a world with many fewer people, who are on average transcendentally happy. We can fiddle with the math to make this scenario come out to have greater average and total happiness than the current world. Doesn’t matter. Exact same flinch.
This leads me to believe that my moral intuitions have a strong status quo bias. The presence of a status quo bias in itself isn’t an argument for either total or average utilitarianism, but it is a reminder to be intensely skeptical of our response to thought experiments that involve changing the status quo and even to be wary of the order that options are presented in.
The final collapse Smart introduces is that between regular utilitarians and negative utilitarians. Negative utilitarians believe that only suffering is morally relevant and that the most important moral actions are those that have the consequence of reducing suffering. Smart points out that you can raise both the total and average happiness of a population by reducing suffering and furthermore that there is widespread agreement on what reduces suffering. So Smart expects utilitarians of all kinds (including negative) to primarily focus on reducing suffering anyway. Basically, despite the profound philosophical differences between regular and negative utilitarians, we should expect them to behave equivalently. Which, by the very first collapse (if it walks like a duck…), shows that we can treat them as philosophical equivalents, at least in the present world.
In my experience, this is more or less true. Many of the negative utilitarians I am aware of mainly exercise their ethics by donating 10% of their income to GiveWell’s most effective charities. The regular utilitarians… do the exact same. Quack.
As far as I can tell, Smart goes to all this work to show how many forms of utilitarianism collapse together so that he can present a system that isn’t at war with itself. Being able to portray utilitarianism as a simple, unified system (despite the many ways of doing it) heads off many simple criticisms.
While I doubt many people avoided utilitarianism because there are lingering questions about total versus average happiness, per se, these little things add up. Saying “yes, there are a bunch of little implementation details that aren’t agreed upon” is a bad start to an ethical system, unless you can immediately follow it up with “but here’s fifty pages of why that doesn’t matter and you can just do what comes naturally to you (under the aegis of utilitarianism)”.
Let’s talk a bit about what comes naturally to people outside the context of different forms of utilitarianism. No one, not even Smart, sits down and does utilitarian calculus before making every little decision. For most tasks, we can ignore the ethical considerations (e.g. there is broad, although probably not universal agreement that there aren’t hidden moral dimensions to opening a door). For some others, our instincts are good enough. Should you thank the woman at the grocery store checkout? You probably will automatically, without pausing to consider if it will increase the total (or average) happiness of the world.
Like in the case of thanking random service industry workers, there are a variety of cases where we actually have pretty good rules of thumb. These rules of thumb serve two purposes. First, they allow us to avoid spending all of our time contemplating if our actions are right or wrong, freeing us to actually act. Second, they protect us from doing bad things out of pettiness or venality. If you have a strong rule of thumb that violence is an inappropriate response to speech you disagree with, you’re less likely to talk yourself into punching an odious speaker in the face when confronted with them.
It’s obviously important to pick the right heuristics. You want to pick the ones that most often lead towards the right outcomes.
I say “heuristics” and “rules of thumb” because the thing about utilitarians and rules is that they always have to be prepared to break them. Rules exist for the common cases. Utilitarians have to be on guard for the uncommon cases, the ones where breaking a rule leads to greater good overall. Having a “don’t cause people to die” rule is all well and good. But you need to be prepared to break it if you can only stop mass death from a runaway trolley by pushing an appropriately sized person in front of it.
Smart seems to think that utilitarianism only comes up for deliberative actions, where you take the time to think about them, and that it shouldn’t necessarily cover your habits. This seems like an abdication to me. Shouldn’t a clever utilitarian, realizing that she only uses utilitarianism for big decisions, spend some time training her reflexes to more often give the correct utilitarian solution, while also training herself to be more careful of her rules of thumb and think ethically more often? Smart gave no indication that he thinks this is the case.
The discussion of rules gives Smart the opportunity to introduce a utilitarian vocabulary. An action is right if it is the one that maximizes expected happiness (crucially, this is a summation across many probabilities and isn’t necessarily the action that will maximize the chance of the happiest outcome) and wrong otherwise. An action is rational if a logical being in possession of all the information you possess would think you to be right if you did it. All other actions are irrational. A rule of thumb, disposition, or action is good if it tends to lead to the right outcomes and bad if it tends to lead to the wrong ones.
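Smart’s parenthetical about expectation is worth a worked example. With made-up numbers (the gamble and the sure bet are my illustration, not Smart’s), an action can maximize the chance of the happiest outcome while still being wrong in his vocabulary:

```python
# Two hypothetical actions, each a list of (probability, happiness)
# outcome pairs. The numbers are illustrative only.
gamble   = [(0.5, 100.0), (0.5, 0.0)]   # best shot at the happiest outcome
sure_bet = [(1.0, 60.0)]                # guaranteed moderate happiness

def expected_happiness(action):
    # Expectation: a summation across all the probabilities.
    return sum(p * h for p, h in action)

# The "right" action in Smart's vocabulary is the sure bet (60 > 50),
# even though the gamble maximizes the chance of the happiest outcome.
assert expected_happiness(sure_bet) > expected_happiness(gamble)
```

The gamble gives a 50% chance of the best possible evening, but its expectation is only 50; the dull sure bet, at 60, is the right action.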
This vocabulary becomes important when Smart talks about praise, which he believes is an important utilitarian concern in its own right. Praise increases people’s propensity towards certain actions or dispositions, so Smart believes a utilitarian ought to consider if the world would be better served by more of the same before she praises anything. This leads to Smart suggesting that utilitarians should praise actions that are good or rational even if they aren’t right.
It also implies that utilitarians doing the right thing must be open to criticism if it requires bad actions. One example Smart gives is a utilitarian Frenchman cheating on wartime rationing in 1940s England. The Frenchman knows that the Brits are too patriotic to cheat, so his action (and the actions of the few others that cheat) will probably fall below the threshold for causing any real harm, while making him (and the other cheaters) happier. The calculus comes out positive and the Frenchman believes it to be the right action. Smart acknowledges that this logic is correct, but he points out that by the similar logic, the Frenchman should agree that he must be severely punished if caught, so as to discourage others from doing the same thing.
This actually reminds me of something Hannah Arendt brushed up against in Eichmann in Jerusalem while talking about how the moral constraints on people are different than the ones on states. She gives the example of Soghomon Tehlirian, the Armenian exile who assassinated one of the triumvirate of Turkish generals responsible for the Armenian genocide. Arendt believes that it would have been wrong for the Armenian government to assassinate the general (had one even existed at the time), but that it was right for a private citizen to do the deed, especially given that Tehlirian did not seek to hide his crimes or resist arrest.
From a utilitarian point of view, the argument would go something like this: political assassinations are bad, in that they tend to cause upheaval, chaos, and ultimately suffering. On the other hand, there are some leaders who the world would clearly be better off without, if not to stop their ill deeds in their tracks, then to strike fear and moderation into the hearts of similar leaders.
Were the government of any country to carry out these assassinations, it would undermine the government’s ability to police murder. But when a private individual does the deed and then immediately gives herself up into the waiting arms of justice, the utility of the world is increased. If she has erred in picking her target and no one finds the assassination justified, then she will be promptly punished, disincentivizing copy-cats. If instead, like Tehlirian, she is found not guilty, it will only be because the crimes committed by the leader she assassinated were so brutal and clear that no reasonable person could countenance them. This too sends a signal.
That said, I think Smart takes his distinctions between right and good a bit too far. He cautions against trying to change the non-utilitarian morality of anyone who already tends towards good actions, because this might fail half-way, weakening their morality without instilling a new one. Likewise, he is skeptical of any attempt to change the traditions of a society.
This feels too much like trying to have your cake and eat it too. Utilitarianism can be criticized because it is an evangelical ethical system that gives results far from moral intuitions in some cases. From a utilitarian point of view, it is fairly clearly good to have more utilitarians willing to hoover up these counter-intuitive sources of utility. If all you care about are the ends, you want more people to care about the best ends!
If the best way to achieve utilitarian ends wasn’t through utilitarianism, then we’re left with a self-defeating moral system. In trying to defend utilitarianism from the weak critique that it is pushy and evangelical, both in ways that are repugnant to all who engage in cultural or individual ethical relativism and in ways that are repugnant to some moral intuitions, Smart opens it up to the much stronger critique that it is incoherent!
Smart by turns seems to seek to rescue some commonly held moral truths when they conflict with utilitarianism while rejecting others that seem no less contradictory. I can hardly say that he seems keen to show utilitarianism is in fact in harmony with how people normally act – he clearly isn’t. But he also doesn’t always go all (or even part of) the way in choosing utilitarianism over moral intuitions.
Near the end of the book, when talking about a thought experiment introduced by one McCloskey, Smart admits that the only utilitarian action is to frame and execute an innocent man, thereby preventing a riot. McCloskey anticipated him, saying: “But as far as I know, only J.J.C. Smart among the contemporary utilitarians is happy to adopt this ‘solution’”.
Here I must lodge a mild protest. McCloskey’s use of the word ‘happy’ surely makes me look a most reprehensible person. Even in my most utilitarian moods, I am not happy about this consequence of utilitarianism… since any injustice causes misery and so can be justified only as the lesser of two evils, the fewer the situations in which the utilitarian is forced to choose the lesser of two evils, the better he will be pleased.
This is also the man who said (much as I have) that “admittedly utilitarianism does have consequences which are incompatible with the common moral consciousness, but I tended to take the view ‘so much the worse for the common moral consciousness’.”
All this leaves me baffled. Why the strange mixture? Sometimes Smart goes far further than it seems any of his contemporaries would have. Other times, he stops short of what seems to me the truly utilitarian solution.
On the criticism that utilitarianism compels us always in moral action, leaving us no time to relax, he offers two responses. The first is that perhaps people are too unwilling to act and would be better served by being more spurred on. The second is that it may be that relaxing today allows us to do ten times the good tomorrow.
But set this and his support for rules of thumb on one side against his support for executing the innocent man and his long spiel on how a bunch of people wireheading wouldn’t be that bad (a spiel that convinced me, I might add) on the other, and I’m left with an unclear overall picture. As an all-is-fine defence of utilitarianism, it doesn’t go far enough. As a bracing lecture about our degenerate non-utilitarian ways, it also doesn’t go far enough.
Leaving, I suppose, the sincere views of a man who pondered utilitarianism for much longer than I have. Sincerity is the only explanation that makes sense. It would imply that sometimes Smart gives a nod to traditional morality because he’s decided it aligns with his utilitarian ethics. Other times, he disagrees. At length. Maybe Smart is a man seeking to rescue what precious moral truths he can from the house fire that is utilitarianism.
Perhaps some of my confusion comes from another confusion, one that seems to have subtly infected many utilitarians. Smart is careful to point out that the atomic belief underlying utilitarianism is general benevolence. Benevolence, note, is not altruism. The individual utilitarian matters just as much – or as little – as everyone else. Utilitarians in Smart’s framework have no obligation to run themselves ragged for another. Trading your happiness for another’s will only ever be an ethically neutral act to the utilitarian.
Or, I suspect, the wrong one. You are best placed to know yourself and best placed to create happiness for yourself. It makes sense to include some sort of bias towards your own happiness to take this into account. Or, if this feels icky to you, you could handle it at the level of probabilities. You are more likely to make yourself happy than someone else (assuming you’ve put some effort towards understanding what makes you happy). If you are 80% likely to make yourself happy for an evening and 60% likely to make someone else happy, your clear utilitarian duty is to yourself.
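The 80%/60% comparison can be made explicit. A small sketch, assuming equal stakes on each side (the one-util payoff, and the doubled payoff for a friend in genuine need, are my assumptions, not the post’s):

```python
# The post's probabilities of producing a happy evening; the payoff
# of one "util" per happy evening is an assumption for illustration.
p_self, p_other = 0.80, 0.60
happy_evening = 1.0

ev_self  = p_self * happy_evening    # 0.80 expected utils
ev_other = p_other * happy_evening   # 0.60 expected utils
assert ev_self > ev_other            # with equal stakes, the duty is to yourself

# But the verdict is sensitive to the stakes: if helping a friend in
# genuine need yields, say, twice the happiness, the duty flips.
ev_friend_in_need = p_other * 2.0
assert ev_friend_in_need > ev_self   # 1.20 > 0.80
```

This is really the fecundity point from earlier in a different guise: probabilities alone don’t settle the calculus; how much happiness is at stake on each side matters too.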
This is not a suggestion to go become a hermit. Social interactions are very rarely as zero sum as all that. It might be that the best way to make yourself happy is to go help a friend. Or to go to a party with several people you know. But I have seen people risk burnout (and have risked it myself) by assuming it is wrong to take any time for themselves when they have friends in need.
These are all my own thoughts, not Smart’s. For all of his talk of utilitarianism, he offers little advice on how to make it a practically useful system. All too often, Smart retreats to the idea of measuring the total utility of a society or world. This presents a host of problems and raises two important questions.
First, can utility be accurately quantified? Smart tries to show that different ways of measuring utility should be roughly equivalent in qualitative terms, but it is unclear if this follows at a quantitative level. Stability analysis (where you see how sensitive your result is to different starting assumptions) is an important tool for checking the robustness of conclusions in engineering projects. I have a hunch that quantitatively, utilitarian results to many problems will be highly unstable when a variety of forms of utilitarianism are tried.
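Here is the kind of instability I mean, in miniature: the same two outcomes ranked differently depending on which form of utilitarianism you plug in. All of the populations and utility figures are invented for illustration.

```python
# An invented example of how a policy ranking can flip between total and
# average utilitarianism. All numbers are made up for illustration.

def total_utility(utilities):
    """Total utilitarianism: sum everyone's utility."""
    return sum(utilities)

def average_utility(utilities):
    """Average utilitarianism: mean utility per person."""
    return sum(utilities) / len(utilities)

policy_a = [9, 9, 9]                 # a small, very happy population
policy_b = [4, 4, 4, 4, 4, 4, 4, 4]  # a large, moderately happy population

# Total utilitarianism prefers B (32 > 27); average prefers A (9 > 4).
prefers_total = "B" if total_utility(policy_b) > total_utility(policy_a) else "A"
prefers_average = "A" if average_utility(policy_a) > average_utility(policy_b) else "B"
```

A conclusion that survives this kind of perturbation is more trustworthy than one that flips as soon as you swap the aggregation rule.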
Second, how should we deal with utility in the future? Smart claims that beyond a certain point we can ignore side effects (as unintended good side effects should cancel out unintended ill side effects; this is especially important when it comes to things like saving lives) but that doesn’t give us any advice on how we can estimate effects.
We are perhaps saved here by the same collapse that aligned normal utilitarians with negative utilitarians. If we cannot quantify joy, we can certainly quantify misery. Doctors can tell you just how much quality of life a disease can sap (there are tables for this), not to mention the chances that a disease might end a life outright. We know the rates of absolute poverty, maternal deaths, and malaria prevalence. There is more than enough misery in the world to go around, and utilitarians who focus on ending misery do not seem to be at risk of running out of ethical duties any time in the near future.
(If ending misery is important to you, might I suggest donating a fraction of your monthly income to one of GiveWell’s top recommended charities? These are the charities that most effectively use money to reduce suffering. If you care about maximizing your impact, GiveWell is a good way to do it.)
Although speaking of the future, I find it striking how little utilitarianism has changed in the fifty-six years since Smart first wrote his essay. He pauses to comment on the risk of a recursively self-improving AI and to talk about the potential future moral battles over factory farming. I’m part of a utilitarian meme group and these are the same topics people joke about every day. It is unclear if these are topics that utilitarianism predisposes people to care about, or if there was some indirect cultural transmission of these concerns over the intervening years.
There are many more gems – and frustrations – in Smart’s essay. I can’t cover them all without writing a pale imitation of his words, so I shan’t try any more. As an introduction to the different types of utilitarianism, this essay was better than any other introduction I’ve read, especially because it shows all of the ways that various utilitarian systems fit together.
As a defense of utilitarianism, it is comprehensive and pragmatic. It doesn’t seek to please everyone and doesn’t seek to prove utilitarianism. It lays out the advantages of utilitarianism clearly, in plain language, and shows how the disadvantages are not as great as might be imagined. I can see it being persuasive to anyone considering utilitarianism, although in this it is hampered by its position as the first essay in the collection. Anyone convinced by it must then read through another seventy pages of arguments against utilitarianism, which will perhaps leave them rather less convinced.
As a work of academic philosophy, it’s interesting. There’s almost no meta-ethics or metaphysics here. This is a defense written entirely on its own, without recourse to underlying frameworks that might be separately undermined. Smart’s insistence on laying out his arguments plainly leaves him little room to retreat (except around average vs. total happiness). I’ve always found this a useful type of writing; even when I don’t agree, the ways that I disagree with clearly articulated theses can be illuminating.
It’s a pleasant read. I’ve had mostly good luck reading academic philosophy. This book wasn’t a struggle to wade through and it contained the occasional amusing turn of phrase. Smart is neither dry lecturer nor frothing polemicist. One is almost put in mind of a kindly uncle, patiently explaining his way through a complex, but not needlessly complicated subject. I highly recommend reading it and its companion.
[Content Warning: Effective Altruism, the Drowning Child Argument]
I’m a person who sometimes reads about ethics. I blame Catholicism. In Catholic school, you have to take a series of religion courses. The first two are boring. Jesus loves you, is your friend, etc. Thanks, school. I got that from going to church all my life. But the later religion classes were some of the most useful courses I’ve taken. Ever. The first was world religions. Thanks to that course, “how do you know that about [my religion]?” is a thing I’ve heard many times.
The second course was about ethics, biblical analysis, and apologetics. The ethics part hit me the hardest. I’d always loved systematizing and here I was exposed to Very Important Philosophy People engaged in the millennia-long project of systematizing fundamental questions of right and wrong under awesome-sounding names, like “utilitarianism” and “deontology”.
I’ve learned (and written) a lot more about ethics since those days and I’ve read through a lot of thought experiments. When it comes to ethics, there seem to be two ways a thought experiment can go: it can show that an ethical system conflicts with our moral intuitions, or it can show that an ethical system fails to universalize.
Take the common criticism of deontology, that the Kantian moral imperative to always tell the truth applies even when you could achieve a much better outcome with a white lie. The thought experiment that goes with this point asks us to imagine a person with an axe intent on murdering our best friend. The axe murderer asks us where our friend can be found and warns us that if we don’t answer, they’ll kill us. Most people would tell the murderer a quick lie, then call the police as soon as they leave. Deontologists say that we must not lie.
Most people have a clear moral intuition about what to do in a situation like that, a moral intuition that clashes with what deontologists suggest we should do. Confronted with this mismatch, many people will leave with a dimmer view of deontology, convinced that it “gets this one wrong”. That uncertainty opens a crack. If deontology requires us to tell the truth even to axe murderers, what else might it get wrong?
The other way to pick a hole in ethical systems is to show that the actions that they recommend don’t universalize (i.e. they’d be bad if everyone did them). This sort of logic is perhaps most familiar to parents of young children, who, when admonishing their sprogs not to steal, frequently point out that they have possessions they cherish, possessions they wouldn’t like stolen from them. This is so successful because most people have an innate sense of fairness; maybe we’d all like it if we could get away with stuff that no one else could, but most of us know we’ll never be able to, so we instead stand up for a world where no one else can get away with the stuff we can’t.
All of the major branches of ethics fall afoul of either universalizability or moral intuitions in some way.
Deontology (doing only things that universalize and doing them with pure motives) and utilitarianism (doing whatever leads to the best outcomes for everyone) both tend to universalize really well. This is helped by the fact that both of these systems treat people as virtually interchangeable; if you are in the same situation as I am, these ethical systems would recommend the same thing for both of us. Unfortunately, both deontology and utilitarianism have well known cases of clashing with moral intuitions.
Egoism (do whatever is in your self-interest) doesn’t really universalize. At some point, your self-interest will come into conflict with the self-interest of other people and you’re going to choose your own.
Virtue ethics (cultivating virtues that will allow you to live a moral life) is more difficult to pin down and I’ll have to use a few examples. At first glance, virtue ethics tends to fit in well with our moral intuitions and universalizes fairly well. But virtue ethics has as its endpoint virtuous people, not good outcomes, which strikes many people as the wrong thing to aim for.
For example, a utilitarian may consider their obligation to charity to exist as long as poverty does. A virtue ethicist has a duty to charity only insofar as it is necessary to cultivate the virtue of charity; their attempt to cultivate the virtue will run the same course in a mostly equal society and a fantastically unequal one. This trips up the commonly held moral intuition that the worse the problem, the greater our obligation to help.
Virtue ethics may also fail to satisfy our moral intuitions when you consider the societal nature of virtue. In a world where slavery is normalized, virtue ethicists often don’t critique slavery, because their society has no corresponding virtue for fighting against the practice. This isn’t just a hypothetical; Aristotle and Plato, two of the titans of virtue ethics, defended slavery in their writings. When you have the meta moral intuition that your moral intuitions might change over time, virtue ethics can feel subtly off to you. “What virtues are we currently missing?” you may ask yourself, or “how will the future judge those considered virtuous today?”. In many cases, the answers to these questions are “many” and “poorly”. See the opposition to ending slavery, opposition to interracial marriage, and opposition to same-sex marriage for salient examples.
It was so hard for me to attack virtue ethics with moral intuitions because virtue ethics is remarkably well suited to them. This shouldn’t be too surprising. Virtue ethics and moral intuitions arose in similar circumstances – small, closely knit, and homogenous groups of humans with very limited ability to affect their environment or effect change at a distance.
Virtue ethics works best when dealing with small groups of people where everyone is mutually known. When you cannot help someone half a world away, all that really matters is that you have developed the virtue of charity enough that a neighbour can ask for your help and receive it. Most virtue ethicists would agree that there is virtue in being humane to animals – after all, cruelty to other animals often shows a penchant for cruelty to humans. But the virtue ethics case against factory farming is weak from the perspective of the end consumer. Factory farming is horrifically cruel. But it is not our cruelty, so it does not impinge on our virtue. We have outsourced this cruelty (and many others) and so can be easily virtuous in our sanitized lives.
Moral intuitions are the same way. I’d like to avoid making any claims about why moral intuitions evolved, but it seems trivially true to say that they exist, that they didn’t face strong negative selection pressure, and that the environment in which they came into being was very different from the modern world.
Because of this, moral intuitions tend to only be activated when we see or hear about something wrong. Eating factory farmed meat does not offend the moral intuitions of most people (including me), because we are well insulated from the horrible cruelty of factory farming. Moral intuitions are also terrible at spurring us to action beyond our own immediate network. From the excellent satirical essay Newtonian Ethics:
Imagine a village of a hundred people somewhere in the Congo. Ninety-nine of these people are malnourished, half-dead of poverty and starvation, oozing from a hundred infected sores easily attributable to the lack of soap and clean water. One of those people is well-off, living in a lovely two-story house with three cars, two laptops, and a wide-screen plasma TV. He refuses to give any money whatsoever to his ninety-nine neighbors, claiming that they’re not his problem. At a distance of ten meters – the distance of his house to the nearest of their hovels – this is monstrous and abominable.
Now imagine that same hundredth person living in New York City, some ten thousand kilometers away. It is no longer monstrous and abominable that he does not help the ninety-nine villagers left in the Congo. Indeed, it is entirely normal; any New Yorker who spared too much thought for the Congo would be thought a bit strange, a bit with-their-head-in-the-clouds, maybe told to stop worrying about nameless Congolese and to start caring more about their friends and family.
If I can get postmodern for a minute, it seems that all ethical systems draw heavily from the time they are conceived. Kant centred his deontological ethics in humanity instead of in God, a shift that makes sense within the context of his time, when God was slowly being removed from the centre of western philosophy. Utilitarianism arose specifically to answer questions around the right things to legislate. Given this, it is unsurprising that it emerged at a time when states were becoming strong enough and centralized enough that their legislation could affect the entire populace.
Both deontology and utilitarianism come into conflict with our moral intuitions, those remnants of a bygone era when we were powerless to help all but the few directly surrounding us. When most people are confronted with a choice between their moral intuitions and an ethical system, they conclude that the ethical system must be flawed. Why?
What causes us to treat ancient, largely unchanging intuitions as infallible and carefully considered ethical systems as full of holes? Why should it be this way and not the other way around?
Let me try and turn your moral intuitions on themselves with a variant of a famous thought experiment. You are on your way to a job interview. You already have a job, but this one pays $7,500 more each year. You take a short-cut to the interview through a disused park. As you cross a bridge over the river that bisects the park, you see a child drowning beneath you. Would you save the child, even if it means you won’t get the job and will have to make do with $7,500 less each year? Or would you let her drown and continue on the way to your interview? Our moral intuitions are clear on this point. It is wrong to let a child die because we wish to keep more money in our pockets each year.
Can you imagine telling someone about the case in which you don’t save the child? “Yeah, there was a drowning child, but I’ve heard that Acme Corp is a real hard-ass about interviews starting on time, so I just waltzed by her.” People would call you a monster!
Yet your moral intuitions also tell you that you have no duty to prevent the malaria-linked deaths of children in Malawi, even though you would be saving a child’s life at exactly the same cost. The median Canadian family income is $76,000. If a family making this amount of money donated 10% of their income to the Against Malaria Foundation, they would be able to prevent one death from malaria every year or two. No one calls you monstrous for failing to prevent these deaths, even though the costs and benefits are exactly the same. Ignoring the moral worth of people halfway across the world is practically expected of us and is directly condoned by our distance constrained moral intuitions.
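The arithmetic behind this claim is simple. The income figure is the one from the paragraph above; the cost-per-life range is an assumption for illustration, since GiveWell’s estimates for the Against Malaria Foundation vary from year to year.

```python
# Back-of-the-envelope arithmetic for the donation claim. The cost-per-life
# figures below are illustrative assumptions, not GiveWell's official numbers.

median_income = 76_000                   # median Canadian family income
annual_donation = 0.10 * median_income   # a 10% pledge: $7,600/year

# Assumed range for the cost of preventing one malaria death with bednets.
years_per_life = {
    cost: cost / annual_donation
    for cost in (5_000, 10_000, 15_000)
}
# At an assumed ~$10,000 per life, a 10% pledge prevents roughly one
# death every ~1.3 years - i.e. "every year or two".
```

Anywhere in that assumed cost range, the pledge lands in the neighbourhood of one prevented death every year or two, which is the claim in the text.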
Your moral intuitions don’t know how to cope with a world where you can save a life half the world away with nothing more than money and a well-considered donation. It’s not their fault. They didn’t develop for this. They have no way of dealing with a global community or an interconnected world. But given that, why should you trust the intuitions that aren’t developed for the situation you find yourself in? Why should you trust an evolutionary vestige over elegant and well-argued systems that can gracefully cope with the realities of modern life?
I’ve chosen utilitarianism over my moral intuitions, even when the conclusions are inconvenient or truly terrifying. You can argue with me about what moral intuitions say all you want, but I’m probably not going to listen. I don’t trust moral intuitions anymore. I can’t trust anything that fails to spur people towards the good as often as moral intuitions do.
Utilitarianism says that all lives are equally valuable. It does not say that all lives are equally easy to save. If you want to maximize the good that you do, you should seek out the lives that are cheapest to save and thereby save as many people as possible.
The Slate Star Codex post is a response to a piece Ken put up after the furor around Justine Sacco’s tweets a few years back. Ken is defending the right of everyone else on Twitter to say whatever they like in response to Justine Sacco’s thoughtless tweets. The particular part Scott highlights is:
The phrase “the spirit of the First Amendment” often signals approaching nonsense. So, regrettably, does the phrase “free speech” when uncoupled from constitutional free speech principles. These terms often smuggle unprincipled and internally inconsistent concepts — like the doctrine of the Preferred First Speaker. The doctrine of the Preferred First Speaker holds that when Person A speaks, listeners B, C, and D should refrain from their full range of constitutionally protected expression to preserve the ability of Person A to speak without fear of non-governmental consequences that Person A doesn’t like. The doctrine of the Preferred First Speaker applies different levels of scrutiny and judgment to the first person who speaks and the second person who reacts to them; it asks “why was it necessary for you to say that” or “what was your motive in saying that” or “did you consider how that would impact someone” to the second person and not the first. It’s ultimately incoherent as a theory of freedom of expression.
Scott disagrees. He argues that there is a spirit of the First Amendment and it’s summed up by Eliezer Yudkowsky with: “Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.”
Scott asks us to imagine at what point damaging responses become appropriate:
What does “bullet” mean in the quote above? Are other projectiles covered? Arrows? Boulders launched from catapults? What about melee weapons like swords or maces? Where exactly do we draw the line for “inappropriate responses to an argument”?
Scott’s eventual line in the sand is: “Bad argument gets counterargument. Does not get bullet. Does not get doxxing. Does not get harassment. Does not get fired from job. Gets counterargument. Should not be hard.”
I’m sympathetic to what Scott was trying to do here, but ultimately, I’m on the side of Ken.
Scott wants to talk about the spirit of the First Amendment, which is fine. But the spirit he wants to read into it is divorced from the reality of constitutional rights. The First Amendment, like many of the rights in the US Constitution, is a negative right – it prevents the government from acting in a certain way, rather than saying it must provide people with a certain thing. The US Government can’t stop you from saying what you want, but it has no obligation to make you heard. If everyone ignores you, the government will not intervene.
It’s pretty weird to try and read a positive spirit into a negative right. The framers of the Bill of Rights knew when the rights they were setting down were negative rights. They understood the difference between negative and positive rights. To claim that the spirit of a definitely negative right is actually positive feels like an unfair attempt to halo a set of normative ethics (or perhaps aesthetics) with the positive affect that many Americans hold for their constitution.
As far as the government is concerned, as long as people are debating and silencing through legal means, there actually isn’t a distinction between trying to debate and trying to silence. Neither type of speech can be stopped. And I think it’s trivially easy to come up with examples for why neither should be stopped as a matter of routine (if you need inspiration, think of what your worst political enemies call “hate speech” and shudder about it being banned).
Luckily, negative speech and association rights and the government monopoly on force mean that it is really hard to credibly threaten people’s freedom of association, so Scott is free to build a subculture that shares his beliefs about normative ethics. A subculture is free to demand positive rights for all members within the context of subculture-related discussions and has free association as the perfect tool for enforcing them.
I’m glad that this is what rationalists are trying to do and I like our subculture and all, but we can’t claim that our weird norms are universal positive rights. I know this is a common thing for subcultures to do, but it’s embarrassing.
Remember Horseshoe Theory? It’s the observation that in many ways, the extremist wings of political movements resemble each other more than centrists or their more moderate brethren. We see this in anti-Semitism, for example. In any given week this year, you’re about as likely to see anti-Semitism come from Stormfront… or the British Labour Party.
I’ve been thinking about horseshoe theory in light of another issue: the police. Let me explain.
I disagree strongly with calls to abolish the police. It’s not that I’m a great fan of the police: I’m a member of the Canadian Civil Liberties Association and I believe in strong checks and balances on law enforcement power. It’s just that one lesson we’ve learned repeatedly over the past century is that radical change to public institutions rarely goes smoothly. We should always counsel caution when people suggest tearing up existing institutions without really planning for what will happen next.
So despite high profile incidents of unjustified police violence, I support the state’s monopoly on the means of violence. Beyond simple caution, here are my reasons.
Violence has been with us forever. War is rightfully one of the four horsemen of the apocalypse, one of those four almost primal forces responsible for killing so many humans. Trying to reduce violence is important. But it isn’t the only fight. Any policy proposal sees diminishing returns. Beyond a certain point, effort that could be spent reducing violence could more effectively improve lives through other means (for example, by fighting malaria, or global warming).
We could reduce violence conducted by the state by abolishing the police. But state violence is a useful lever for other policy priorities. Trying to reach other goals (like economic equality or public order) is often worth some risk of state violence.
This process of trading-off must be undertaken by each body politic, as willingness to tolerate risk differs between countries. Canada, America, and Switzerland, for example, all have accepted higher rates of gun violence than other developed countries in exchange for more freedom to own and use firearms.
People generally have a right to own whatever they want to own. People also have a right not to be randomly shot. With guns, these two rights can be in conflict. The more people who have guns, the more likely I am to be randomly shot. Society has to come together and negotiate a trade-off between these two rights that they can (collectively) stomach. The weird thing about these negotiated trade-offs is that they can look ridiculous, even from inside of one (ask any American liberal how they feel about gun rights and you’ll see what I mean). It is certainly possible to have values such that no amount of firearm ownership is justifiable if it leads to deaths. Just as it is possible to have values such that no amount of intoxicant usage is permissible if it leads to death. 
Like intoxicants or guns, society must negotiate on the amount of violence it will permit. These negotiations are most convenient when they can be done with a single organization, or a single umbrella group. Consider, for example, the relative difficulty of abolishing the death penalty (one form of violence undertaken by states) in Singapore, America, and Syria.
In Singapore, abolishing the death penalty would be relatively simple (not to be confused with easy). There is one organization (the city-state) with an absolute monopoly on violence. To abolish the death penalty, lobbyists can focus their effort on one group of people. They will probably be opposed, because any organization who wishes to keep the death penalty will also know exactly who to lobby. This isn’t so much a strength or weakness as it is the endpoint of yet another negotiation. Singapore has chosen a system of government where people only need to worry about one set of rules. This is a sensible choice for a small, densely populated island without a lot of local variation.
In America, there are fifty-one authorities that must be lobbied in order to abolish the death penalty. Each state has a limited monopoly on violence solely within its borders (and therefore controls crime and punishment within them). But there is also a federal government that has a separate limited monopoly on violence, in this case, violence across state lines or against the union as a whole. In such a system, it is perhaps easier for opponents of certain types of violence to see them abolished in one region or another (see, for example, the death penalty in Massachusetts), but much harder to see it abolished across the nation as a whole.
I should mention that this isn’t just a matter of scale or population size. Canada is also a federal democracy, but the monopoly on violence is held solely by the federal government. Therefore, there was only one organization that had to be convinced to end the death penalty.
Imagine now trying to abolish the death penalty in Syria. You would have to negotiate with the Assad Regime, the Kurds, Daesh, Al-Nusra, and the scores of small rebel groups that hold and administer territory. Not only will you face difficulty in each negotiation, you will face difficulty even trying to negotiate, because there is no umbrella organization with the means to force smaller subdivisions of political power to allow you freedom of movement or guarantee minimum rights. This is a different situation than in America, where the federal government uses (what is ultimately) the threat of violence to ensure that states allow the free flow of commerce, ideas, and people.
A single organization (or set of franchises) with a monopoly on violence doesn’t just make it easier to target specific cases of violence. It can in fact reduce the overall amount of violence in a society simply by virtue of existing. This is the other reason that Syria sees much more violence than polities where there is an organization that holds a monopoly on violence. As long as no organization exists to use the threat of violence to force other actors to refrain from violence – to jealously guard its own monopoly on violence, as it were – then these actors will use violence in disagreements with each other.
In a civil war, the central government loses its monopoly on violence and other actors attempt to use violence to gain their own monopoly. We see the same pattern of increasing violence in the Mexican Drug Trade. Aggressive government enforcement broke cartel monopolies on local violence, allowing various groups to fight to establish their own hegemony.
In the context of police violence, having one group to negotiate with is extremely useful. It means that there’s only one battle to be fought. And in constitutional democracies, it gives reformers a powerful weapon by way of the court system. The courts may force (using the threat of violence) individual police departments to conform to certain practices. Imagine a country instead with only private security forces and a court system without access to the threat of violence. It would be impossible to enforce any rulings on these private security forces.
Abolishing the police will not abolish people’s desire for protection. Leftists should be scared of unaccountable private security firms. Anyone who loves peace and order should be scared of the conflicts between these firms.
17th Century Philosophy
There is a very short list of political philosophers whose works have shaped and guided revolutions. To have written works that inspire such drastic change in society doesn’t require or even suggest correctness. But it does suggest an understanding of the values that people hold closest to their hearts.
During Locke’s life, there was open debate among philosophers as to the “state of nature” – the shape human existence would take without government or laws. The state of nature was an artificial construct. It shares more with the ideal zero energy state used in molecular dynamics simulations than it does with prehistorical societies; it’s a baseline to compare political arrangements with, much as zero energy states are a baseline to compare molecular arrangements with.
Hobbes famously claimed that in the state of nature life was “solitary, poor, nasty, brutish, and short” – a war of all against all. On the other hand, Jean-Jacques Rousseau believed that the state of nature was the only state of true freedom; to him it was much preferable to life in the eighteenth century.
John Locke held a different view. He believed that the state of nature was generally pleasant – in the state of nature, all people had the rights “to order their actions, and dispose of their possessions and persons, as they think fit, within the bounds of the law of nature.” These “natural laws” might be broken by some people, Locke reasoned, at which point all people would have a right to punish them for their transgressions (as you can see, Locke was a Christian philosopher and his work is riddled with references to The Almighty; a less religious appeal to natural law would be an appeal to the moral impulses that seem to be more or less universal).
Locke did see one problem with this set-up. In most cases, those most likely to pursue justice would be the aggrieved party. While Locke believed that natural law gave everyone a right to punish wrongdoers, he also believed that in practice punishment would come from those they wronged. Locke understood that people were imperfect and not always capable of mercy nor proportionality. So Locke reasoned that justice could not exist without society and the people society appoints to mete it out.
Locke’s judges would by necessity need some force of bailiffs to assist them. There are an enormous number of practical tasks that need to be done for judges to do their jobs. Suspects must be apprehended and interrogated, witnesses interviewed, physical evidence collected, and crimes investigated. These tasks must also be undertaken by someone other than the aggrieved party for there to be any chance at fairness. This is where police come in.
I don’t believe that the police are the only thing preventing us from existing in Hobbes’s state of nature. People are basically good and just. But they are also flawed and imperfect, closer to monkeys than gods. I also don’t believe in Rousseau’s claims of an earthly paradise; institutions do too much good for me to believe that life would improve without them (although, had I lived when he did, I might have felt differently). Locke, though – Locke, I believe, got it right. Without government, most people would be good, help their neighbours, and continue as they always had. But some people would take what isn’t theirs or hurt others.
I’ve heard total equality bandied about as a solution to the problem of violence and theft in the absence of the police. The logic goes that if everyone had total equality, we wouldn’t need police. This isn’t a real solution. Inequality currently exists. There is no way to redistribute possessions that isn’t coercive. You’re not going to convince Peter Thiel to give away his possessions out of the goodness of his heart (he doesn’t have one, except in the literal sense). The only way to force him to give money away is through the threat of force. This is impossible without an organization capable of carrying through on that threat. All legislation, whether it’s criminal law, CO2 emissions targets, or consumer protection, relies ultimately on the threat of violence against those who don’t follow it. Redistributive legislation – taxation – is no different.
Perhaps we could achieve equality and then abolish the police. But equality is a disequilibrium. Even if all skills were equally in demand (they aren’t) and all people equally capable of work (they aren’t), innate differences in desire for work or possessions would remain. Some people would work more – and presumably be rewarded more – than others. Even at the height of collectivism in communist Russia, with private ownership of any means of production outlawed, people found ways to game the system or took to the black market to accrue wealth. Equality can’t last without someone to enforce it, violently if it comes to that. You can call these enforcers whatever you want, but they will always be essentially ‘the police’.
Leaving that problem aside, there is no evidence that equality would stop all crime. In a society that undergoes radical transformation, there would be sore losers, willing to fight to get their old power back. There would also be all the crime that has nothing to do with wealth or possessions. Equality can’t stop murders committed by jealous spouses, road rage, hate crimes, vicious bullying, and a host of other crimes that draw their motive from something other than worldly possessions.
So this society without police would have to deal with crime. John Locke’s theories on the state of nature show us how this would fail. Justice, if it could even be called that, would become a private good, available to those with the resources to pay for it (admittedly, not a problem if you’re violently enforcing equality) or the wherewithal to do it themselves.
Without the police, people would have to seek their own justice. And they’d do it poorly. Given that society (at least, every society I know of) is racist, can we really expect individual people to do it any better than the police? Imperfect due process (and I know that due process counts for far less when you aren’t white) is surely better than none. Without the police, people of colour face a nation of George Zimmermans.
FiveThirtyEight.com has looked at violent crime data out of Chicago after the video of Laquan McDonald’s murder was released. They found a (statistically) significant increase in violent crimes, correlated with a decrease in proactive police behaviour (here measured by a decrease in police patrols and stops). They weren’t able to tease out the root cause of the decrease in proactive policing (it could have been the release of the new video or an increase in the amount of paperwork officers now must do after interacting with the public). The increase in violent crime bucks seasonal trends and can’t be blamed on a warmer than average winter – winters even warmer than the last one have seen no large spike in deaths.
This should not be surprising in light of the earlier sections. When the police are proactive, it is clear that the state has a monopoly on violence and is willing to use it. But as the police retreat and arrests go down, we see both the effects of different groups competing to fill the void and reprisal killings (which are much more difficult when suspects are behind bars).
I don’t wish to say that the answer to all violent crime is more police patrols and more random stops. As the FiveThirtyEight article points out, there are costs associated with proactive policing. Sometimes police tactics labelled as proactive are also unconstitutional. Opposing unconstitutional police tactics – even if they reduce violence – is one of the trade-offs around violence I discussed earlier and one I strongly endorse. If alienation, segregation, and police violence are the price we pay for a reduction in violence through proactive policing, then I would believe it to be a price not worth paying. Some police tactics should be off the table in a free and democratic society, even if they provide short term gains.
But if, on the other hand, proactive policing saves lives without damaging communities and breeding alienation, then I would oppose rolling back these policies. One article in a newspaper – even one renowned for its statistical acumen – isn’t enough to drive public policy. More research on the costs and benefits of various policing programs, including controlled studies, is desperately needed. To this end, the lack of a centralized police shooting database in the United States is both a national tragedy and a national disgrace.
A Legitimate State Monopoly Over the Means of Violence
The modern definition of a state acknowledges that it must have a monopoly on the means of violence within a territory. Without this monopoly, a state is powerless to do most of the things we associate with a state. It cannot enforce contracts or redistribute wealth. It cannot protect the environment or private property rights. I have yet to see a single serious policy proposal that adequately addresses how these could be accomplished without police.
This is all not to say that the current spate of police shootings is tolerable or should be tolerated. Free and open societies can and must expect better behaviour from those they empower with the ability to use violence in undertaking the aims of the state.
As citizens of a free and democratic society, we should continue to pressure our leaders to accept and perpetrate less violence. But we also must acknowledge that the bedrock our society is built on is the threat of physical force. This doesn’t make our society inherently illegitimate, but it does mean we must always be contemplative whenever we empower anyone to use that force – even if they’re people we otherwise agree with and especially when force is used primarily against the most vulnerable members of society.
We should fight for a society where the government holds only a legitimate monopoly on the means of violence. Where violence is used only when truly necessary and not a moment sooner. Where security forces are truly subservient to civilian leaders. Where police shootings of unarmed civilians are an aberration, not a regular occurrence. We aren’t there yet. But we could be.
 Trade-offs between different rights are the proper territory of legislation and acknowledging this is separate from the harmful moral relativism that has infected leftist rhetoric on international relations. There is a distinct difference between trade-offs among competing rights and a fearful refusal to acknowledge universal and inalienable human rights.
I remain genuinely unsure what Kellie Leitch’s goal is. I went into this blog convinced she was another hypocrite who was only using queer Canadians when it suited her racist agenda. And yet, she voted yea to Bill C-279 (to treat gender identity as a protected class) despite almost every single one of her cabinet colleagues opposing it. She does appear to have principled and reasonably long-standing support for queer rights. She voted the party line on whipped bills (as does basically every MP in Canada), but when she’s allowed to vote her conscience, we see that it is rather different from that of many other Conservatives. She may be a political opportunist who can sense which way the wind blows. Or she may be trying to change the Conservatives from within.
I spent weeks wondering: is Dr. Leitch just a political opportunist, or is she driven by real (albeit misguided) principles? This week, she provided me with an answer:
“Tonight, our American cousins threw out the elites and elected Donald Trump as their next president. It’s an exciting message and one that we need delivered in Canada as well. It’s the message I’m bringing with my campaign to be the next Prime Minister of Canada.”
So political opportunist it is then.
Let’s be clear, Kellie Leitch isn’t Donald Trump. She’s calculating and clever. She isn’t going to get embroiled in pointless feuds. People are genuinely worried that Trump might declare a literal shooting war if a foreign leader tweets the wrong thing at him. No one is seriously concerned Kellie Leitch would do the same.
Even if (hypothetical) Prime Minister Kellie Leitch governs soundly and sensibly, even if she never enacts a tip-line for “barbaric cultural practices” and never sets up screening for “anti-Canadian values”, her candidacy or victory represents a real risk to black, indigenous, southeast Asian, and Muslim Canadians. As much as we’d love to believe otherwise, there are dangerous racists in Canada. A win for Kellie Leitch on a platform of “Canadian Values” and coded anti-Muslim rhetoric would give this small minority social license to harass, attack, and intimidate. A win by Michael Chong or Erin O’Toole would not.
Unfortunately, there is a real risk that Kellie Leitch could become the next leader of the Conservative party (and from there, possibly PM). It’s a crowded field and she’s learned the correct lessons from Donald Trump. Milk every controversy for as much media attention as possible and strongly appeal to the parts of your base most concerned with the changing appearance of Canada.
It’s rich that Kellie Leitch, who received her bachelor’s at Queen’s, holds an MD and an MBA, and has worked as a surgeon, professor, MP, and cabinet minister, can campaign on a message that the “elites” need to go. A politics without elites would by necessity be a politics without Dr. Leitch.
But this only scratches the surface of my disagreements with Dr. Leitch; I oppose every policy in her platform. I think her plan to put an absolute cap on government spending is silly. The government needs the flexibility to meet any obstacles it faces. Prime ministers from Pierre Elliott Trudeau to Brian Mulroney to Stephen Harper all understood this. I oppose her stance on marijuana – I think prohibition doesn’t work and most Canadians agree with me. Like many others, I think that her proposed screening of immigrants for anti-Canadian values is easily subverted and a solution in search of a problem.
I encourage everyone else who opposes Dr. Leitch to focus on her policies and why they’re bad for Canada. Insofar as our values differ from those of Dr. Leitch, we should take the time to explain why. We should seek dialogue with her supporters and seek to allay their fears. We should be proud defenders of globalization and immigration and all the benefits they have brought. We should not retreat into our filter bubbles and dismiss the rest of Canada as the wrong kind of people. That kind of retrenchment doesn’t have the best track record right now.
I think there are much better candidates in the conservative leadership race. Michael Chong, for example, has an excellent record on social issues and supports carbon pricing. He and I have policy disagreements, but a Conservative Party of Canada led by Michael Chong would be a contender for my vote in the next election. Given that the NDP has abandoned me, I would dearly like to be able to make a choice between two parties with sound policy proposals and positive plans for Canada going forward. I could not do that with Kellie Leitch at the head of the Conservative Party.
4 Things You Can Do To Help
Kellie Leitch is relying on free media attention to differentiate her from a crowded field. Under no circumstances should we advocate for deliberate suppression of stories about Dr. Leitch. And yet, outrage generates clicks. As long as Kellie Leitch can profit from her simple algorithm – say something objectionable, but not so objectionable that the party kicks you out, wait for the media to write a hundred stories about it, profit from the increased name recognition – she’ll continue to use it.
We can attempt to complicate her algorithm by removing the financial incentive for the media to focus most of its CPC leadership race coverage on her. There are a few ways you can do this.
Promise yourself that you won’t share any news stories about her electronically. By all means, tell your friends. But don’t share it on your Facebook wall where it will generate clicks.
If you must visit a news article about Kellie Leitch (say to research for a blog post about her), visit with an ad-blocker. You’ll notice that I’ve used [N] style references throughout this post. Those are all links to recent stories about Kellie Leitch. I’d ask that anyone who cares about her not winning the leadership race not visit them without an ad-blocker.
Share this information with your friends. If they post a story about Kellie Leitch, gently tell them why this is a bad idea. Don’t get angry. Your friend is doing nothing morally wrong. But they are contributing to the outrage cycle and if you can stop it, that’s great. If they don’t understand the threat Kellie Leitch poses, show them some of the hate crimes that have been committed in America since Trump was elected and explain to them that Kellie Leitch winning an election could have the same consequences. You can link them to this post if you’d like (I don’t have ads on my website and make no money if you do). Or you can show them how Trump’s win has already emboldened the alt-right in Canada.
If you’re a member of the Conservative Party of Canada or plan to become a member before the leadership race membership cut-off of March 28, 2017, you can act more directly to ensure that Kellie Leitch does not win the vote. It doesn’t matter how you rank the candidates, as long as Kellie Leitch is ranked last (although if you care at all about climate change, you may want Brad Trost ranked low on your ballot as well).
Kellie Leitch has gained name recognition and a measure of popularity with her stances. But she’s also made a lot of enemies. She leads the field in both favourability and unfavourability ratings. The next leader of the Conservative Party will be picked using instant run-off with ranked ballots. If Kellie Leitch is at the bottom of most people’s ballots, she can’t win.
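The mechanics that make last-place rankings so decisive can be sketched in a few lines of Python. The ballots and vote shares below are entirely made up for illustration: one candidate leads on first choices but sits at the bottom of every other ballot, so every transfer flows to her opponents.

```python
from collections import Counter

def instant_runoff(ballots):
    """Repeatedly eliminate the candidate with the fewest first-choice
    votes until one candidate holds a majority of the ballots."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked remaining candidate.
        tallies = Counter(
            next(c for c in ballot if c in remaining) for ballot in ballots
        )
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        # Eliminate the last-place candidate; their ballots transfer.
        remaining.remove(min(tallies, key=tallies.get))

# Hypothetical field: "Leitch" leads on first choices (40 of 100 ballots)
# but is ranked last by everyone else, so she cannot pick up transfers.
ballots = (
    [["Leitch", "Chong", "O'Toole"]] * 40
    + [["Chong", "O'Toole", "Leitch"]] * 35
    + [["O'Toole", "Chong", "Leitch"]] * 25
)
print(instant_runoff(ballots))  # O'Toole eliminated first; Chong wins 60-40
```

The leader in round one never gets a majority: once the third candidate is eliminated, all of those ballots transfer away from the candidate everyone ranked last.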
Let the next Canadian election be about which policies will bring us peace, order, and good government. Let’s not bring race and belonging into it.
Kellie Leitch related links (don’t visit without an ad-blocker):
When I first heard about deontology, I was intrigued. Here was an ethical system that could break you, if you weren’t careful. I was young and hadn’t really systematized my morality yet, but I dearly wanted to. I’d just learned about the stages of moral development and I felt a keen need to be at Kohlberg VI.
Time passed and I forgot that systematizing was a goal of mine. While I aimed for consistency across my moral principles, I did this largely blindly, lacking a single meta-principle to guide me.
Arendt had shown the weaknesses in deontology, shown how someone who didn’t think, who just followed the right as their society defined it could, with no irony, claim to be a Kantian while committing the most unimaginable crimes. At the same time, Arendt’s response to the judges, her justification for Eichmann’s death felt wrong to me. I never disagreed with Arendt more than when she said: “certain procedures… important in [their] own right can never be permitted to overrule justice, the law’s chief concern.”
I filled up the whole last page of Eichmann in Jerusalem with a cramped response to Arendt. I felt like her conception of justice was little better than vengeance and that justice couldn’t exist without the procedures she’d just disparaged.
Eichmann in Jerusalem left me with nagging questions and an empty space I yearned to fill. It would be a while before I had my answers.
First and Second Order Utilitarianism
The summer after reading Eichmann in Jerusalem, I flirted with utilitarianism. I wasn’t entirely satisfied with it. It’s not that I mind debating torture vs. dust specks or trying to select a value function. My problems were partially caused by the fact that I’m a romantic and utilitarianism is cold and utilitarian. But it’s also that I continue to worry about systems and precedents. For me, too many discussions about utilitarianism stick to the object level. I wanted to talk about the ripple effects of every decision and found often there was no room to.
One day, I found myself looking for high value books to read. One option was Utilitarianism: For and Against, a book I didn’t read until long after this post was published. Luckily, even before I read it, it led me to the concept of precedent utilitarianism. Finally, I had a name to put to the nagging voice inside of my head. I read a quick summary of precedent utilitarianism and knew that I had the ethical system I was looking for.
Precedent Utilitarianism is a form of second-order utilitarianism. It doesn’t just look at first-order consequences of an action. It looks at the precedents an action sets.
I wrote an essay about justice that focused on precedents. In it, I make the claim that “precedent is what changes actions from unprecedented to normal”. This may sound facile or even tautological. But there is a deeper point I’m driving at. For every action now considered normal, there was someone who was the first to do it. In Eichmann in Jerusalem, Hannah Arendt mentions something similar in passing. She believes that the recurrence of any crime is more likely than its invention.
Many actions are done once, then never again. Or only a few times, by a few isolated groups. Others get repeated and copied until they become the new normal. The Manson family murders did not lead to a sudden outbreak of murderous cults. But the actions of Marius and Sulla led almost directly to the Triumvirate and the unravelling of Roman democracy.
What makes Manson’s actions different than Sulla’s? It isn’t just that murder is more horrific than dictatorship. A cursory glance at the history of the last half-century of ethnic cleansings lends some credence to Arendt’s belief that after the Nazis systematized genocide many others would follow in their footsteps.
Why some crimes and not others? I think the answer to this question lies in part with the influence or charisma of the person setting the precedent. Hitler committed his grievous crimes at the helm of a country. Sulla was surrounded by patricians who wished that it had been them who seized Rome. Charles Manson has been influential in certain underground scenes. But he never led a country or commanded more than thirty people.
So what we currently know about precedents is: they can be set by any action and are more likely to be set by people who command a significant following. Oh and one final thing. In common law jurisdictions, every single judicial ruling sets a legal precedent, which is enforceable on all lower courts within the same jurisdiction. This is the most literal manifestation of a precedent, an action that is inscribed in law as allowed or disallowed, all because someone asked a judge to rule on it.
With the information we just gathered about precedents, we can create a second-order utilitarianism that incorporates them.
In theory, it’s pretty easy. You take whatever value function you prefer to use. You take the proposed action. You feed it into the value function to determine the utility of the action. This is just like first-order utilitarianism.
But in precedent utilitarianism, you then think about how likely the action is to create a precedent and how many people the precedent could affect. If you’re not famous and you don’t expect your action to be well publicized, then you only need to worry about precedents set among your immediate acquaintances. If you’re the Prime Minister or President of an important country, your audience will be considerably larger. And if you intend to defend your actions in a court of law in a common law jurisdiction, you must worry about the specific legal precedents you’ll potentially set. Legal precedents allow actions undertaken by a single person to be at least as momentous as those undertaken by a head of state. Just look at Oakes or Roe.
Once you know how likely the precedent is and how large its reach, you need to think about who will use the precedent and how. If you think it is ethical for your preferred politician to cover up wrong-doing because you think there is a lot of utility in her being elected, remind yourself that if she gets away with it (for a while), then she’s set a precedent that may also be used by the politicians you despise.
Given all the people affected by the precedent, their chances of using it for various things, and the potential utility or disutility of these things, you can calculate an updated net utility for the action.
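As a toy sketch of that calculation (every number below is invented; the whole difficulty of the approach is that real value functions resist this kind of estimation), the adjustment might look like:

```python
def precedent_adjusted_utility(direct_utility, precedents):
    """Toy sketch of precedent utilitarianism's second-order step:
    net utility = the first-order utility of the action, plus the
    expected utility of each precedent it might set.

    `precedents` is a list of (probability_precedent_is_used,
    expected_utility_if_used) pairs - all hypothetical estimates.
    """
    return direct_utility + sum(p * u for p, u in precedents)

# The cover-up example: a modest first-order gain from your preferred
# politician winning, swamped by the chance that politicians you
# despise reuse the "cover-ups are acceptable" precedent.
net = precedent_adjusted_utility(
    direct_utility=10,            # made-up gain from her election
    precedents=[(0.6, -30)],      # 60% chance rivals copy it, large cost
)
print(net)  # -8.0: negative despite the positive first-order analysis
```

The structure is the point, not the numbers: an action with positive first-order utility can still come out negative once you price in who else gets to use the precedent.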
You may have noticed the problem. Utility function calculations beyond simple QALY evaluations are really hard. Adding in a bunch of hypothetical actions from a bunch of hypothetical people just makes it harder. And if the calculations are already impossible, it doesn’t do you much good to have an even harder set of calculations that you’re supposed to somehow pull off.
Precedent utilitarians (or utilitarians in general) would point out that the correct solution to calculations that literally take forever isn’t to spend forever doing them. There’s an opportunity cost to spending all your time thinking and none of it doing and this cost is considerable. The common solution is to do the best action you can see after a reasonable period of reflection and estimation of utility.
What represents a reasonable amount of time to spend on reflection and a reasonable resolution for the estimation depends on how important the decision is. Decisions about which restaurant to go to should be very quick and simple and largely guided by factors other than morality (for example, your local public health agency’s evaluations, or more reasonably, what kind of food you want to eat). Decisions like “where should I donate ten percent of my income” require a fair amount of reflection. But decisions like: “should we go to war with that dictator” require far more. The more potential there is to influence lives, the more it makes sense to sink resources into determining the optimal actions.
When it doesn’t make sense to spend dozens of hours on contemplation, there are a few simple heuristics that the precedent utilitarian can use.
First: is the action likely to lead to an improvement in utility from a first-order utilitarian perspective? If the answer here is no and you don’t have a plausible mechanism for the action setting a precedent that will redeem the negative utility incurred in the first-order analysis, then you should trust the first-order analysis and avoid the action.
Second: How potentially harmful is the action if generalized? If your worst enemy did the same thing, would it reduce the utility of the world? If you’re attempting to ban a certain sort of speech, for example, the general class of thing you’re doing is “banning speech”. I think we can all agree that the people we disagree with could ban speech in such a way that it would reduce the utility of the world. But if we’re making it illegal to assault someone, there are few ways that our foes can take “don’t hurt people who don’t want to be hurt” and make it reduce the utility of the world.
In general, the goal here is to consider ways that others acting along the same general principle could help or harm the world.
Third: Consider how strong a precedent you’re setting and how likely it is that others can also advocate along the same general principle now that you’ve made it easier. Remember also that special pleading (“no, you can only act along this principle in the ways we say you can”) and hypocrisy (getting angry at others who are doing the same thing you did, just from a different set of axioms and beliefs about the world) are very off-putting and can turn people against you.
The second heuristic deals with how your precedent can be used against you; the third, with how likely this is to happen.
Fourth: Add this all up. If the precedent you set is safe (very difficult to use to decrease the utility of the world), your power is secure (the precedent is unlikely to be used in ways that you think will decrease utility), you’re unimportant (the precedent isn’t going to be used by anyone else), and your public support is non-fragile (you can survive hypocrisy or special pleading), then you can decide on first-order grounds. If a few of these aren’t true but you stand to gain a lot of utility, it remains safe to decide on first-order grounds. But if none of the conditions are met, you may well stand to lose net utility from second-order effects. In this case, it probably makes sense to put your plan on hold while you spend more time calculating possible outcomes.
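The fourth heuristic can be written down as a small decision function. This is only a restatement of the prose above (the flag names are mine, and real decisions are graded, not boolean):

```python
def decide_on_first_order_grounds(precedent_safe, power_secure,
                                  unimportant, support_nonfragile,
                                  utility_gain_is_large=False):
    """Toy encoding of the fourth heuristic, one flag per condition:
    all four conditions met -> decide on first-order grounds;
    some met (or the stakes are high) -> first-order is probably safe;
    none met and modest stakes -> stop and model second-order effects."""
    conditions = [precedent_safe, power_secure,
                  unimportant, support_nonfragile]
    if all(conditions):
        return "decide on first-order grounds"
    if any(conditions) or utility_gain_is_large:
        return "first-order grounds are probably still safe"
    return "hold the plan; model second-order effects"
```

For instance, an anonymous person taking an unremarkable action passes every check and can safely ignore precedents; a head of state setting a dangerous, easily-copied precedent fails them all and should stop to calculate.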
Other Ethical Systems
It’s a safe bet that most people aren’t utilitarians. It’s also true that you will eventually have to interact with people who are not utilitarians and who hold different axioms than you do. In both of these cases (but especially the second), it can be hard to productively express and argue about views. Some people avoid this problem entirely by embracing the comforting lie that those who disagree with them do so out of lack of education or stupidity. Alas, this uncharitable explanation is far too often just not the case. Sometimes you’re stuck arguing with someone whose beliefs are just as internally consistent, logical, and evidence-based as yours.
Precedent utilitarianism is very well suited to building up systems like liberal democracy, where differing groups can draft a mutually agreeable framework that allows them to live peacefully. Precedent utilitarians naturally look for principles that everyone can agree on and tend to support strong constitutional protections around many classes of actions that don’t affect other people.
On a smaller scale, precedent utilitarianism is useful when you need to convince someone with a different set of axioms or a differing ethical system that you are a reasonable person who is worth listening to. A natural effect of precedent utilitarianism is avoiding (in most cases) special pleading (whether out of desire to not alienate support, or because you’re worried about precedents your actions can set).
Avoiding special pleading makes you look principled. Someone can respect you arguing against one of their proposed plans of action (and give your arguments much more credence) if they’ve also seen you argue against other actions (especially ones they would expect you to support given your axioms) using the same general principle.
For example, if you’re a Catholic and are arguing against having Buddhist prayers at a town hall meeting, you’ll have much more credibility if you have previously opposed having Catholic prayers read at town hall meetings (perhaps because you’re worried that it sets a precedent that could lead to other prayers being read, which might lead to less utility in terms of saved Catholic souls). If instead you’d previously argued in favour of Catholic prayers but are now arguing that the separation of church and state precludes prayers in meetings, then no one will take you seriously. Worse, they will probably have assorted ill feelings towards you, making you less effective at convincing them even in unrelated matters.
I want to give examples of the heuristics I discussed earlier in action. To make this essay interesting to people with a variety of axioms, I’ve picked two examples of legislative interventions proposed by different groups and argued against each intervention using the axioms (as best I understand them) of the people who I’ve observed suggesting it. First I’ll use activist left axioms. Then I’ll try and pass an ideological Turing test and pull off small government religious conservative axioms.
There is a growing clamour from leftists to shut down police unions. The logic goes that police unions advocate for the good of their members at the expense of society at large and most particularly, those already disadvantaged by race, sexual orientation, gender expression, poverty, mental illness, or a combination of these factors.
These activists generally believe that without the political clout and collective bargaining ability of police unions it would be easier to require officers to wear body cameras, easier to demilitarize the police, and easier to ban discriminatory practices like carding and stop and frisk. They also believe that without union representatives it would be much easier to suspend and fire officers suspected of misusing force.
Let’s assume (for the sake of argument) that activists are correct and dismantling police unions would reduce police violence. A reduction in police violence would lead to an increase in utility for almost any value function, as long as there weren’t direct effects that led to counterbalancing increases of violent crime. Let’s assume that even if there are some negative side effects, there is ultimately an increase in utility. This lets us move on to the second step.
(The proper utilitarian thing to do here would be look into studies and data analysis about what the likely crime effects of such a move would be. Because the focus of this essay is precedent utilitarianism, I’m not going to go into the nitty gritty here. I’m just going to do what the proponents do and assume everything will work out OK.)
The generalized action here is: “it is acceptable to weaken collective bargaining rights or forcibly de-unionize workers”. If you are a leftist, I want you to take a moment and imagine what sort of effects there would be if your worst enemy did this kind of thing.
If we could get rid of police unions without significantly risking other unions, then the analysis would probably come up positive (given our other assumptions). Unfortunately, it would probably take laws (and successive legal victories) to force police unions to disband or strip them of collective bargaining rights. There is no way to argue that laws and court cases don’t set precedents. Laws passed by previous governments give future government permission to legislate in the same space. And successful court cases (especially in common law jurisdictions) set the legal precedents that were discussed earlier. Courts cannot support disbanding a police union without setting the general precedent that unions may be forcibly disbanded.
In addition to creating one of the strongest possible precedents, abolishing police unions but demanding that no other unions be affected is a strong case of special pleading. In this specific case, there is even more potential for harm than in most, as a majority of Americans are confident in the police.
Adding all of this up, we have a potential increase in utility (assuming that there isn’t a protest or other work action from police that leads to rising crime rates), set against a precedent we acknowledge is both strong and dangerous.
From a precedent utilitarian point of view, it seems unlikely that abolishing police unions will actually lead to any increase in utility. Instead, precedent utilitarians might focus on the outcomes they wish to see (increased use of body cameras, better use of force policies, more restrictions on discriminatory policing, funds for hiring more police officers from diverse backgrounds and diverse communities) and try to legislate them individually.
Of these, body cameras and hiring police officers from more diverse backgrounds (which can be spun to constituents as simply as “hiring more police officers”) seem the most likely to be easy to pass with broad support and probably represent the easiest starting place for a quick utility gain.
As above, I’m going to skip questions about the correctness of these beliefs or whether state bills overriding local ordinances truly represent a gain in utility. Just as some people feel that forcing police to de-unionize will lead to a better world, some people feel that these bills will lead to a better world. Instead of disagreeing with these beliefs on the object level, I want to show that they are inconsistent with other conservative axioms and would fall under the class of actions that precedent utilitarianism suggests should be rejected even if they’re based on correct axioms.
The generalized action here is: “it is acceptable for distant legislators to force lower levels of government to legislate as they would.” If you’re a conservative, I want you to take a moment and imagine what sort of effects there would be if your worst enemy did this kind of thing.
The first target would almost certainly be rural communities in otherwise liberal states, which tend to have very different laws around gun ownership and property taxes than the larger metropolises that make up the majority of the voter base.
Beyond guns and taxes, there are dozens of regulations that central liberal governments would love to impose upon rural conservatives. Look at what’s going on in Alberta for just one example. And would any conservative trust a liberal state government to protect the coal or fracking jobs on which so many rural communities survive? Living in a city, it’s far too easy to forget that these things have to come out of the ground somewhere.
If local bills could be overruled without setting any precedents, maybe there’s a utility gain to be had. But this seems unlikely. It will almost certainly require a few court cases to sort out which level of government has which power, and once powers have been taken away from local governments and given to the centralized government, they are rarely given back. Politicians almost never let go of power that they’ve fought so hard to gain.
Putting this all together: we’re looking at an uncertain increase in utility from state laws overriding local non-discrimination ordinances, while also setting a strong precedent that states can override whatever local laws they don’t like; a precedent we should acknowledge as dangerous and negative.
From a precedent utilitarian point of view, it seems unlikely that overriding local non-discrimination ordinances will lead to any increase in utility. Instead, precedent utilitarians with these axioms should focus on increasing tax breaks for religious schools or other social institutions they believe will push society in the direction they think it should go.
Back to myself: one principle of small government conservatism that I find laudable is the belief that local governments are best placed to fix problems. All too often central planners come up with ridiculous, unworkable ideas out of ignorance of the conditions on the ground. In addition to my grave concerns about the content of “religious freedom ordinances” or “bathroom bills”, I’ve been shocked to see conservatives suddenly advocating for solutions at the state level and liberals claiming that local people know best. And I’m not the only one.
I chose one of my examples very deliberately: to emphasize one of the weaknesses of precedent utilitarianism. People who are already privileged (like me!) are going to find it easiest to demand that potential changes must be considered and interrogated for bad precedents and abandoned if there is a chance that they might lead to enough disutility in the future.
It’s easy for me to urge caution around police unions. The police aren’t busy killing people who look like me. It’s easy for me to say that unprincipled exceptions should always be avoided. Unprincipled exceptions aren’t already being made at my expense. It’s reasonable to ask: “if they’re making exceptions for us, how come we can’t make exceptions for them?”
Pointing out that we wouldn’t have these problems if everyone already followed precedent utilitarianism doesn’t count as an argument. So what if it’s true? It wouldn’t change anything. The world should be engaged with as it is, not how we wish it to be. And we have to reckon with the fact that sometimes partially adopting an idea is worse than adopting none of it (see for example most arguments that start: “well, in a perfect libertarian society…”).
But this weakness isn’t unique to precedent utilitarianism. It’s a weakness of utilitarianism or of consequentialism more generally. Most constructions of utilitarianism place no inherent value on fairness, only value on some of the effects of fairness. Instead of trafficking in an ethical coin that is intuitively understood, they deal in cold, hard utility and disutility. Life years saved or lost, pleasure and pain, preferred and dispreferred states, all aggregated over the population of the world. These are the tools utilitarians have.
Precedent utilitarianism demands a deeper examination of consequences than some other constructions of utilitarianism. But it can’t change the fact that consequences are all utilitarians care about.
I advocate for precedent utilitarianism because I think it doesn’t suffer from the partial-adoption problem that plagues libertarianism. I don’t think even stumbling, imperfect precedent utilitarianism will lead to a worse state than the current one. But I don’t have proof. I can claim some institutions (the courts, liberalism) as obvious manifestations of precedent utilitarianism.
But this leaves two avenues of disagreement. First, you can claim that these are the by-product of something else and only have a serendipitous resemblance to precedent utilitarianism. Or you can claim that these are in fact not good things. It all depends on your axioms.
And this is all circular. People like me in positions of privilege tend to have axioms that assume their experience. Meanwhile, systematically disadvantaged people tend to have axioms that assume their experience.
Here’s what I have left to try to convince you with, even if there’s a huge gap between our axioms. Scott and Ozy often talk about ethical systems that fail gracefully. Imagine that you thought something or someone was bad and did everything permitted by your ethical system to stop it or them. Now imagine that you were wrong. How badly have you fucked up?
Precedent utilitarianism fails gracefully. Does your ethical system?