Literature, Philosophy, Politics

Book Review: On Violence

Many, including me, have relied on Max Weber’s definition of a state as “the rule of men over men based on the means of legitimate, that is allegedly legitimate, violence”. I thought that violence was synonymous with power and that the best we could hope for was a legitimate exercise of violence, one that was proportionate and used only as a last resort.

It’s because of Hannah Arendt that I have a blog post about state monopolies on violence. Her book Eichmann in Jerusalem: A Report on the Banality of Evil was my re-introduction to moral philosophy. It, more than any other book, has informed this blog. To Arendt, thinking and judging are paramount. It is not so much, to her, that the unexamined life is not worth living. It is instead that the unexamined life exists in a state of mortal peril, separated only by circumstance from becoming one of the “good Germans” who did nothing as their neighbours were murdered.

This blog is my attempt to think and to judge. To take moral positions, so that I am in the habit of it.

It’s a vulnerable spot, to stake out a position. You must always live with the risk of being later proved wrong. Or, perhaps worse, having been proved wrong before you even set pen to paper (or pixels to screen).

In her essay On Violence, Hannah Arendt demolished the premises upon which I based my own essay on how states should use their monopoly on violence. It’s rare that I get to see my own work so completely rendered useless. I found the process both useful and humbling.

On Violence is divided into three sections. In the first, Arendt covers how violence was used and thought about in the decade preceding her essay (it was published in 1969). In the second, she lays out new definitions and models for strength, violence, power, and authority, and challenges the definitions used by the great thinkers of the past. In the final section, she re-examines the recent events of her time in light of her definitions and discusses the promise and danger of power and violence.

So, enter the end of the 1960s. The past decade has seen student sit-ins and protests at practically every university. It has seen the end of official segregation and the ongoing struggles of the civil rights movement. In Europe, a military coup toppled the French Fourth Republic and liberalization in Czechoslovakia led to an invasion by Soviet tanks. In Vietnam, America took up France’s failing war and found itself unable to defeat a small cadre of revolutionaries.

Against this backdrop, Arendt remarks on the most dangerous fact of all: that through our artifice, we have attained the means (i.e. nuclear weapons) to destroy ourselves. There is, Arendt remarks, an age-old conflict between means and ends, in that means always threaten to overshadow the ends they seek to bring about.

Given that there is always an element of chance when it comes to attaining our ends, nuclear weapons mark the development of a new era, where means dominate ends because all means are so terrifying and all ends so uncertain. If you asked a youth in the 1960s where they hoped to be in the future, they would preface their answer with “well, assuming I am still alive…”.

None of this was made more comforting by the many commonplace myths Arendt identified. Among the think tanks and the military industrial complex, she saw a tendency to transmute hypotheses into reality, to believe that possibilities identified using only reason (and no evidence) could become universal truths; the people in charge of the nuclear weapons did not believe their ends to be at all uncertain, despite all evidence to the contrary. Among the left, she noticed a glorification of violence that had no place in the texts of Marx (let alone in a movement supposedly built on freedom and compassion). The left, Arendt worried, was imbuing violence with all sorts of properties that it had never had, like ‘creativity’, or ‘the ability to heal’.

It is important to note that Arendt had no time for talk of violent revolutions. To her (as she claims, it was with Marx), “dreams never come true”; violence against an oppressor was just violence, not a transformative force capable of launching a new era. In this, she had the weight of recent bitter history on her side, as the communist revolutions were revealed to have brought about nothing but tyranny.

It is only after laying out this tortured landscape, full of pitfalls and dangers, that Arendt turned to the philosophy of violence, the main purpose of her essay.

The first part of this examination is an observation: philosophers and politicians, from the left to the right, have, for a long time, identified violence as a mere outgrowth or component of power. Arendt trots out a dizzying array of quotes, all as plausible as the Max Weber quote I opened with, but coming from the likes of C. Wright Mills, Sartre, Sorel, Jouvenel, Voltaire, von Clausewitz, Mao Zedong, John Stuart Mill, and Hobbes.

It is against all of these definitions, which equate power with violence (and especially coercive violence that propagates the will of whomever wields it), that Arendt stands. She instead seeks a positive power in the philosophy (seldom actually achieved) of the revolutions of the 1700s (and the earlier ideal of polis life, deeply flawed as it was in practice), which viewed government of “man over man” as no fit way to live. In this framework, she identifies power, as distinct from violence, with “the rules of the game”, the set of socially acceptable actions. If you step outside of these rules, power manifests as social consequences: entreaties to change, glares, angry words, and, in the extreme case, shunning.

This definition is not non-coercive. To social creatures like us, social punishments are real punishments. They may not be violence, but they can still act to change our will, or even to shape what we can will.

What prevents the “rules of the game” from becoming a tyranny by another name (albeit a tyranny with majority support) is some sort of democracy, some broad ability for people to gain power and push for change; the chance to have a hand in writing the rules we all must play by. To use the language of the great revolutions of the 1700s, this is “the consent of the governed”.

If you doubt the existence of power as Arendt defines it, I challenge you to go to some public place and violate its norms. Any sufficient violation of norms should see the public exercise their power on you and will probably force you to stop. It is intensely hard for us humans to go against the will of a group, especially if that group makes its displeasure known. And it rarely even needs to come to anything as overt as glares; power is invisible, until you sense its boundaries. It’s a rare person who can act, knowing that they will immediately face intense social censure for their actions. It’s recognizing this, when so few others have, that marks Arendt’s brilliance.

(Interestingly, if you were to complete this challenge, the norms that you violate would most likely be norms that you otherwise agree with. The rules of the game are supposed to exist to make us feel happy and satisfied, able to interact with each other without fear. Personhood is an interface that carries expectations in order to receive recognition.)

Power will always be less absolute than violence. You obey a criminal with a gun far more readily than you obey the law, because the criminal (or rather, the gun) has an immediacy that power does not possess. Therefore, a law without popular support can be enforced, but only at the barrel of the gun. The violence of the enforcement will overwhelm the power of the majority.

Note the use of majority here, because that word is important in Arendt’s conception; to her, power will always require a majority. From this and from the immediacy of violence, it follows that the only way a minority can enforce their will on a majority is via violence.

Once you conceive of power as “the simple rules of the game”, it is clear how much weaker the tyrant is than the body politic. Tyranny falls apart as soon as its few enforcers refuse to wield the weapons necessary for its survival, because there is no backup, nothing else, that can maintain it. Power can survive the complete annihilation of the government, because the government is its mere outgrowth, not its heart.

That said, if we are concerned with the ability of tyrants to rule through violence, we should be fearful of the continual improvements we are making to the implements of violence. It is not, as you might think, simply that the implements have become more destructive. There is as much space between the knight and the peasant with a pitchfork as there is between the man with a rifle and the stealth bomber, which is to say that the tyrant has always outclassed the revolutionary.

The true danger is rather how modern implements of violence allow the tyrant to shrink their inner circle and yet still maintain their monopoly on violence. Automation has made violence more efficient; we are not yet at the pathological case where one man with a button and an army of robots can hold a whole nation in fear, but there is a sense that we are fast approaching that terrifying state.

If tyranny shows how violence can unmake power, it is rebellions that show how power can overshadow violence. Rebellions are successful when the state has lost its grip on power, not when the rebels win on the battlefield. Armed rebellions are often made needless by the very fact of their existence, because rebels can only arm themselves when the gatekeepers of weapons decide they no longer wish to support the state. When the army refuses the demands of the strongman, the regime is already over. Armed rebellions succeed more because they erode the power of the state to the point where no one will back it than as a result of any decisive war of manoeuvres.

There is, of course, room for state violence outside of the extremes. Like in the case of tyranny, Arendt considers state violence to be the opposite of state power. It emerges only when power has failed (e.g. when power alone is not enough to keep a criminal “playing by the rules of the game”) or when power is breaking down (e.g. the police being called on to disperse protestors marching on the government). Because of this, Arendt believes that (democratic) states should not be defined by violence, which is only theirs in exigency.

The interaction between power and violence is a topic Arendt returns to over and over in this section. She also believes that violence flips power on its head (“the extreme form of power is All against One, the extreme form of violence is One against All”) – and steadily erodes it. I’m not entirely sure what the mechanism is supposed to be here, though; it could be that when everyone sees violence as the quickest way to their ends, the structures of power – the incentive to play by the rules of the game in order to change them – disappear. Or it could be that violence leads to violence in return, as everyone tries to protect themselves without being able to resort to power. Regardless, the outcome is the same.

Terror is the result of violence that destroys all power and then fails to abdicate. The Soviet government provides one of the clearest examples of terror. After shattering society, it seeded the remnants with informants. This meant that no one could seek out others to organize power, because there was always the fear that you might be conspiring with an informant. Russia, I think, is still grappling with this total destruction of all power. It is unclear to me if it is at all capable of returning to rule based on power, rather than (at least in some part) violence.

Nonviolent resistance movements, like Gandhi’s, work only when the government is scared of the corrosive effects of violence. Sit-ins and salt marches would have been met with massacres if used against the Soviets or Nazis, but against a British government that feared the results of becoming reliant on violence, they were successful.

(The British were right to fear violence. After all, it was soldiers tasked with “pacifying” the colonies that launched the coup d’état that ended the French Fourth Republic. Arendt strongly believed that relying on violence abroad would erode power at home, probably as a result of this experience, not to mention the violence used to quell anti-war demonstrators in America.)

These ideas provide the conceptual framework for Arendt to re-examine what was then recent history and justify why the theorist still has a right to talk about these things.

Arendt pauses to explain that she feels the need to justify her right to speak on these subjects because of what she claims is an ongoing tendency to explain human behaviour in terms of animal behaviour. Scientists, says Arendt, are increasingly expanding the scope of which behaviours should be considered “natural”, which is to say, the same as other animals would exhibit. Tied into this is a nascent and seldom-spoken belief that reason requires us to sever some of these vestiges of our animal nature.

Arendt disagrees strenuously with both the premise and the prescription. First, she believes that it is wrong to say that we are proved to be more and more like animals. Instead, it is more correct to say that animals are proved to be more and more like us. It is still us that has the singular faculty for reason, but it is certainly amusing and interesting to see all of the ways in which we are not as alone upon our pedestal as we once assumed.

(I think she makes this distinction because if we are like animals, then the study of human nature belongs to the biologist. But if animals are like us, then human nature is still the domain of the philosopher. It’s a subtle difference, but to her, a very important one.)

When it comes to removing human capacities – like the capacity for rage – Arendt sees nothing but dehumanization. Rage, she explains, can be rational. We rage when we suspect something could be done but is not. Rage is turned not against the volcano, but against the heavens for failing to prevent it, or the government for failing to protect us.

(I have been known to view critiques of science like this, from non-scientists, with suspicion. I think Arendt gets a pass because it is clear that her disagreements with science aren’t based on a fear of science disproving one of her specific political positions. Arendt is good at this in general; in an appendix, she cautions against a scientific meritocracy without using any of the tired and silly arguments people normally resort to.)

Rage and violence can also be a rational reaction to hypocrisy (if reason is a trap, why step into it?), although Arendt is quick to point out that this can backfire in two ways: when seeking out hypocrisy becomes an end in itself, as during The Terror, and when violence is used to provoke violence and thereby “reveal” a hypocrisy that never existed.

To be honest, I’m not sure many people are arguing that scientists should remove fundamental characteristics of people anymore. But it strikes me as the sort of thing people plausibly could have argued about in the past. And it seemed worth noting that Arendt sees a (limited) role for violence or anger in politics (although it is also worth noting that she views violence per se as outside of the political sphere, because it has nothing to do with power). And finally, I should mention that like practically everyone, she views violence in self-defence as justified.

But Arendt does find many justifications of violence to be foolish. She cautions against “natural” metaphors for power, those that associate it with outward growth and fecundity. Once you accept these, she believes, you also accept that violence has the power of renewal. Violence clears away the bounds on power and breathes new life into it by allowing it to expand again (imagine the analogy to forest fire, which clears away dead wood and lets a new forest grow). Given all of the follies and pains of empire, it is clear that even if this were true (and she is not convinced that it is), it is not recommended. Power, to Arendt, is perfectly content without expansion (and indeed, violent expansion, to her, always erodes power and replaces it with violence).

Nowhere does she find violence more dangerous than with respect to racism. On racist ideologies, she says:

Racism, as distinguished from race, is not a fact of life, but an ideology, and the deeds it leads to are not reflex actions, but deliberate acts based on pseudo-scientific theories. Violence in interracial struggle is always murderous, but it is not “irrational”; it is the logical and rational consequence of racism, by which I do not mean some rather vague prejudices on either side, but an explicit ideological system.

(To make it perfectly clear, she means “rational” here to read only as internal consistency, not external consistency.)

Luckily, power can overcome prejudices. The non-violent actions of the Civil Rights Movement are one of her best examples of the fruits of power, which broke apart segregation and ended (for a time) most restrictions at the ballot box.

That said, even here Arendt sees some role for limited political violence (I am using this phrase to mean what it normally does, but should acknowledge that Arendt would view this particular word combination as an oxymoron). She acknowledges that sometimes it is only through the violence of the radical that the moderate is given a hearing. Unfortunately, beyond cautions that violence is useful only for short-term objectives and that it is indiscriminate in its ends (that is to say, it is a poor tool for systemic change, because it is as likely to gain token concessions as real change), Arendt offers no real framework with which to evaluate when violence might be justified.

Such a framework would be especially useful when evaluating violence against bureaucracy, a major theme of the last section. Arendt identifies bureaucracy as the force with which the student movements are fighting and claims that it is tempting to resort to violence when dealing with it because bureaucracy can leave you with no one to argue with and no avenue through which to gather and use power.

It is because of this that Arendt stands against the “progressive” goal of centralization and instead prefers federalism. This is interesting to me, because Arendt is normally identified as a leftist and her writing quotes Marx heavily. It is a testament to the contempt in which she holds bureaucracy (no doubt heavily influenced by her work analyzing the bureaucracy of the Nazis) that she views striking against it as more important than the progressive priorities that can be attained via centralization and bureaucracy.

Or perhaps it is just that Arendt’s leftist views are actually quite heterodox; there’s certainly a way to read her that suggests hostility to the welfare state and a preference (perhaps for reasons grounded in a desire to promote virtue and human connection?) for communal charity on a more local scale as a replacement.

Arendt acknowledges that bureaucracy has made the “impossible possible” (e.g. the landings on the moon), but she believes that this has come at the cost of making daily tasks (like governing) impossible.

To this conundrum, she offers no answer. This, I think, is very characteristic of Arendt. It’s very easy to see what she opposes, but hard to find a model of government for which she advocates. I often find her criticism incredibly insightful, so this curious stopping short, her refusal to recommend any specific action, is often frustrating.

As it is, all I’m left with are fears. The trends she laid out – the dangers of our means overshadowing our ends and the ossification that comes with bureaucracy – have not gone away. If anything, they’ve intensified. And while this book gave me a new model of power and violence, I’m not quite sure what to do with it.

But then, Arendt would probably say there’s no point in trying to do something with it alone. Power can only come in groups. And we, her students, are probably supposed to talk with others, to share our concerns, and to think about what we can do together, to keep the world running a little longer.

Model, Philosophy, Quick Fix

Post-modernism and Political Diversity

I was reading a post-modernist critique of capitalist realism – the resignation to capitalism as the only practical way to organize a society, arising out of the failure of the Soviet Union – and I was struck by something interesting about post-modernism.

Insofar as post-modernism stands for anything, it is a critique of ideology. Post-modernism holds that there is no privileged lens with which to view the world; that even empiricism is suspect, because it too has a tendency to reproduce and reify the power structures in which it exists.

A startling thing then, is the sterility of the post-modernist political landscape. It is difficult to imagine a post-modernist who did not vote for Bernie Sanders or Jill Stein. Post-modernism is solely a creature of the left and specifically that part of the left that rejects the centrist compromise beloved of the incrementalist or market left.

There is a fundamental conflict between post-modernism’s self-proclaimed positioning as an ideology without an ideology – the only ideology conscious of its own construction – and its lack of political diversity.

Most other ideologies are tolerant of political divergence. Empiricists are found in practically every political party (with the exception, normally, of those controlled by populists), because empiricism comes with few built-in moral commitments and politics is as much about what should be as what is. Devout Catholics also find themselves split among political parties as they balance the social justice and social order messages of their religion. You will even, I would bet, find more evangelicals in the Democratic party than you will find post-modernists in the Republican party (although perhaps this is just an artifact of their relative population sizes).

Even neoliberals and economists, the favourite target of post-modernists, find their beliefs cash out to a variety of political positions, from anarcho-capitalism or left-libertarianism to main-street republicanism.

It is hard to square the narrowness of post-modernism’s political commitments with its anti-ideological intellectual commitments. Post-modernism positions itself in communion with the Real, that which “any [constructed, as through empiricism] ‘reality’ must suppress”. Yet the political commitments it makes require us to believe that the Real is in harmony with very few political positions.

If this were the actual position of post-modernism, then it would be vulnerable to a post-modernist critique. Why should a narrow group of relatively privileged academics in relatively privileged societies have a monopoly on the correct means of political organization? Certainly, if economics professors banded together to claim they had discovered the only means of political organization and the only allowable set of political beliefs, post-modernists would be ready with that spiel. Why then, should they be exempt?

If post-modernism instead does not believe it has found a deeper Real, then it must grapple with its narrow political attractions. Why should we view it as anything but a justification for a certain set of policy proposals, popular among its members but not necessarily elsewhere?

I believe there is value in understanding that knowledge is socially constructed, but I think post-modernism, by denying any underlying physical reality (in favour of a metaphysical Real), removes itself from any sort of feedback loop that could check its own impulses (contrast: empiricism). And so, things that are merely fashionable among its adherents become de facto part of its ideology. This is troubling, because the very virtue of post-modernism is supposed to be its ability to introspect and examine the construction of ideology.

This paucity of political diversity makes me inherently skeptical of any post-modernist identified Real. Absent significant political diversity within the ideological movement, it’s impossible to separate an intellectually constructed Real from a set of political beliefs popular among liberal college professors.

And “liberal college professors like it” just isn’t a real political argument.

Ethics, Model, Philosophy

Signing Up For Different Moralities

When it comes to day to day living, many people are in agreement on what is right and what is wrong. Giving change to people who ask for it, shoveling your elderly neighbour’s driveway, and turning off the lights when you’re not in the room: good. Killing, robbing, and drug trafficking: bad. Helping the police to convict mobsters who kill, steal, and traffic drugs: good.

While many moral debates can get complicated, this one rarely does. Even when helping the police involves turning on your compatriots – “snitching” – many people (although notably not the President of the United States of America) think the practice is a net good. There’s a recent case in Australia where opinion has been rather more split. Why? Well, the informant was a lawyer – specifically, a lawyer who had worked with the accused parties. Here’s a sampling of commenters on both sides:

In this case I feel it is for the greater good that human garbage like Mokbel are convicted even if the system has to be bent to do so. [1]
The job requires strict adherence to the ethical rules. If you let your dog run the house, the house gets torn apart.
The brave lady in question went above and beyond to keep Victorians safer. If these thugs are released or sentences reduced there will be uproar.
The right to an open and fair trial is a hallmark of a democratic country even if sometimes a defendant who is in fact guilty gets acquitted.

While I’m normally happy to see violent mobsters go to jail, here I must disagree with everyone who offered support for the lawyer. I think it was wrong of her to inform on her clients and correct for the high court to rebuke the police in the strongest possible terms. I certainly don’t want any of those mobsters back on the street and I hope there’s enough other evidence that none of them have to be released.

But even if some of them do end up winning their appeals, I believe we are better off in a society where lawyers cannot inform on their clients. This, I think, is one of the ethical cases where precedent utilitarianism is particularly useful in analysis and one that demonstrates its strengths as a moral philosophy.

(To briefly recap: precedent utilitarianism is the strain of utilitarian thought that emphasizes the moral weight of precedents. Precedent utilitarians don’t just consider the first-order effects of their actions on global wellbeing. They also consider what precedents their actions create and how those precedents can later be used by others, for good or ill.)

The common law legal system is premised on the belief that the burden of proof of crime rests upon the state. If the state wishes to take away someone’s liberty, it must prove to a jury that the person committed the crime. The accused is supposed to be vigorously defended by an advocate – a lawyer or barrister – who has a legal and professional duty to defend their client to the best of their abilities.

We place the burden of proof on the government because we acknowledge that the government can be flawed. To give in to its every demand leads to tyranny. Only by forcing it to justify all of its actions can we ensure freedom for anyone.

(This sounds very pretty when laid out like this. In practice, we are rather less good at holding the government to account than many, including myself, would like. Especially when the defendant isn’t white. I believe part of why society fails to live up to its duty to hold the government to account is sympathies that commonly lie with police and against defendants, the very sympathies I’m arguing against holding too strongly.)

But it’s not just upon the government that we place a burden to avoid pre-judging. We require advocates to defend their clients to the best of their abilities because we are skeptical of them as well. If we let attorneys decide who deserves defending, then we have just shifted the tyranny. Attorneys can make snap judgements that aren’t borne out by the facts. They can be racist. They can be sexist. They can make mistakes. It’s only by forcing them to defend everyone, regardless of perceived innocence or guilt, that we can truly make the state do its duty.

This doesn’t mean that lawyers always have to go to trial and defend their clients in front of a judge and a jury. It could be that the best thing for a client is a guilty plea (ideally when they are actually guilty, although that’s also not how things currently work, especially when the accused isn’t white). If a lawyer truly believes in a legal strategy (like a guilty plea) and the client refuses to listen, the attorney can always walk away and leave the trial defense to another lawyer. The important thing is that someone must defend the accused and that that someone will be ethically bound to give it their best damn shot.

Many people don’t like this. It is obviously best if every guilty person is punished in accordance with their crime. Some people trust the government to the point where they view every accused as essentially guilty. To them, lawyers are scum who defend criminals and prevent them from being justly punished.

I view things differently. I view lawyers as people who have signed up for an alternative morality. While conventional morality holds that we should punish criminals, lawyers have signed up to defend all of their clients, even criminals, and to do their best to prevent that punishment. This is very different from the rest of us!

But it’s complementary to my (our?) morality. It is not only best if we appropriately punish those who break the law. I believe it is also best if we do it without punishing anyone who is innocent.

We cannot ask lawyers to talk to their clients, figure out if they’re innocent or guilty, and then inform the judge or drop all of the truly guilty as clients. This will only work for a short while. Then everyone will figure out that you have to lie to your attorney (or tell the truth, if you’re innocent) if you want to avoid jail. We’re then stuck trusting the judgement of attorneys as to who is lying and who is telling the truth – judgement that could be tainted by any number of mistakes or prejudices.

In the Australian case, the attorney made a decision she wasn’t qualified to make. She, not a jury, decided her client was guilty. She doesn’t appear to have been wrong (although really, how can we tell, given that a lot of the information used in the convictions came from her and her erstwhile clients weren’t able to cross-examine her testimony), but if we don’t want a system where a random lawyer gets to decide who is guilty or not, the important thing isn’t that her testimony is true. The important thing is that she arrogated power that wasn’t hers and thereby undermined the justice system. If we let things like this stand, we enable tyranny.

The next lawyer might not be telling the truth. He may just be biased against black clients and want to feel like a hero. Or she might be locked in a payment dispute and angry with her client. We don’t know. And that should scare us away from allowing this precedent to stand. A harsh rebuke here means that the police will be unable to use any future testimony from lawyers and protects everyone in Australia from arbitrary imprisonment based on the decisions of their lawyer.

Focusing on the precedents that actions set is important. If you don’t and instead focus solely on each issue in isolation, you can miss the slow erosion of the rights and freedoms that we all rely on (or desire). Its suitability for this sort of analysis is what makes precedent utilitarianism so appealing to me. It urges us to dig deeper and try to understand why society is set up the way it is.

I think alternative moralities – actively different moral systems that people sign up for as part of their professions – are an important model for precedent utilitarians to hold. Alternative moralities encode good precedents, even if they stand in opposition to commonly held values.

We don’t just see this among lawyers. CEOs sign up for the alternative morality of fiduciary duty, which requires them to put the interests of their investors above everything but the law. Complaints about the downsides of this ignore the fact that we need companies to grow and profit if we ever want to retire [2]. Engineers sign up for an alternative, stricter morality, which holds them personally and professionally responsible for the failures of any device or structure they sign off on.

Having alternative moralities around makes public morality more complicated. It becomes harder to agree on what is right or wrong; it might be right for a lawyer to help a criminal in a way that it would be wrong for anyone else, or wrong for an engineer to make a mistake in a way that would carry no moral blame for anyone outside of the profession. These alternative moralities require us to do a deeper analysis before judging and reward us with a stronger, more resilient society when we do.

Footnotes

[1] Even though I disagree strenuously with this poster, I have a bit of fondness for their comment. My very first serious essay – and my interest in moral philosophy – was inspired by a similar comment. ^

[2] This isn’t just a capitalism thing. Retirement really just looks like delaying some consumption now in order to be able to consume more later. Consumption, the time value of goods, services, and money, and growth follow the same math whether you have central planning or free markets. Communists have to figure out how to do retirement as well, and they’re faced with the prospect of either providing less for retired people or using tactics that would make American CEOs blush in order to drive the sort of growth necessary to support an aging retired population. ^

Model, Philosophy

Against Novelty Culture

So, there’s this thing that happens in certain intellectual communities, like (to give a totally random example) social psychology. This thing is that novel takes are rewarded. New insights are rewarded. Figuring out things that no one has before is rewarded. The high-status people in such a community are the ones who come up with and disseminate many new insights.

On the face of it, this is good! New insights are how we get penicillin and flight and Pad Thai burritos. But there’s one itty bitty little problem with building a culture around it.

Good (and correct!) new ideas are a finite resource.

This isn’t news. Back in 2005, John Ioannidis laid out the case for “most published research findings” being false. It turns out that when you have only a small chance of coming up with a correct idea, even the statistical tests we use to screen out false positives can break down.

A quick example. There are approximately 25,000 genes in the human genome. Imagine you are searching for genes that increase the risk of schizophrenia (chosen for this example because it is a complex condition believed to be linked to many genes). If there are 100 genes involved in schizophrenia, the odds of any given gene chosen at random being involved are 1 in 250. You, the investigating scientist, decide that you want about an 80% chance of finding some of the genes that are linked (this is called study power, and 80% is a common value). You run a bunch of tests, analyze a bunch of DNA, and think you have a candidate. This gene has been “proven” to be associated with schizophrenia at the p=0.05 significance level.

(A p-value is the probability of observing an event at least as extreme as the observed one, if the null hypothesis is true. This means that if the gene isn’t associated with schizophrenia, there is only a 1 in 20 chance – 5% – that we’d see a result as extreme or more extreme than the one we observed.)

At the start, we had a 1 in 250 chance of any given gene being involved. Now that we have a candidate, we think there’s a 19 in 20 chance that it’s actually partially responsible for schizophrenia (technically, if we looked at multiple candidates, we should do something slightly different here, but many scientists still don’t, which keeps this a valid example). Which probability do we trust?

There’s actually an equation to figure it out. It’s called Bayes’ Rule, and statisticians and scientists use it to update probabilities in response to new information. It goes like this:

P(A|B) = P(B|A) × P(A) / P(B)

(You can sing this to the tune of Hallelujah; take P of A when given B / times P of A a priori / divide the whole thing by B’s expectation / new evidence you may soon find / but you will not be in a bind / for you can add it to your calculation.)

In plain language, it means that probability of something being true after an observation (P(A|B)) is equal to the probability of it being true absent any observations (P(A), 1 in 250 here), times the probability of the observation happening if it is true (P(B|A), 0.8 here), divided by the baseline probability of the observation (P(B), 1 in 20 here).

With these numbers from our example, we can see that the probability of a gene actually being associated with schizophrenia when it has been found significant at the 0.05 level is… 6.4%.

I took this long detour to illustrate a very important point: one of the strongest determinants of how likely something is to actually be true is the base chance it has of being true. If we expected 1000 genes to be associated with schizophrenia, then the base chance would be 1 in 25, and the probability our gene actually plays a role would jump up to 64%.
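
Here’s a minimal sketch of that arithmetic in Python, using the same simplification as the example above (taking the baseline probability of a positive result, P(B), to be the 5% false-positive rate):

```python
def posterior(prior, power=0.8, p_positive=0.05):
    """Bayes' Rule: P(A|B) = P(B|A) * P(A) / P(B).

    Simplification from the example above: P(B), the baseline
    chance of a positive result, is taken to be the 5%
    false-positive rate.
    """
    return power * prior / p_positive

# 100 of 25,000 genes truly involved: prior is 1 in 250
print(posterior(1 / 250))  # 0.064 -> 6.4%

# 1,000 of 25,000 genes truly involved: prior is 1 in 25
print(posterior(1 / 25))   # 0.64 -> 64%
```

(A fussier version would compute P(B) as power × prior + 0.05 × (1 − prior); with the 1-in-250 prior, that gives about 6%, close to the 6.4% above.)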

To have ten times the chance of getting a study right, you can be 10 times more selective (which probably requires much more than ten times the effort)… or you can investigate something ten times as likely to actually occur. Base rates can be more powerful than statistics, more powerful than arguments, and more powerful than common sense.

This suggests that any community that bases status around producing novel insights will mostly become a community based around producing novel-seeming (but false!) insights once it exhausts all of the available true (and easily attainable) insights it could discover. There isn’t a harsh dividing line, just a gradual trend towards plausible nonsense as the underlying vein of truth is mined out, but the studies and blog posts continue.

Except the reality is probably even worse, because any competition for status in such a community (tenure, page views) will become an iterative process that rewards those best able to come up with plausible sounding wrappers on unfortunately false information.

When this happens, we have people publishing studies with terrible analyses but highly shareable titles (anyone remember the himmicanes paper?), with the people at the top calling anyone who questions their shoddy research “methodological terrorists”.

I know I have at least one friend who is rolling their eyes right now, because I always make fun of the reproducibility crisis in psychology.

But I’m just using that because it’s a convenient example. What I’m really worried about is the Effective Altruism community.

(Effective Altruism is a movement that attempts to maximize the good that charitable donations can do by encouraging donation to the charities that have the highest positive impact per dollar spent. One list of highly effective charities can be found on GiveWell; GiveWell has demonstrated a noted trend away from novelty, such that I believe this post does not apply to them.)

We are a group of people with countless forums and blogs, as well as several organizations devoted to analyzing the evidence around charity effectiveness. We have conventional organizations, like GiveWell, coexisting with less conventional alternatives, like Wild-Animal Suffering Research.

All of these organizations need to justify their existence somehow. All of these blogs need to get shares and upvotes from someone.

If you believe (like I do) that the number of good charity recommendations might be quite small, then it follows that a large intellectual ecosystem will quickly exhaust these possibilities and begin finding plausible sounding alternatives.

I find it hard to believe that this isn’t already happening. We have people claiming that giving your friends cash or buying pizza for community events is the most effective charity. We have discussions of whether there is suffering in the fundamental particles of physics.

Effective Altruism is as much a philosophy movement as an empirical one. It isn’t always the case that we’ll be using P-values and statistics in our assessment. Sometimes, arguments are purely moral (like arguments about how much weight we should give to insect suffering). But both types of arguments can eventually drift into plausible sounding nonsense if we exhaust all of the real content.

There is no reason to expect that we should be able to tell when this happens. Certainly, experimental psychology wasn’t able to until several years after much-hyped studies more-or-less stopped replicating, despite a population that many people would have previously described as full of serious-minded empiricists. Many psychology researchers still won’t admit that much of the past work needs to be revisited and potentially binned.

This is a problem of incentives, but I don’t know how to make the incentives any better. As a blogger (albeit one who largely summarizes and connects ideas first broached by others), I can tell you that many of the people who blog do it because they can’t not write. There’s always going to be people competing to get their ideas heard and the people who most consistently provide satisfying insights will most often end up with more views.

Therefore, I suggest caution. We do not know how many true insights we should expect, so we cannot tell how likely to be true anything that feels insightful actually is. Against this, the best defense is highly developed scepticism. Always remember to ask for the implications of new insights and to determine what information would falsify them. Always assume new insights have a low chance of being true. Notice when there seems to be a pressure to produce novel insights long after the low-hanging fruit is gone, and be wary of anyone in that ecosystem.

We might not be able to change novelty culture, but we can do our best to guard against it.

[Special thanks to Cody Wild for coming up with most of the lyrics to Bayesian Hallelujah.]

Ethics, Philosophy, Quick Fix

Second Order Effects of Unjust Policies

In some parts of the Brazilian Amazon, indigenous groups still practice infanticide. Children are killed for being disabled, for being twins, or for being born to single mothers. This is undoubtedly a piece of cultural technology that existed to optimize resource distribution under harsh conditions.

Infanticide can be legally practiced because these tribes aren’t fully bound by Brazilian law. Under Brazilian legislation, indigenous tribes are bound by the law in proportion to how much they interact with the state. Remote Amazonian groups have a waiver from all Brazilian laws.

Reformers, led mostly by disabled indigenous people who’ve escaped infanticide and evangelicals, are trying to change this. They are pushing for a law that will outlaw infanticide, register pregnancies and birth outcomes, and punish people who don’t report infanticide.

Now I know that I have in the past written about using the outside view in cases like these. Historically, outsiders deciding they know what is best for indigenous people has not ended particularly well. In general, this argues for avoiding meddling in cases like this. Despite that, if I lived in Brazil, I would support this law.

When thinking about public policies, it’s important to think about the precedents they set. Opposing a policy like this, even when you have very good reasons, sends a message to the vast majority of the population, a population that views infanticide as wrong (and not just wrong, but a special evil). It says: “we don’t care about what is right or wrong, we’re moral relativists who think anything goes if it’s someone’s culture.”

There are several things to unpack here. First, there are the direct effects on the credibility of the people defending infanticide. When you’re advocating for something that most people view as clearly wrong, something so beyond the pale that you have no realistic chance of ever convincing anyone, you’re going to see some resistance to the next issue you take up, even if that issue isn’t beyond the pale. If the same academics defending infanticide turn around and try to convince people to accept human rights for trans people, they’ll find themselves with limited credibility.

Critically, this doesn’t happen with a cause where it’s actually possible to convince people that you are standing up for what is right. Gay rights campaigners haven’t been cut out of the general cultural conversation. On the contrary, they’ve been able to parlay some of their success and credibility from being ahead of the curve to help in related issues, like trans rights.

There’s no (non-apocalyptic) future where the people of Brazil eventually wake up okay with infanticide and laud the campaigners who stood up for it. But the people of Brazil are likely to wake up in the near future and decide they can’t ever trust the morals of academics who advocated for infanticide.

Second, it’s worth thinking about how people’s experience of justice colours their view of the government. When the government permits what is (to many) a great evil, people lose faith in the government’s ability to be just. This inhibits the government’s traditional role as solver of collective action problems.

We can actually see this manifest several ways in current North American politics, on both the right and the left.

On the left, there are many people who are justifiably mistrustful of the government, because of its historical or ongoing discrimination against them or people who look like them. This is why the government can credibly lock up white granola-crowd parents for failing to treat their children with medically approved medicines, but can’t when the parents are indigenous. It’s also why many people of colour don’t feel comfortable going to the police when they see or experience violence.

In both cases, historical injustices hamstring the government’s ability to achieve outcomes that it might otherwise be able to achieve if it had more credibly delivered justice in the past.

On the right, I suspect that some amount of skepticism of government comes from legalized abortion. The right is notoriously mistrustful of the government and I wonder if this is because it cannot believe that a government that permits abortion can do anything good. Here this hurts the government’s ability to pursue the sort of redistributive policies that would help the worst off.

In the case of abortion, the very real and pressing need for some women to access it is enough for me to view it as net positive, despite its negative effect on some people’s ability to trust the government to solve coordination problems.

Discrimination causes harms on its own and isn’t even justified on its own “merits”. Its effect on people’s perceptions of justice is just another reason it should be fought against.

In the case of Brazil, we’re faced with an act that is negative (infanticide) with several plausible alternatives (e.g. adoption) that allow the cultural purpose to be served without undermining justice. While the historical record of these types of interventions in indigenous cultures should give us pause, this is counterbalanced by the real harms justice faces as long as infanticide is allowed to continue. Given this, I think the correct and utilitarian thing to do is to support the reformers’ effort to outlaw infanticide.

Biology, Ethics, Literature, Philosophy

Book Review: The Righteous Mind

I – Summary

The Righteous Mind follows an argument structure I learned in high school debate club. It tells you what it’s going to tell you, it tells you it, then it reminds you what it told you. This made it a really easy read and a welcome break from The Origins of Totalitarianism, the other book I’ve been reading. Practically the very first part of The Righteous Mind proper (after the foreword) is an introduction to its first metaphor.

Imagine an elephant and a rider. They have travelled together since their birth and move as one. The elephant doesn’t say much (it’s an elephant), but the rider is very vocal – for example, she’s quick to apologize for and explain away any damage the elephant might do. A casual observer might think the rider is in charge, because she is so much cleverer and more talkative, but that casual observer would be wrong. The rider is the press secretary for the elephant. She explains its actions, but it is much bigger and stronger than her. It’s the one who is ultimately calling the shots. Sometimes she might convince it one way or the other, but in general, she’s buffeted along by it, stuck riding wherever it goes.

She wouldn’t agree with that last part though. She doesn’t want to admit that she’s not in charge, so she hides the fact that she’s mainly a press secretary even from herself. As soon as the elephant begins to move, she is already inventing a reason why it was her idea all along.

This is how Haidt views human cognition and decision making. In common terms, the elephant is our unconscious mind and the rider our consciousness. In Kahneman’s terms, the elephant is our System 1 and the rider our System 2. We may make some decisions consciously, but many of them are made below the level of our thinking.

Haidt illustrates this with an amusing anecdote. His wife asks him why he didn’t finish some dishes he’d been doing and he immediately weaves a story of their crying baby and barking incontinent dog preventing him. Only because he had his book draft open on his computer did he realize that these were lies… or rather, a creative and overly flattering version of the truth.

The baby did indeed cry and the dog did indeed bark, but neither of these prevented him from doing the dishes. The cacophony happened well before that. He’d been distracted by something else, something less sympathetic. But his rider, his “internal press secretary”, immediately came up with an excuse and told it, without any conscious input or intent to deceive.

We all tell these sorts of flattering lies reflexively. They take the form of slight, harmless embellishments to make our stories more flattering or interesting, or our apologies more sympathetic.

The key insight here isn’t that we’re all compulsive liars. It’s that the “I” that we like to think exists to run our life doesn’t, really. Sometimes we make decisions, especially ones the elephant doesn’t think it can handle (high-stakes apologies, anyone?), but normally decisions happen before we even think about them. From the perspective of Haidt, “I” is really “we”, the elephant and its rider. And we need to be careful to give the elephant its due, even though it’s quiet.

Haidt devotes a lot of pages to an impassioned criticism of moral rationalism, the belief that morality is best understood and attained by thinking very hard about it. He explicitly mentions that to make this more engaging, he wraps it up in his own story of entering the field of moral psychology.

He starts his journey with Kohlberg, who published a famous account of the stages of moral reasoning, stages that culminate in rationally building a model of justice. This paradigm took the world of moral psychology by storm and reinforced the view (dating in Western civilization to the time of the Greeks) that right thought had to precede right action.

Haidt was initially enamoured with Kohlberg’s taxonomy. But reading ethnographies and doing research in other countries began to make him suspect things weren’t as simple as Kohlberg thought. Haidt and others found that moral intuitions and responses to dilemmas differed by country. In particular, WEIRD people (people from countries that are Western, Educated, Industrialized, Rich, and Democratic – and most especially the most educated people in those countries) were very much able to tamp down feelings of disgust in moral problems, in a way that seemed far from universal.

For example, if asked if it was wrong for a family to eat their dog if it was killed by a car (and the alternative was burying it), students would say something along the lines of “well, I wouldn’t, but it’s gross, not wrong”. Participants recruited at a nearby McDonalds gave a rather different answer: “of course it’s wrong, why are you even asking”. WEIRD students at prestigious universities may have been working towards a rational, justice-focused explanation for morality, but Haidt found no evidence that this process (or even a focus on “justice”) was as universal as Kohlberg claimed.

That’s not to say that WEIRD students had no disgust response. In fact, trying to activate it gave even more interesting results. When asked to justify answers where disgust overpowered students’ sense of “well, as long as no one was hurt” (e.g. consensual adult sibling incest with no chance of children), Haidt observed that people would throw up a variety of weak excuses, often before they had a chance to think the problem through. When confronted with the weakness of their arguments, they’d go speechless.

This made Haidt suspect that two entirely separate processes were going on: a fast one for deciding and a slower one for explaining. Furthermore, the slower process was often left holding the bag for the faster one. Intuitions would provide an answer, then the subject would have to explain it, no matter how logically indefensible it was.

Haidt began to believe that Kohlberg had only keyed in on the second, slower process, “the talking of the rider” in metaphor-speak. From this point of view, Kohlberg wasn’t measuring moral sophistication. He was instead measuring how fluidly people could explain their often less than logical moral intuitions.

There were two final nails in the coffin of ethical rationalism for Haidt. First, he learned of a type of brain injury that separated people from their moral intuitions (or, as the rationalists might call them, “passions”). Contrary to the rationalist expectation, these people’s lives went to hell: they alienated everyone they knew, got fired from their jobs, and in general proved the unsuitability of pure reason for making many types of decisions – the exact opposite of what rationalists predicted would happen.

Second, he saw research that suggested that in practical measures (like missing library books), moral philosophers were no more moral than other philosophy professors.

Abandoning rationalism brought Haidt to a sentimentalist approach to ethics. In this view, ethics stemmed from feelings about how the world ought to be. These feelings are innate, but not immutable. Haidt describes people as “prewired”, not “hardwired”. You might be “prewired” to have a strong loyalty foundation, but a series of betrayals and let downs early in life might convince you that loyalty is just a lie, told to control idealists.

Haidt also believes that our elephants are uniquely susceptible to being convinced by other people in face to face discussion. He views the mechanism here as empathy at least as much as logic. People that we trust and respect can point out our weak arguments, with our respect for them and positive feelings towards them being the main motive force for us listening to these criticisms. The metaphor with elephants kind of breaks down here, but this does seem to better describe the world as it is, so I’ll allow it.

Because of this, Haidt would admit that rationalism does have some purpose in moral reasoning, but he thinks it is ancillary, mainly used to convince other people. I’m not sure how testable such evolutionary claims are, but it does seem plausible for there to have been selection pressure to make us really good at explaining ourselves and convincing others of our point of view.

As Haidt took this into account and began to survey peoples’ moral instincts, he saw that the ways in which responses differed by country and class were actually highly repeatable and seemed to gesture at underlying categories of people. After analyzing many, many survey responses, he and his collaborators came up with five (later six) moral “modules” that people have. Each moral module looks for violations of a specific class of ethical rules.

Haidt likens these modules to our taste-buds. The six moral tastes are the central metaphor of the second section of the book.

Not everyone has these taste-buds/modules in equal proportion. Looking at commonalities among respondents, Haidt found that the WEIRDer someone was, the less likely they were to have certain modules. Conservatives tended to have all modules in a fairly equal proportion, liberals tended to be lacking three. Libertarians were lacking a whopping four, which might explain why everyone tends to believe they’re the worst.

The six moral foundations are:

Care/Harm

This is the moral foundation that makes us care about suffering and pain in others. Haidt speculates that it originally evolved in order to ensure that children (which are an enormous investment of resources for mammals and doubly so for us) got properly cared for. It was originally triggered only by the suffering or distress of our own children, but can now be triggered by anyone being hurt, as well as cute cat videos or baby seals.

An expanding set of triggers seems to be a common theme for these. I’ve personally speculated that this is what we would observe if the brain were wired to minimize negative predictive error (i.e. to avoid mistaking a scene containing a lion for one without), rather than positive predictive error (i.e. to avoid mistaking a scene without a lion for one with). If you minimize positive predictive error, you’ll never be frightened by a shadow, but you might get eaten by a lion.
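To make this concrete, here’s a toy decision-theory sketch (the costs are invented for illustration, not taken from Haidt) of how asymmetric error costs push the trigger threshold down, so that even flimsy evidence sets the module off:

```python
# Toy model: decide whether to flee based on noisy evidence of a lion.
# All numbers are illustrative assumptions, not empirical estimates.
cost_false_alarm = 1    # the cost of being startled by a shadow
cost_miss = 1000        # the cost of being eaten by a lion

def should_flee(p_lion: float) -> bool:
    """Flee whenever the expected cost of staying exceeds the cost of fleeing."""
    return p_lion * cost_miss > (1 - p_lion) * cost_false_alarm

print(should_flee(0.01))  # True: a 1% chance of a lion is enough to run
```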

Fairness/Cheating

This is the moral foundation that makes us want everyone to do their fair share and makes us want to punish tax evaders or welfare cheats (depending on our political orientation). The evolutionary story given for this one is that it evolved to allow us to reap the benefits of two-way partnerships; it was an incentive against defecting.

Loyalty/Betrayal

This is the foundation that makes us rally around our politicians, community leaders, and sports teams, as well as the foundation that makes some people care more about people from their country than people in general. Haidt’s evolutionary explanation for this one is that it was supposed to ensure coherent groups.

Authority/Subversion

This is the moral foundation that makes people obey their boss without talking back or avoid calling their parents by their first names. It supposedly evolved to allow us to forge beneficial relationships within hierarchies. Basically, it may have once been very useful to have people believe and obey their elders without question (e.g. when the elders say “don’t drink that water, it’s poisoned”, no one does, and the story can be passed down and keep people safe, without someone having to die every few years to prove that the water is indeed poisoned).

Sanctity/Degradation

This is the moral foundation that makes people on the right leery of pre-marital sex and people on the left leery of “chemicals”. It shows up whenever we view our bodies as more than just our bodies and the world as more than just a collection of things, as well as whenever we feel that something makes us “spiritually” dirty.

The very plausible explanation for this one is that it evolved in response to the omnivore’s dilemma: how do we balance the desire for novel food sources with the risk they might poison us? We do it by avoiding anything that looks diseased or rotted. This became a moral foundation as we slowly began applying it to stuff beyond food – like other people. Historically, the sanctity moral framework was probably responsible for the despised status of lepers.

Liberty/Oppression

This moral foundation is always in tension with Authority/Subversion. It’s the foundation that makes us want to band together against and cast down anyone who is aggrandizing themselves or using their power to mistreat another.

Haidt suggests that this evolved to allow us to band together against “alpha males” and check their power. In his original surveys, it was part of Fairness/Cheating, but he found that separating it gave him much more resolving power between liberals and conservatives.

Of these six foundations, Haidt found that libertarians only had an appreciable amount of Liberty/Oppression and Fairness/Cheating and of these two, Liberty/Oppression was by far the stronger. While the other foundations did exist, they were mostly inactive and only showed up under extreme duress. For liberals, he found that they had Care/Harm, Liberty/Oppression, and Fairness/Cheating (in that order).

Conservatives in Haidt’s survey had all six moral foundations, like I said above. Care/Harm was their strongest foundation, but by having appreciable amounts of Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation, they would occasionally overrule Care/Harm in favour of one or another of these foundations.

Haidt uses these moral foundations to give an account of the “improbable” coalition between libertarians and social conservatives that closely matches the best ones to come out of political science. Basically, liberals and libertarians are descended (ideologically, if not filially) from those who embraced the enlightenment and the liberty it brought. About a hundred years ago (depending on the chronology and the country), the descendants of the enlightenment had a great schism, with some continuing to view the government as the most important threat to liberty (libertarians) and others viewing corporations as the more pressing threat (liberals). Liberals took over many organs of government and have been trying to use them to guarantee their version of liberty (with mixed results and many reversals) ever since.

Conservatives do not support this project of remaking society from the top down via the government. They believe that liberals want to change too many things, too quickly. Conservatives aren’t opposed to the government qua government. In fact, they’d be very congenial to a government that shared their values. But they are very hostile to a liberal, activist government (which is rightly or wrongly how conservatives view the governments of most western nations) and so team up with libertarians in the hopes of dismantling it.

This section – which characterized certain political views as stemming from “deficiencies” in certain “moral modules”, in a way that is probably hereditary – made me pause and wonder if this is a dangerous book. I’m reminded of Hannah Arendt talking about “tolerance” for Jews committing treason in The Origins of Totalitarianism.

It is an attraction to murder and treason which hides behind such perverted tolerance, for in a moment it can switch to a decision to liquidate not only all actual criminals but all who are “racially” predestined to commit certain crimes. Such changes take place whenever the legal and political machine is not separated from society so that social standards can penetrate into it and become political and legal rules. The seeming broad-mindedness that equates crime and vice, if allowed to establish its own code of law, will invariably prove more cruel and inhuman than laws, no matter how severe, which respect and recognize man’s independent responsibility for his behavior.

That said, it is possible for inconvenient or dangerous things to be true and their inconvenience or danger has no bearing on their truth. If Haidt saw his writings being used to justify or promote violence, he’d have a moral responsibility to decry the perpetrators. Accepting that sort of moral responsibility is, I believe, part of the responsibility that scientists who deal with sensitive topics must accept. I do not believe that this responsibility precludes publishing. I firmly believe that only right information can lead to right action, so I am on the whole grateful for Haidt’s taxonomy.

The similarities between liberals and libertarians extend beyond ethics. Both have more openness to experience and less of a threat response than conservatives. This explains why socially, liberals and libertarians have much more in common than liberals and conservatives.

Moral foundation theory gave me a vocabulary for some of the political writing I was doing last year. After the Conservative (Party of Canada) Leadership Convention, I talked about social conservative legislation as a way to help bind people to collective morality. I also talked about how holding other values very strongly and your values not at all can make people look diametrically opposed to you.

The third and final section of The Righteous Mind further focuses on political tribes. Its central metaphor is that humans are “90% chimp, 10% bee”. Its central purpose is to show how humans might have been subject to group selection and how our groupishness is important to our morality.

Haidt claims that group selection is heresy in evolutionary biology (beyond hive insects). I don’t have the evolutionary biology background to say if this is true or not, although this does match how I’ve seen it talked about online among scientifically literate authors, so I’m inclined to believe him.

Haidt walks through the arguments against group selection and shows how they are largely sensible. It is indeed ridiculous to believe that genes for altruism could be preserved in most cases. Imagine a gene that would make a deer more likely to sacrifice itself for the good of the herd if that seemed the only way to protect the herd’s young. This gene might help more deer in the herd reach adulthood, but it would also lead to any deer who had it having fewer children. There’s certainly an advantage to the herd if some members have this gene, but there’s no advantage to the carriers and a lot of advantage to every deer in the herd who doesn’t carry it. Free-riders will outcompete sacrificers and the selfless gene will get culled from the herd.

But humans aren’t deer. We can be selfish, yes, but we often aren’t and the ways we aren’t can’t be simply explained by greedy reciprocal altruism. If you’ve ever taken some time out of your day to help a lost tourist, congratulations, you’ve been altruistic without expecting anything in return. That people regularly do take time out of their days to help lost tourists suggests there might be something going on beyond reciprocal altruism.

Humans, unlike deer, have the resources and ability to punish free riders. We expect everyone to pitch in and might exile anyone who doesn’t. When humans began to form larger and larger societies, it makes sense that the societies that could better coordinate selfless behaviour would do better than those that couldn’t. And this isn’t just in terms of military cohesion (as the evolutionary biologist Lesley Newson had to point out to Haidt). A whole bunch of little selfless acts – sharing food, babysitting, teaching – can make a society more efficient than its neighbours at “turning resources into offspring”.

A human within the framework of society is much more capable than a human outside of it. I am only able to write this and share it widely because a whole bunch of people did the grunt work of making the laptop I’m typing it on, growing the food I eat, maintaining our communication lines, etc. If I was stuck with only my own resources, I’d be carving this into the sand (or more likely, already eaten by wolves).

Therefore, it isn’t unreasonable to expect that the more successful and interdependent a society could become, the more it would be able to outcompete, whether directly or indirectly, its nearby rivals, and so increase the proportion of its conditionally selfless genes in the human gene pool.

Conditional selflessness is a better description of the sorts of altruism we see in humans. It’s not purely reciprocal as Dawkins might claim, but it isn’t boundless either. It’s mostly reserved for people we view as similar to us. This doesn’t need to mean racially or religiously. In my experience, a bond as simple as doing the same sport is enough to get people to readily volunteer their time for projects like digging out and repairing a cracked foundation.

The switch from selfishness to selflessly helping out our teams is called “the hive switch” by Haidt. He devotes a lot of time to exploring how we can flip it and the benefits of flipping it. I agree with him that many of the happiest and most profound moments of anyone’s life come when the switch has been activated and they’re working as part of a team.

The last few chapters are an exploration of how individualism can undermine the hive switch, alongside several mistakes liberals make in their zeal to overturn all hierarchies. Haidt believes that societies have both social capital (the bonds of trust between people) and moral capital (the society’s ability to bind people to collective values) and worries that liberal individualism can undermine these to the point where people will be overall worse off. I’ll talk more about moral capital later in the review.

II – On Shaky Foundations

Anyone who reads The Righteous Mind might quickly realize that I left a lot of the book out of my review. There was a whole bunch of supporting evidence about how liberals and conservatives “really are” or how they differ that I have deliberately omitted.

You may have heard that psychology is currently in the midst of a “replication crisis”. Much (I’d crudely estimate somewhere between 25% and 50%) of the supporting evidence in this book has been a victim of this crisis.

Here’s what the summary of Chapter 3 looks like with the offending evidence removed:

Pictured: Page 82 of my edition of The Righteous Mind, after some “minor” corrections. Text is © 2012 Jonathan Haidt. Used here for purposes of commentary and criticism.

Here’s an incomplete list of claims that didn’t replicate:

  • IAT tests show that we can have unconscious prejudices that affect how we make social and political judgements (1, 2, 3 critiques/failed replications). Used to buttress the elephant/rider theory of moral decisions.
  • Disgusting smells can make us more judgemental (failed replication source). Used as evidence that moral reasoning can sometimes be explained by external factors and is much less rational than we’d like to believe.
  • Babies prefer a nice puppet over a mean one, even when pre-verbal and probably lacking the context to understand what is going on (failed replication source). Used as further proof for how we are “prewired” for certain moral instincts.
  • People from Asian societies are better able to do relative geometry and less able to do absolute geometry than westerners (failed replication source). This was used to make the individualistic morality of westerners seem inherent.
  • The “Lady Macbeth Effect” showed a strong relationship between physical and moral feelings of “cleanliness” (failed replication source). Used to further strengthen the elephant/rider analogy.

The proper attitude with which to view psychology studies these days is extreme scepticism. There is a series of bad incentives (it’s harder and less prestigious to publish negative findings; publishing is necessary to advance in your career) that have led scientists in psychology (and other fields) to inadvertently and advertently publish false results. In any field in which you expect true discoveries to be rare (and I think “interesting and counter-intuitive things about the human brain” fits that bill), you shouldn’t allow any individual study to influence you very much. For a full breakdown of how this can happen even when scientists check for statistical significance, I recommend reading “Why Most Published Research Findings Are False” (Ioannidis 2005).
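Ioannidis’ core point can be reproduced with a few lines of arithmetic. Here’s a minimal sketch, with rates that are my own illustrative assumptions:

```python
# When true effects are rare, most "significant" findings are false.
prior = 0.1   # assumed fraction of tested hypotheses that are actually true
alpha = 0.05  # false positive rate at the usual significance threshold
power = 0.8   # assumed chance that a real effect is detected

true_positives = prior * power
false_positives = (1 - prior) * alpha

# Probability that a positive finding reflects a real effect:
ppv = true_positives / (true_positives + false_positives)
print(f"{ppv:.0%}")  # ~64%, before accounting for p-hacking or publication bias
```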

Moral foundations theory appears to have escaped the replication crisis mostly unscathed (as has Tversky and Kahneman’s work on heuristics, something that made me more comfortable including the elephant/rider analogy). I think this is because moral foundations theory is primarily a descriptive theory. It grew out of a large volume of survey responses and represents clusters in those responses. It makes little in the way of concrete predictions about the world. It’s possible to quibble with the way Haidt and his collaborators drew the category boundaries. But given the sheer volume of responses they received – and the fact that they based their results not just on WEIRD individuals – it’s hard to believe that they haven’t come up with a reasonable clustering of the possibility space of human values.
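For a sense of what this kind of descriptive work involves, here’s a minimal sketch of clustering survey responses (the data is a random placeholder of my own; Haidt’s team worked from real Moral Foundations Questionnaire responses, with far more careful statistics):

```python
# Cluster questionnaire responses into groups of similar moral profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Placeholder: 1000 respondents scoring 30 items from 0 to 5.
# Real responses would show genuine structure; this random data won't.
responses = rng.integers(0, 6, size=(1000, 30))

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(responses)
print(np.bincount(model.labels_))  # sizes of the discovered clusters
```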

I will say that stripped of much of its ancillary evidence, Haidt’s attack on rationalism lost a lot of its lustre. It’s one thing to believe morality is mostly unconscious when you think that washing your hands or smelling trash can change how moral you act. It’s quite another when you know those studies were fatally flawed. The replication crisis fueled my inability to truly believe Haidt’s critique of rationality. This disbelief in turn became one of the two driving forces in my reaction to this book.

Haidt’s moral relativism around patriarchal cultures was the other.

III – Less and Less WEIRD

It’s good that Haidt looked at a variety of cultures. This is a thing few psychologists do. There’s historically been an alarming tendency to run studies on western undergraduate students, then declare “this is how people are”. This would be fine if western undergraduates were representative of people more generally, but I think that assumption was on shaky foundations even before moral foundation theory showed that morally, at least, it was entirely false.

Haidt even did some of this field work himself. He visited South America and India to run studies. In fact, he mentioned that this field work was one of the key things that made him question the validity of western individualistic morality and wary of morality that didn’t include the sanctity, loyalty, and authority foundations.

His willingness to get outside of his bubble and to learn from others is laudable.

But.

There is one key way in which Haidt never left his bubble, a way which makes me inherently suspicious of all of his defences of the sanctity, authority, and loyalty moral foundations. Here’s him recounting his trip to India. Can you spot the fatal omission?

I was told to be stricter with my servants, and to stop thanking them for serving me. I watched people bathe in and cook with visibly polluted water that was held to be sacred. In short, I was immersed in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine.

It only took a few weeks for my dissonance to disappear, not because I was a natural anthropologist but because the normal human capacity for empathy kicked in. I liked these people who were hosting me, helping me, and teaching me. Wherever I went, people were kind to me. And when you’re grateful to people, it’s easier to adopt their perspective. My elephant leaned toward them, which made my rider search for moral arguments in their defense. Rather than automatically rejecting the men as sexist oppressors and pitying the women, children, and servants as helpless victims, I began to see a moral world in which families, not individuals, are the basic unit of society, and the members of each extended family (including its servants) are intensely interdependent. In this world, equality and personal autonomy were not sacred values. Honoring elders, gods, and guests, protecting subordinates, and fulfilling one’s role-based duties were more important.

Haidt tried out other moral systems, sure, but he tried them out from the top. Lois McMaster Bujold once had a character quip: “egalitarians adjust to aristocracies just fine, as long as they get to be the aristocrats”. I would suggest that liberals likewise find the authority framework all fine and dandy, as long as they have the authority.

Would Haidt have been able to find anything worth salvaging in the authority framework if he’d instead been a female researcher, who found herself ignored, denigrated, and sexually harassed on her research trip abroad?

It’s frustrating when Haidt lectures liberals on their “deficient” moral framework while simultaneously failing to grapple with the fact that he is remarkably privileged. “Can’t you see how this other society knows some moral truths [like men holding authority over women] that we’ve lost” is much less convincing when the author of the sentence stands to lose absolutely nothing in the bargain. It’s easy to lecture others on the hard sacrifices society “must” make – and far harder to look for sacrifices that will mainly affect you personally.

It is in this regard that I found myself wondering if this might have been a more interesting book if it had been written by a woman. If the hypothetical female author were to defend the authority framework, she’d actually have to defend it, instead of hand-waving the defence with a request that we respect and understand all ethical frameworks. And if this hypothetical author found it indefensible, we would have been treated to an exploration of what to do if one of our fundamental ethical frameworks was flawed and had to be discarded. That would be an interesting conversation!

Not only that, but perhaps a female author would have given more pages to the observation that women’s and children’s role in societal altruism was just as important as that of men (as child-rearing is a more reliable way to demonstrate and cash in on groupishness than battle), instead of relegating it to a brief note at the end of the chapter on group selection. This perspective is genuinely new to me and I wanted to see it developed further.

Ultimately, Haidt’s defences of Authority/Subversion, Loyalty/Betrayal, and Sanctity/Degradation fell flat in the face of my Care/Harm and Liberty/Oppression focused moral compass. Scott Alexander once wrote about the need for “a solution to the time-limitedness of enlightenment that works from within the temporal perspective”. By the same token, I think Haidt fails to deliver a defence of conservatism or anything it stands for that works from within the liberal Care/Harm perspective. Insofar as his book was meant to bridge inferential gaps and political divides, this makes it a failure.

That’s a shame, because arguments that bridge this divide do exist. I’ve read some of them.

IV – What if Liberals are Wrong?

There is a principle called “Chesterton’s Fence”, which comes from the famed Catholic conservative and author G.K. Chesterton. It goes like this: if you see a fence blocking the road and cannot see the reason for it to be there, should you remove it? Chesterton said “no!”, resoundingly. He suggested you should first understand the purpose of the fence. Only then may you safely remove it.

There is a strain of careful conservatism that holds Chesterton’s fence as its dearest parable. Haidt makes brief mention of this strain of thought, but doesn’t expound on it successfully. I think it is this thought and this thought only that can offer Care/Harm focused liberals like myself a window into the redeeming features of the conservative moral frameworks.

Here’s what the argument looks like:

Many years ago, western nations had a unified moral framework. This framework supported people in making long-term decisions and acting in a pro-social manner. There are many people who want to act differently than they would if left to their own devices, and this framework helped them do that.

Liberals began to dismantle this system in the sixties. They saw hierarchies and people being unable to do the things they wanted to do, so tried to take down the whole edifice without first checking if any of it was doing anything important.

This strand of conservatism would argue that it was. They point to the increasing number of children born to parents who aren’t married (although increasingly these parents aren’t teens, which is pretty great), increasing crime (although this has started to fall after we took lead out of gasoline), increasing atomisation, decreasing church attendance, and increasing rates of anxiety and depression (although it is unclear how much of this is just people feeling more comfortable getting treatment).

Here’s the thing. All of these trends affect well-educated and well-off liberals the least. We’re safe from crime in good neighbourhoods. We overwhelmingly wait until stable partnerships to have children. We can afford therapists and pills to help us with any mental health issues we might have, and rehab to help us kick any drug habits we pick up.

Throwing off the old moral matrix has been an unalloyed good for privileged white liberals. We get to have our cake and eat it too – we have fun and take risks, but know that we have a safety net waiting to catch us should we fall.

The conservative appeal to tradition points out that our good time might be at the expense of the poor. It asks us if our hedonistic pleasures are worth a complete breakdown in stability for people with fewer advantages than us. It asks us to consider sacrificing some of these pleasures so that they might be better off. I know many liberals who might find the sacrifice of some of their freedom to be a moral necessity, if framed this way.

But even here, social conservatism has the seeds of its own undoing. I can agree that children do best when brought up by loving and committed parents who give them a lot of stability (moving around in childhood is inarguably bad for many kids). Given this, the social conservative opposition to gay marriage (despite all evidence that it doesn’t mess kids up) is baffling. The sensible position would have been “how can we use this to make marriage cool again?”, not “how long can we delay this?”.

This is a running pattern with social conservatism. It conserves blindly, without giving thought to what is even worth preserving. If liberals have some things wrong, that doesn’t automatically mean that the opposite is correct. It’s disturbingly easy for people on both sides of an issue to be wrong.

I’m sure Haidt would point out that this is why we have the other frameworks. But because of who I am, I’m personally much more inclined to do things in the other direction – throw out most of the past, then re-implement whatever we find to be useful but now lacking.

V – What if Liberals Listened?

In Berkeley, California, its environs, and assorted corners of the Internet, there exists a community that calls themselves “Rationalists” – a moniker they keep despite agreeing with Haidt about the futility of rationalism. Epistemically, they tend to be empiricists. Ethically, non-cognitivist utilitarians. Because they are largely Americans, they tend to be politically disengaged, but if you held them at gunpoint and demanded a political affiliation, they would probably say either “liberal” or “libertarian”.

The rationalist community has semi-public events that mimic many of the best parts of religious events, normally based around the solstices (although I also attended a secular Seder when I visited last year).

This secular simulacrum of a religion has been enough to fascinate at least one Catholic.

The rationalist community has managed to do the sort of thing Haidt despaired of: create a strong community with communal morality in a secular, non-authoritarian framework. There are communal norms (although they aren’t very normal; polyamory and vegetarianism or veganism are very common). People tend to think very hard before having children and take care ensuring that any children they have will have a good extended support structure. People live in group houses, which combats atomisation.

This is also a community that is very generous. Many of the early adherents of Effective Altruism were drawn from the rationalist community. It’s likely that rationalists donate to charity in amounts more similar to Mormons than atheists (with the added benefit of almost all of this money going to saving lives, rather than proselytizing).

No community is perfect. This is a community made up of people. It has its fair share of foibles and megalomanias, bad actors and jerks. But it represents something of a counterpoint to Haidt’s arguments about the “deficiency” of a limited framework morality.

Furthermore, its altruism isn’t limited in scope, the way Haidt believes all communal altruism must necessarily be. Rationalists encourage each other to give to causes like malaria eradication (which mainly helps people in Africa), or AI risk (which mainly helps future people). Because there are few cost effective local opportunities to do good (for North Americans), this global focus allows for more lives to be saved or improved per dollar spent.

All of this is, I think, the natural result of thoughtful people throwing away most cultural traditions and vestiges of traditionalist morality, then seeing what breaks and fixing those things in particular. It’s an example of what I wished for at the end of the last section, applied to the real world.

VI – Is or Ought?

I hate to bring up the Hegelian dialectic, but I feel like this book fits neatly into it. We had the thesis: “morality stems from rationality” that was so popular in western political thought. Now we have the antithesis: “morality and rationality are separate horses, with rationality subordinate – and this is right and proper”.

I can’t wait for someone other than Haidt to write a synthesis: a view that rejects rationalism as the basis of human morality but grapples with the fact that we yearn for perfection.

Haidt, in the words of Joseph Heath, thinks that moral discourse is “essentially confabulatory”, consisting only of made up stories that justify our moral impulses. There may be many ways in which this is true, but it doesn’t account for the fact that some people read Peter Singer’s essay “Famine, Affluence, and Morality” and go donate much of their money to the global poor. It doesn’t account for all those who have listened to the Sermon on the Mount and then abandoned their possessions to live a monastic life.

I don’t care whether you believe in The Absolute, or God, or Allah, or The Cycle of Rebirth, or the World Soul, or The Truth, or nothing at all. You probably have felt that very human yearning to be better. To do better. You’ve probably believed that there is a Good and it can perhaps be comprehended and reached. Maybe this is the last vestiges of my atrophied sanctity foundation talking, but there’s something base about believing that morality is solely a happy accident of how we evolved.

The is/ought fallacy occurs when we take what “is” and decide it is what “ought” to be. If you observe that murder is part of the natural order and conclude that it is therefore moral, you have committed this fallacy.

Haidt has observed the instincts that build towards human morality. His contributions to this field have helped make many things clear and make many conflicts more understandable. But in deciding that these natural tastes are the be-all and end-all of human morality, by putting them ahead of reason, religion, and every philosophical tradition, he has committed this fundamental error.

At the start of The Righteous Mind, Haidt approvingly mentions those scientists who once thought that ethics could be taken away from philosophers and studied instead by scientists alone.

But science can only ever tell us what is, never what ought to be. As a book about science, The Righteous Mind is a success. But as a work on ethics, as an expression of how we ought to behave, it is an abysmal failure.

In this area, the philosophers deserve to keep their monopoly a little longer.

Model, Philosophy

When Remoter Effects Matter

In utilitarianism, “remoter effects” are the result of our actions influencing other people (and are hotly debated). I think that remoter effects are often overstated, especially (as Bernard Williams said in Utilitarianism for and against) when they give the conventionally ethical answer. For example, a utilitarian might claim that the correct answer to the hostage dilemma [1] is to kill no one, because killing weakens the sanctity of human life and may lead to more deaths in the future.

When debating remoter effects, I think it’s worthwhile to split them into two categories: positive and negative. Positive remoter effects are when your actions cause others to refrain from some negative action they might otherwise take. Negative remoter effects are when your actions make it more likely that others will engage in a negative action [2].

Of late, I’ve been especially interested in ways that positive and negative remoter effects matter in political disagreements. To what extent will acting in an “honourable” [3] or pro-social way convince one’s opponents to do the same? Conversely, does fighting dirty bring out the same tendency in your opponents?

Some of my favourite bloggers are doubtful of the first proposition:

In “Deontologist Envy”, Ozy writes that we shouldn’t necessarily be nice to our enemies in the hopes that they’ll be nice to us:

In general people rarely have their behavior influenced by their political enemies. Trans people take pains to use the correct pronouns; people who are overly concerned about trans women in bathrooms still misgender them. Anti-racists avoid the use of slurs; a distressing number of people who believe in human biodiversity appear to be incapable of constructing a sentence without one. Social justice people are conscientious about trigger warnings; we are subjected to many tedious articles about how mentally ill people should be in therapy instead of burdening the rest of the world with our existence.

In “The Blues of Self-Regulation”, David Schraub talks about how this specifically applies to Republicans and Democrats:

The problem being that, even when Democrats didn’t change a rule protecting the minority party, Republicans haven’t even blinked before casting them aside the minute they interfered with their partisan agenda.

Both of these points are basically correct. Everything that Ozy says about asshats on the internet is true and David wrote his post in response to Republicans removing the filibuster for Supreme Court nominees.

But I still think that positive remoter effects are important in this context. When they happen (and I will concede that this is rare), it is because you are consistently working against the same political opponents and at least some of those opponents are honourable people. My favourite example here (although it is from war, not politics) is the Christmas Day Truce. This truce was so successful and widespread that high command undertook to move men more often to prevent a recurrence.

In politics, I view positive remoter effects as key to Senator John McCain repeatedly torpedoing the GOP healthcare plans. While Senators Murkowski and Collins framed their disagreements with the law around their constituents, McCain specifically mentioned the secretive, hurried and partisan approach to drafting the legislation. This stood in sharp contrast to Obamacare, which had numerous community consultations, went through committee and took special (and perhaps ridiculous) care to get sixty senators on board.

Imagine that Obamacare had been passed after secret drafting and no consultations. Imagine if Democrats had dismantled even more rules in the senate. They may have gotten a few more of their priorities passed or had a stronger version of Obamacare, but right now, they’d be seeing all that rolled back. Instead of evidence of positive remoter effects, we’d be seeing a clear case of negative ones.

When dealing with political enemies, positive remoter effects require a real sacrifice. It’s not enough not to do things that you don’t want to do anyway (like all the examples Ozy listed) and certainly not enough to refrain from doing things to third parties. For positive remoter effects to matter at all – for your opponents (even the honourable ones) not to say “well, they did it first and I don’t want to lose” – you need to give up some tools that you could use to advance your interests. Tedious journalists don’t care about you scrupulously using trigger warnings, but may appreciate not receiving death threats on Twitter.

Had right-wingers refrained from doxxing feminist activists (or even applied any social consequences at all against those who did so), all principled people on the left would be refusing to engage in doxxing against them. As it stands, that isn’t the case and those few leftists who ask their fellow travelers to refrain are met with the entirely truthful response: “but they started it!”

This highlights what might be an additional requirement for positive remoter effects in the political sphere: you need a clearly delimited coalition from which you can eject misbehaving members. Political parties are set up admirably for this. They regularly kick out members who fail to act as decorously as their office demands. Social movements have a much harder time, with predictable consequences – it’s far too easy for the most reprehensible members of any group to quickly become the representatives, at least as far as tactics are concerned.

Still, with positive remoter effects, you are not aiming at a movement or party broadly. Instead you are seeking to find those honourable few in it and inspire them on a different path. When it works (as it did with McCain), it can work wonders. But it isn’t something to lay all your hopes on. Some days, your enemies wake up and don’t screw you over. Other days, you have to fight.

Negative remoter effects seem so obvious as to require almost no explanation. While it’s hard (but possible) to inspire your opponents to civility with good behaviour, it’s depressingly easy to bring them down to your level with bad behaviour. Acting honourably guarantees little, but acting dishonourably basically guarantees a similar response. Insofar as honour is a useful characteristic, it is useful precisely because it stops this slide towards mutual annihilation.

Notes:

[1] In the hostage dilemma, you are one of ten hostages, captured by rebels. The rebel leader offers you a gun with a single bullet. If you kill one of your fellow hostages, all of the survivors (including you) will be let free. If you refuse, all of the hostages (including you) will be killed. You are guarded such that you cannot use the weapon against your captors. Your only option is to kill another hostage, or let all of the hostages be killed.

Here, I think remoter effects fail to salvage the conventional answer and the only proper utilitarian response is to kill one of the other hostages. ^

[2] Here I’m using “negative” in a roughly utilitarian sense: negative actions are those that tend to reduce the total utility of the world. When used towards good ends, negative actions consume some of the positive utility that the ends generate. When used towards ill ends, negative actions add even more disutility. This definition is robust against different preferred plans of actions (e.g. it works across liberals and conservatives, who might both agree that political violence tends to reduce utility, even if it doesn’t always reduce utility enough to rule it out in the face of certain ends), but isn’t necessarily robust across all terminal values (e.g. if you care only about reducing suffering and I care only for increasing happiness we may have different opinions on the tendency of reproduction towards good or ill).

Negative actions are roughly equivalent to “defecting”. “Roughly” because it is perhaps more accurate to say that the thing that makes defecting so pernicious is that it involves negative actions of a special class, those that generate extra disutility (possibly even beyond what simple addition would suggest) when both parties engage in them. ^

[3] I used “honourable” in several important places and should probably define it. When discussing actions, I think honourable actions are the opposite of “negative” actions as defined above: actions that tend towards the good, but can be net ill if used for bad ends. When describing “people” as honourable, I’m pointing to people who tend to reinforce norms around cooperation. This is more or less equivalent to being inherently reluctant to use negative actions to advance goals unless provoked.

My favourite example of honour is Salah ad-Din. He sent his own personal physician to tend to King Richard, who was his great enemy, and used his own money to buy back a child kidnapped into slavery. Conveniently for me, Salah ad-Din shows both sides of what it means to be honourable. He personally executed Raynald of Châtillon after Raynald ignored a truce, attacked Muslim caravans, and tortured many of the caravaners to death. To Guy of Lusignan, King of Jerusalem (who was captured in the same battle as Raynald and wrongly feared he was next to die), Salah ad-Din said: “[i]t is not the wont of kings, to kill kings; but that man had transgressed all bounds, and therefore did I treat him thus.” ^

Ethics, Philosophy

Utilitarian Virtue Ethics

[4 minute read]

The nagging question that both halves of Utilitarianism for and against left me with is: “can utilitarianism exist without veering off into total assessment?”

Total assessment is the direct comparison of all the consequences of different actions. It is not so much a prediction that an individual can make as the province of an omniscient god. If you cannot perfectly predict all of the future, you cannot perform a total assessment. It’s conceptually useful – whenever a utilitarian is backed into a corner, they can fall back on total assessment as their decision-making tool – but it’s practically useless.

Absent total assessment, utilitarians kind of have to make their best guess and go with it. Even my beloved precedent utilitarianism isn’t much help here; precedent utilitarianism focuses on a class of consequences that traditional utilitarianism can miss. It does little to help an individual figure out all of the consequences of their actions.

If it is hard to guess the consequences of our actions, or if the guessing would be prohibitively time-consuming, what is the utilitarian to do? One appealing option is a distinctly utilitarian virtue ethics. This virtue ethics would define a good life as one lived with the virtues that cause you to make optimific decisions.

I think it is possible for such a system to maintain a distinctly utilitarian character and thereby avoid Williams’ prediction that utilitarianism must, if accepted, “usher itself from the scene.”

The first distinct characteristic of a utilitarian virtue ethics would be its heterogeneity. Classical virtue ethics holds that there are a set of virtues that can cause one to live a good life. The utilitarian would instead seek to cultivate the virtues that would cause her to act in an optimific way. These would necessarily be individualized; it may very well be optimific for an ambitious and clever utilitarian to cultivate greed and drive while acquiring a fortune, then cultivate charity while giving it away (see Bill Gates).

There is the obvious danger here that cultivating temporarily anti-utilitarian virtues could lead to permanent values drift. The best countermeasure against this would be a varied community of utilitarians, who would cultivate a variety of virtues and help bind each other to the shared utilitarian cause, helping whenever expediency threatens to pull one away from it.

Next, a utilitarian virtue ethics would treat no virtue as sacred. Honesty, charity, kindness, and bravery – all of these must be conditional on the best outcome. Because the best outcome is hard to determine, they might be good rules of thumb, but the utilitarian must always be prepared to break a moral rule if there is more utility to be had.

Third, the utilitarian would seek to avoid cognitive biases and learn to make decisions quickly. Avoiding cognitive biases increases the chance that rules of thumb will be broken out of genuine utilitarian concern, rather than thinly veiled self-interest. Learning to make decisions quickly helps avoid the wasted time pondering “what is the right thing to do?”

While the traditional virtue ethicist might read the works of the great classical philosophers to better understand virtue, a utilitarian virtue ethicist would focus on learning Fermi estimation, Bayesian statistics, and the works of Daniel Kahneman.
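For a flavour of that training, here’s a minimal Bayesian update of the kind such a utilitarian would practise (all of the numbers are illustrative assumptions of mine):

```python
# Update a prior belief that an intervention works, given one positive study.
prior = 0.3              # assumed prior that the intervention works
p_pos_if_works = 0.8     # assumed study power
p_pos_if_not = 0.1       # assumed false positive rate

posterior = (prior * p_pos_if_works) / (
    prior * p_pos_if_works + (1 - prior) * p_pos_if_not
)
print(f"{posterior:.0%}")  # ~77%: more confident, but far from certain
```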

The easiest ways for a utilitarian to fail to treat the world as it really is are to ignore the things they cannot measure, or to ignore truths they find personally uncomfortable. We did not evolve for clear thinking and there is always the risk that we will get ourselves turned around, substituting what is best for us for what is best for the world.

One hang-up I have with this idea is that I just described a bunch of my friends in the rationality and effective altruism communities. How likely is it that this is merely self-serving, instead of the natural endpoint of all of the utilitarian philosophy I’ve been reading?

On one hand, this is a community of utilitarians who are similar to me, so convergence in outputs given the same inputs is more or less expected.

On the other, this could be a classic example of seeing the world how I wish it to be, rather than how it is. “Go hang out with people you already like, doing the things you were already going to do” isn’t much of an ethical ask. Given that the world is in a dire state, utilitarians should be sceptical of any conclusion that their ethical system requires little of them.

There could be other problems with this proposal, but I’m not sure that I’m the type of person who could see them. For now, this represents my best attempt to reconcile my utilitarian ethics with the realities of the modern world. But I will be careful. Ease is ever seductive.

Ethics, Literature, Philosophy

Book Review: Utilitarianism for and against (Part 2)

[33 minute read]

Three weeks ago, I reviewed the first half of Utilitarianism for and against. This week I’ll be reviewing the second half, the against side. I should note that I’m a utilitarian and therefore likely to be biased against the arguments presented here. If my criticism is rather thicker than last week, it is not because the author of the second essay is any worse than the first.

The author is one Sir Bernard Williams. According to his Wikipedia page, he was a particularly humanistic philosopher in the old Greek mode. He was sceptical of attempts to build an analytical foundation for moral philosophy and of his own prowess in arguments. It seems that he had something pithy or cutting to say about everything, which made him notably cautious of pithy or clever answers. He’s also described as a proto-feminist, although you wouldn’t know it from his writing.

Williams didn’t write his essay out of a rationalist desire to disprove utilitarianism with pure reason (a concept he seemed every bit as sceptical of as Smart was). Instead, Williams wrote this essay because he agrees with Smart that utilitarianism is a “distinctive way of looking at human action and morality”. It’s just that unlike Smart, Williams finds the specific distinctive perspective of utilitarianism often horrible.

Smart anticipated this sort of reaction to his essay. He himself despaired of finding a single ethical system that could please everyone, or even please a single person in all their varied moods.

One of the very first things I noticed in Williams’ essay was the challenge of attacking utilitarianism on its own terms. To convince a principled utilitarian that utilitarianism is a poor choice of ethical system, it is almost always necessary to appeal to the consequences of utilitarianism. This forces any critic to frame their arguments a certain way, a way which might feel unnatural. Or repugnant.

Williams begins his essay proper with (appropriately) a discussion of consequences. He points out that it is difficult to hold actions as valuable purely by their consequences because this forces us to draw arbitrary lines in time and declare the state of the world at that time the “consequences”. After all, consequences continue to unfold forever (or at least, until the heat death of the universe). To have anything to talk about at all Williams decides that it is not quite consequences that consequentialism cares about, but states of affairs.

Utilitarianism is the form of consequentialism that has happiness as its sole important value and seeks to bring about the state of affairs with the most happiness. I like how Williams undoes the question-begging that utilitarianism commonly engages in; he essentially asks ‘why should happiness be the only thing we treat as intrinsically valuable?’ Williams mercifully didn’t drive this home, but I was still left with uncomfortable questions for myself.

Instead he moves on to his first deep observation. You see, if consequentialism were just about valuing certain states of affairs more than others, you could call deontology a form of consequentialism that holds duty to be the only intrinsically valuable thing. But that can’t be right, because deontology is clearly different from consequentialism. The distinction, Williams suggests, is that consequentialists discount the possibility of actions holding any inherent moral weight. For a consequentialist, an action is right because it brings about a better state of affairs. For non-consequentialists, a state of affairs can be better – even if it contains less total happiness or integrity or whatever they care about than a counterfactual state of affairs given a different action – because the right action was taken.

A deontologist would say that it is right for someone to do their duty in a way that ends up publicly and spectacularly tragic, such that it turns a thousand people off of doing their own duty. A consequentialist who viewed duty as important for the general moral health of society – who, in Smart’s terminology, viewed acting from duty as good – would disagree.

Williams points out that this very emphasis on comparing states of affairs (so natural to me) is particularly consequentialist and utilitarian. That is to say, it is not particularly meaningful for a deontologist or a virtue ethicist to compare states of affairs. Deontologists have no duty to maximize the doing of duty; if you ask a deontologist to choose between a state of affairs that has one hundred people doing their duty and another that has a thousand, it’s not clear that either state is preferable from their point of view. Sure, deontologists think people should do their duty. But duty embodied in actions is the point, not some cosmic tally of duty.

Put as a moral statement, non-consequentialists lack any obligation to bring about more of what they see as morally desirable. A consequentialist may feel both fondness for and a moral imperative to bring about a universe where more people are happy. Non-consequentialists only have the fondness.

One deontologist of my acquaintance said that trying to maximize utility felt pointless – they viewed it as about as morally important as getting a high score in a Tetris game. We ended up staring at each other in blank incomprehension.

In Williams’ view, rejection of consequentialism doesn’t necessarily lead to deontology, though. He sums it up simply as: “all that is involved… in the denial of consequentialism, is that with respect to some type of action, there are some situations in which that would be the right thing to do, even though the state of affairs produced by one’s doing that would be worse than some other state of affairs accessible to one.”

A deontologist will claim right actions must be taken no matter the consequences, but to be non-consequentialist, an ethical system merely has to claim that some actions are right despite a variety of more or less bad consequences that might arise from them.

Or, as I wrote angrily in the margins: “ok, so not necessarily deontology, just accepting sub-maximal global utility“. It is hard to explain to a non-utilitarian just how much this bugs me, but I’m not going to go all rationalist and claim that I have a good reason for this belief.

Williams then turns his attention to the ways in which he thinks utilitarianism’s insistence on quantifying and comparing everything is terrible. Williams believes that by refusing to categorically rule any action out (or worse, by specifically trying to come up with situations in which we might do horrific things), utilitarianism encourages people – even non-utilitarians who bump into utilitarian thought experiments – to think of things in utilitarian (that is to say, explicitly comparative) terms. It seems like Williams would prefer there to be actions that are clearly ruled out, not just less likely to be justified.

I get the impression of a man almost tearing out his hair because for him, there exist actions that are wrong under all circumstances and here we are, talking about circumstances in which we’d do them. There’s a kernel of truth here too. I think there can be a sort of bravado in accepting utilitarian conclusions. Yeah, I’m tough enough that I’d kill one to save one thousand? You wouldn’t? I guess you’re just soft and old-fashioned. For someone who cares as much about virtue as I think Williams does, this must be abhorrent.

I loved how Williams summed this up.

The demand… to think the unthinkable is not an unquestionable demand of rationality, set against a cowardly or inert refusal to follow out one’s moral thoughts. Rationality he sees as a demand not merely on him, but on the situations in and about which he has to think; unless the environment reveals minimum sanity, it is insanity to carry the decorum of sanity into it.

For all that I enjoyed the phrasing, I don’t see how this changes anything; there is nothing at all sane about the current world. A life is worth something like $7 million to $9 million and yet can be saved for less than $5000. This planet contains some of the most wrenching poverty and lavish luxury imaginable, often in the very same cities. Where is the sanity? If Williams thinks sane situations are a reasonable precondition to sane action, then he should see no one on earth with a duty to act sanely.

The next topic Williams covers is responsibility. He starts with a discussion of agent interchangeability in utilitarianism. Williams believes that utilitarianism merely requires that someone do the right thing. This implies that to the utilitarian, there is no meaningful difference between me doing the utilitarian right action and you doing it, unless something about me doing it instead of you leads to a different outcome.

This utter lack of concern for who does what, as long as the right thing gets done, doesn’t actually absolve utilitarians of responsibility. Instead, it tends to increase it. Williams says that unlike adherents of many ethical systems, utilitarians have negative responsibilities: they are just as much responsible for the things they don’t do as for the things they do. If something has to be done and no one else will do it, then you have to.

This doesn’t strike me as that unique to utilitarianism. I was raised Catholic and can attest that Catholics (who are supposed to follow a form of virtue ethics) have a notion of negative responsibility too. At every mass, before receiving the Eucharist, Catholics ask God for forgiveness for their sins: in thoughts and in words, in what they have done and in what they have failed to do.

Leaving aside whether the concept of negative responsibility is uniquely utilitarian or not, Williams does see problems with it. Negative responsibility makes so much of what we do dependent on the people around us. You may wish to spend your time quietly growing vegetables, but be unable to do so because you have a particular skill – perhaps even one that you don’t really enjoy doing – that the world desperately needs. Or you may wish never to take a life, yet be confronted with a run-away trolley that can only be diverted from hitting five people by pulling the lever that makes it hit one.

This didn’t really make sense to me as a criticism until I learned that Williams deeply cares about people living authentic lives. In both the cases above, authenticity played no role in the utilitarian calculus. You must do things, perhaps things you find abhorrent, because other people have set up the world such that terrible outcomes would happen if you didn’t.

It seems that Williams might consider it a tragedy for someone to feel compelled by their ethical system to do something inauthentic. I imagine he views this as about as much of a crying waste of human potential as I view the yearly deaths of 429,000 people from malaria. For all my personal sympathy for him, I am less than sympathetic to a view that gives these the same weight (or treats inauthenticity as the greater tragedy).

Radical authenticity requires us to ignore society. Yes, utilitarianism plops us in the middle of a web of dependencies and a buffeting sea of choices that were not ours, while demanding we make the best out of it all. But our moral philosophies surely are among the things that push us towards an authentic life. Would Williams view it as any worse that someone was pulled from her authentic way of living because she would starve otherwise?

To me, there is a certain authenticity in following your ethical system wherever it leads. I find this authenticity beautiful, but not worthy of moral consideration, except insofar as it affects happiness. Williams finds this authenticity deeply important. But by rejecting consequentialism, he has no real way to argue for more of the qualities he desires, except perhaps as a matter of aesthetics.

It seems incredibly counter-productive to me to say to people – people in the midst of a society that relentlessly pulls them away from authenticity with impersonal market forces – that they should turn away from the one ethical system that has a happier society as its desired outcome. A Kantian has her duty to duty, but as long as she does that, she cares not for the system. A virtue ethicist wishes to be virtuous and authentic, but outside of her little bubble of virtue, the terrors go on unabated. It’s only the utilitarian who holds a better society as an end in itself.

Maybe this is just me failing to grasp non-utilitarian epistemologies. It baffles me to hear “this thing is good and morally important, but it’s not like we think it’s morally important for there to be more of it; that goes too far!”. Is this a strawman? If someone could explain what Williams is getting at here in terms I can understand, I’d be most grateful.

I do think Williams misses one key thing when discussing the utilitarian response to negative responsibility. Actions should be assessed on the margin, not in isolation. That is to say, the marginal effect of someone becoming a doctor, or undertaking some other career generally considered benevolent, is quite low if there are others also willing to do the job. A doctor might personally save hundreds, or even thousands of lives over her career, but her marginal impact will be saving something like 25 lives.

The reasons for this are manifold. First, when there are few doctors, they tend to concentrate on the most immediately life-threatening problems. As you add more and more doctors, they can help, but after a certain point, the supply of doctors will outstrip the demand for urgent life-saving attention. They can certainly help with other tasks, but they will each save fewer lives than the first few doctors.

Second, there is a somewhat fixed supply of doctors. Despite many, many people wishing they could be doctors, only so many can get spots in medical school. Even assuming that medical school admissions departments are perfectly competent at assessing future skill at being a doctor (and no one really believes they are), your decision to attend medical school (and your successful admission) doesn’t result in one extra doctor. It simply means that you were slightly better than the next best person (who would have been admitted if you weren’t).

Finally, when you become a doctor you don’t replace one of the worst already practising doctors. Instead, you replace a retiring doctor who is (for statistical purposes) about average for her cohort.

All of this is to say that utilitarians should judge actions on the margin, not in absolute terms. It isn’t that bad (from a utilitarian perspective) not to devote all your attention to the most effective direct work, because unless a certain project is very constrained by the number of people working on it, you shouldn’t expect to make much marginal difference. On the other hand, earning a lot of money and giving it to highly effective charities (or even a more modest commitment, like donating 10% of your income) is likely to do a huge amount of good, because most people don’t do this, so you’re replacing a person at a high paying job who was doing (from a utilitarian perspective) very little good.
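To make the replaceability argument concrete, here’s a back-of-the-envelope sketch. The numbers are mine and purely illustrative (Williams offers none):

```python
# Illustrative only: invented numbers for the replaceability argument.
# A doctor "directly" saves many lives, but her marginal impact is what
# changes because she, rather than the next-best applicant, filled the
# limited medical school seat.

direct_lives_saved = 600   # assumed: lives saved over a whole career
skill_edge = 0.04          # assumed: her edge over the next-best applicant

marginal_lives_saved = direct_lives_saved * skill_edge
print(f"direct impact:   {direct_lives_saved} lives")
print(f"marginal impact: {marginal_lives_saved:.0f} lives")  # ~24, not 600
```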

Williams either isn’t familiar with this concept, or omitted it in the interest of time or space.

Williams’ next topic is remoter effects. A remoter effect is any effect that your actions have on the decision making of other people. For example, if you’re a politician and you lie horribly, are caught, and get re-elected by a large margin, a possible remoter effect is other politicians lying more often. With the concept of remoter effects, Williams is pointing at what I call second order utilitarianism.

Williams makes a valid point that many of the justifications from remoter effects that utilitarians make are very weak. For example, despite what some utilitarians claim, telling a white lie (or even telling any lie that is unpublicized) doesn’t meaningfully reduce the propensity of everyone in the world to tell the truth.

Williams thinks that many utilitarians get away with claiming remoter effects as justification because they tend to be used as a way to make utilitarianism give the common, respectable answers to ethical dilemmas. He thinks people would be much more skeptical of remoter effects if they were ever used to argue for positions that are uncommonly held.

This point about remoter effects was, I think, a necessary precursor to Williams’ next thought experiment. He asks us to imagine a society with two groups, A and B. There are many more members of A than B. Furthermore, members of A are disgusted by the presence (or even the thought of the presence) of members of group B. In this scenario, there has to exist some level of disgust and some ratio between A and B that makes the clear utilitarian best option relocating all members of group B to a different country.
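To make the structure of the trap explicit (the formalization is mine, not Williams’): suppose each of the $N_A$ members of A suffers disgust-disutility $d$ from B’s presence, while each of the $N_B$ members of B suffers disutility $c$ from being relocated. Relocation is then the utilitarian best option whenever

$$N_A \cdot d > N_B \cdot c,$$

and for any fixed $c$ and any $d > 0$, a large enough ratio $N_A / N_B$ satisfies the inequality. The thought experiment simply asks us to grant numbers on that side of the line.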

With Williams’ recent reminder that most remoter effects are weaker than we like to think still ringing in my ears, I felt fairly trapped by this dilemma. There are clear remoter effects here: you may lose the ability to advocate against this sort of ethnic cleansing in other countries. Successful, minimally condemned ethnic cleansing could even encourage copy-cats. In the real world, these might both be valid rejoinders, but for the purposes of this thought experiment, it’s clear these could be nullified (e.g. if we assume few other societies like this one and a large direct utility gain).

The only way out that Williams sees fit to offer us is an obvious trap. What if we claimed that the feelings of group A were entirely irrational and that they should just learn to live with them? Then we wouldn’t be stuck advocating for what is essentially ethnic cleansing. But humans are not rational actors. If we were to ignore all such irrational feelings, then utilitarianism would no longer be a pragmatic ethical system that interacts with the world as it is. Instead, it would involve us interacting with the world as we wish it to be.

Furthermore, it is always a dangerous game to discount other people’s feelings as irrational. The problem with the word irrational (in the vernacular, not utilitarian sense) is that no one really agrees on what is irrational. I have an intuitive sense of what is obviously irrational. But so, alas, do you. These senses may align in some regions (e.g. we both may view it as irrational to be angry because of a belief that the government is controlled by alien lizard-people), but not necessarily in others. For example, you may view my atheism as deeply irrational. I obviously do not.

Williams continues this critique to point out that much of the discomfort that comes from considering – or actually doing – things the utilitarian way comes from our moral intuitions. While Smart and I are content to discount these feelings, Williams is horrified at the thought. To view discomfort from moral intuitions as something outside yourself, as an unpleasant and irrational emotion to be avoided, is – to Williams – akin to losing all sense of moral identity.

This strikes me as more of a problem for rationalist philosophers. If you believe that morality can be rationally determined via the correct application of pure reason, then moral intuitions must be key to that task. From a materialist point of view though, moral intuitions are evolutionary baggage, not signifiers of something deeper.

Still, Williams made me realize that this left me vulnerable to the question “what is the purpose of having morality at all if you discount the feelings that engender morality in most people?”, a question I’m at a loss to answer well. All I can say (tautologically) is “it would be bad if there was no morality”; I like morality and want it to keep existing, but I can’t ground it in pure reason or empiricism; no stone tablets have come from the world. Religions are replete with stone tablets and justifications for morality, but they come with metaphysical baggage that I don’t particularly want to carry. Besides, if there was a hell, utilitarians would have to destroy it.

I honestly feel like a lot of my disagreement with Williams comes from our differing positions on the intuitive/systematizing axis. Williams has an intuitive, fluid, and difficult to articulate sense of ethics that isn’t necessarily transferable or even explainable. I have a system that seems workable and like it will lead to better outcomes. But it’s a system and it does have weird, unintuitive corner cases.

Williams talks about how integrity is a key moral stance (I think motivated by his insistence on authenticity). I agree with him as to the instrumental utility of integrity (people won’t want to work with you or help you if you’re an ass or unreliable). But I can’t ascribe integrity some sort of quasi-metaphysical importance or treat it as a terminal value in itself.

In the section on integrity, Williams comes back to negative responsibility. I do really respect Williams’ ability to pepper his work with interesting philosophical observations. When talking about negative responsibility, he mentions that most moral systems acknowledge some difference between allowing an action to happen and causing it yourself.

Williams believes the moral difference between action and inaction is conceptually important, “but it is unclear, both in itself and in its moral applications, and the unclarities are of a kind which precisely cause it to give way when, in very difficult cases, weight has to be put on it”. I am jealous three times over at this line, first at the crystal-clear metaphor, second at the broadly applicable thought underlying the metaphor, and third at the precision of language with which Williams pulls it off.

(I found Williams a less consistent writer than Smart. Smart wrote his entire essay in a tone of affable explanation and managed to inject a shocking amount of simplicity into a complicated subject. Williams frequently confused me – which I feel comfortable blaming at least in part on our vastly different axioms – but he was capable of shockingly resonant turns of phrase.)

I doubt Williams would be comfortable coming down either way on inaction’s equivalence to action. To the great humanist, it must ultimately (I assume) come down to the individual humans and what they authentically believed. Williams here is scoffing at the very idea of trying to systematize this most slippery of distinctions.

For utilitarians, the absence or presence of a distinction is key to figuring out what they must do. Utilitarianism can imply “a boundless obligation… to improve the world”. How a utilitarian undertakes this general project (of utility maximization) will be a function of how she can affect the world, but it cannot, to Williams, ever be the only project anyone undertakes. If it were the only project, underlain by no other projects, then it would, in Williams’ words, be “vacuous”.

The utilitarian can argue that her general project will not be the only project, because most people aren’t utilitarian and therefore have their own projects going on. Of course, this only gets us so far. Does this imply that the utilitarian should not seek to convince too many others of her philosophy?

What does it even mean for the general utilitarian project to be vacuous? As best I can tell, what Williams means is that if everyone were utilitarian, we’d all care about maximally increasing the utility of the world, but either be clueless where to start or else constantly tripping over each other (imagine, if you can, millions of people going to sub-Saharan Africa to distribute bed nets, all at the same time). The first order projects that Williams believes must underlie a more general project are things like spending time with friends, or making your family happy. Williams also believes that it might be very difficult for anyone to be happy without some of these more personal projects.

I would suggest that what each utilitarian should do is what they are best suited for. But I’m not sure if this is coherent without some coordinating body (i.e. a god) ensuring that people are well distributed for all of the projects that need doing. I can also suppose that most people can’t go that far on willpower. That is to say, there are few people who are actually psychologically capable of working to improve the world in a way they don’t enjoy. I’m not sure I have the best answer here, but my current internal justification leans much more on the second answer than the first.

Which is another way of saying that I agree with Williams; I think utilitarianism would be self-defeating if it suggested that the only project anyone should undertake is improving the world generally. I think a salient difference between us is that he seems to think utilitarianism might imply that people should only work on improving the world generally, whereas I do not.

This discussion of projects leads to Williams talking about the hedonic paradox (the observation that you cannot become happy by seeking out pleasures), although Williams doesn’t reference it by name. Here Williams comes dangerously close to a very toxic interpretation of the hedonic paradox.

Williams believes that happiness comes from a variety of projects, not all of which are undertaken for the good of others or even because they’re particularly fun. He points out that few of these projects, if any, are the direct pursuit of happiness and that happiness seems to involve something beyond seeking it. This is all conceptually well and good, but I think it makes happiness seem too mysterious.

I wasted years of my life believing that the hedonic paradox meant that I couldn’t find happiness directly. I thought if I did the things I was supposed to do, even if they made me miserable, I’d find happiness eventually. Whenever I thought of rearranging my life to put my happiness first, I was reminded of the hedonic paradox and desisted. That was all bullshit. You can figure out what activities make you happy and do more of those and be happier.

There is a wide gulf between the hedonic paradox as originally framed (which is purely an observation about pleasures of the flesh) and the hedonic paradox as sometimes used by philosophers (which treats happiness as inherently fleeting and mysterious). I’ve seen plenty of evidence for the first, but absolutely none for the second. With his critique here, I think Williams is arguably shading into the second definition.

This has important implications for the utilitarian. We can agree that for many people, the way to most increase their happiness isn’t to get them blissed out on food, sex, and drugs, without this implying that we will have no opportunities to improve the general happiness. First, we can increase happiness by attacking the sources of misery. Second, we can set up robust institutions that are conducive to happiness. A utilitarian urban planner would perhaps give just as much thought to ensuring there are places where communities can meet and form as she would to ensuring that no one would be forced to live in squalor.

Here’s where Williams gets twisty though. He wanted us to come to the conclusion that a variety of personal projects are necessary for happiness so that he could remind us that utilitarianism’s concept of negative responsibility puts great pressure on an agent not to have her own personal projects beyond the maximization of global happiness. The argument here seems to be (not for the first time) that utilitarianism is self-defeating because it will make everyone miserable if everyone is a utilitarian.

Smart tried to short-circuit arguments like this by pointing out that he wasn’t attempting to “prove” anything about the superiority of utilitarianism, simply presenting it as an ethical system that might be more attractive if it was better understood. Faced with Williams’ point here, I believe that Smart would say that he doesn’t expect everyone to become utilitarian and that those who do become utilitarian (and stay utilitarian) are those most likely to have important personal projects that are generally beneficent.

I have the pleasure of reading the blogs and Facebook posts of many prominent (for certain unusual values of prominent) utilitarians. They all seem to be enjoying what they do. These are people who enjoy research, or organizing, or presenting, or thought experiments and have found ways to put these vocations to use in the general utilitarian project. Or people who find that they get along well with utilitarians and therefore steer their career to be surrounded by them. This is basically finding ikigai within the context of utilitarian responsibilities.


Saying that utilitarianism will never be popular outside of those suited for it means accepting we don’t have a universal ethical solution. This is, I think, very pragmatic. It also doesn’t rule out utilitarians looking for ways we can encourage people to be more utilitarian. To slightly modify a phrase that utilitarian animal rights activists use: the best utilitarianism is the type you can stick with; it’s better to be utilitarian 95% of the time than it is to be utilitarian 100% of the time – until you get burnt out and give it up forever.
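The arithmetic behind that slogan is simple, but worth making explicit (the career length and burnout point are invented):

```python
# Sustained partial adherence vs. total adherence followed by burnout.
career_years = 40
burnout_after = 5  # assumed: 100% intensity lasts five years, then zero

sustainable_95 = 0.95 * career_years  # 38 adherence-years
intense_100 = 1.00 * burnout_after    # 5 adherence-years, then nothing
print(sustainable_95, intense_100)    # 38.0 vs 5.0
```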

I would also like to add a criticism of Williams’ complaint that utilitarian actions are overly determined by the actions of others. Namely, the status quo certainly isn’t perfect. If we are to reject action because it is not among the projects we would most like to be doing, then we are tacitly endorsing the status quo. Moral decisions cannot be made in a vacuum and the terrain in which we must make moral decisions today is one marked by horrendous suffering, inequality, and unfairness.

The next two sections of Williams’ essay were the most difficult to parse, but also the most rewarding. They deal with the interplay between calculating utilities and utilitarianism and question the extent to which utilitarianism is practical outside of appealing to the idea of total utility. That is to say, they ask if the unique utilitarian ethical frame can, under practical conditions, have practical effects.

To get to the meat of Williams’ points, I had to wade through what at times felt like word games. All of the things he builds up to throughout these lengthy sections begin with a premise made up of two points that Williams thinks are implied by Smart’s essay.

  1. All utilities should be assessed in terms of acts. If we’re talking about rules, governments, or dispositions, their utility stems from the acts they either engender or prevent.
  2. To say that a rule (as an example) has any effect at all, we must say that it results in some change in acts. In Williams’ words: “the total utility effect of a rule’s obtaining must be cashable in terms of the effects of acts.”

Together, (1) and (2) make up what Williams calls the “act-adequacy” premise. If the premise is true, there must be no surplus source of utility outside of acts and, as Smart said, rule utilitarianism should (if it is truly concerned with optimific outcomes) collapse to act utilitarianism. This is all well and good when comparing systems as tools of total assessment (e.g. when we take the universe wide view that I criticized Smart for hiding in), but Williams is first interested in how this causes rule and act utilitarianism to relate to actions.

If you asked an act utilitarian and a rule utilitarian “what makes that action right”, they would give different answers. The act utilitarian would say that it is right if it maximizes utility, but the rule utilitarian would say it is right if it is in accordance with rules that tend to maximize utility. Interestingly, if the act-adequacy premise is true, then both act and rule utilitarians would agree as to why certain rules or dispositions are desirable: namely, that the actions resulting from those rules or dispositions tend to maximize utility.

(Williams also points out that rules, especially formal rules, may derive utility from sources other than just actions following the rule. Other sources of utility include: explaining the rule, thinking about the rule, avoiding the rule, or even breaking the rule.)

But what do we do when actually faced with the actions that follow from a rule or disposition? Smart has already pointed out that we should praise or blame based on the utility of the praise/blame, not on the rightness or wrongness of the action we might be praising.

In Williams’ view, there are two problems with this. First, it is not a very open system. If you knew someone was praising or blaming you out of a desire to manipulate your future actions and not in direct relation to their actual opinion of your past actions, you might be less likely to accept that praise or blame. Therefore, it could very well be necessary for the utilitarian to hide why acts are being called good or bad (and therefore the reasons why they praise or blame).

The second problem is how this suggests utilitarians should stand with themselves. Williams acknowledges that utilitarians in general try not to cry over spilt milk (“[this] carries the characteristically utilitarian thought that anything you might want to cry over is, like milk, replaceable”), but argues that utilitarianism replaces the question of “did I do the right thing?” with “what is the right thing to do?” in a way that may not be conducive to virtuous thought.

(Would a utilitarian Judas have lived to old age contentedly, happy that he had played a role in humankind’s eternal salvation?)

The answer to “what is the right thing to do?” is of course (to the utilitarian) “that which has the best consequences”. Except “what is the right thing to do?” isn’t actually the right question to ask if you’re truly concerned with the best consequences. In that case, the question is “if asking this question is the right thing to do, what actions have the best consequences?”

Remember, Smart tried to claim that utilitarianism was to only be used for deliberative actions. But it is unclear which actions are the right ones to take as deliberative, especially a priori. Sometimes you will waste time deliberating, time that in the optimal case you would have spent on good works. Other times, you will jump into acting and do the wrong thing.
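Here’s a toy expected-utility model of that bind (the probabilities and payoffs are assumptions of mine, not anything Smart or Williams commit to):

```python
# Toy model with invented numbers: when does deliberating beat instinct?
utility_right, utility_wrong = 100, 20  # assumed payoffs

def expected_utility(p_right, deliberation_cost=0.0):
    """Expected happiness of acting, minus any time spent deliberating."""
    return p_right * utility_right + (1 - p_right) * utility_wrong - deliberation_cost

print(expected_utility(0.70))      # instinct:   76.0
print(expected_utility(0.90, 15))  # deliberate: 77.0 (barely worth it)
print(expected_utility(0.90, 25))  # deliberate: 67.0 (now a mistake)
# Nothing tells you, before deliberating, which side of the line any
# given decision falls on. That is exactly the problem Williams raises.
```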

The difference between act (direct) and rule (indirect) utilitarianism therefore comes to a question of motivation vs. justification. Can a direct utilitarian use “the greatest total good” as a motivation if they do not know if even asking the question “what will lead to the greatest total good?” will lead to it? Can it only ever be a justification? The indirect utilitarian can be motivated by following a rule and justify her actions by claiming that generally followed, the rule leads to the greatest good, but it is unclear what recourse (to any direct motivation for a specific action) the direct utilitarian has.

Essentially, adopting act utilitarianism requires you to accept that because you have accepted act utilitarianism you will sometimes do the wrong thing. It might be that you think that you have a fairly good rule of thumb for deliberating, such that this is still the best of your options to take (and that would be my defense), but there is something deeply unsettling and somewhat paradoxical about this consequence.

Williams makes it clear that the bad outcomes here aren’t just loss of an agent’s time. This is similar in principle to how we calculate the total utility of promulgating a rule. We accept that the total effects of the promulgation must include the utility or disutility that stems from avoiding it or breaking it, in addition to the utility or disutility of following it. When looking at the costs of deliberation, we should also include the disutility that will sometimes come when we act deliberately in a way that is less optimific than we would have acted had we spontaneously acted in accordance with our disposition or moral intuitions.

This is all in the case where the act-adequacy premise is true. If it isn’t, the situation is more complex. What if some important utility of actions comes from the mood they’re done in, or in them being done spontaneously? Moods may be engineered, but it is exceedingly hard to engineer spontaneity. If the act-adequacy premise is false, then it may not hold that the (utilitarian) best world is one in which right acts are maximized. In the absence of the act-adequacy premise it is possible (although not necessarily likely) that the maximally happy world is one in which few people are motivated by utilitarian concerns.

Even if the act-adequacy premise holds, we may be unable to know if our actions are at all right or wrong (again complicating the question of motivation).

Williams presents a thought experiment to demonstrate this point. Imagine a utilitarian society that noticed its younger members were liable to stray from the path of utilitarianism. This society might set up a Truman Show-esque “reservation” of non-utilitarians, with the worst consequences of their non-utilitarian morality broadcast for all to see. The youth wouldn’t stray and the utility of the society would be increased (for now, let’s beg the question of utilitarianism as a lived philosophy being optimific).

Here, the actions of the non-utilitarian holdouts would be right; on this both utilitarians (looking from a far enough remove) and the subjects themselves would agree. But this whole thing only works if the viewers think (incorrectly) that the actions they are seeing are wrong.

From the global utilitarian perspective, it might even be wrong for any of the holdouts to become utilitarian (even if utilitarianism was generally the best ethical system). If the number of viewers is large enough and the effect of one fewer irrational holdout is strong enough (this is a thought experiment, so we can fiddle around with the numbers such that this is indeed true), the conversion of a hold-out to utilitarianism would be really bad.

Basically, it seems possible for there to be a large difference between the correct action as chosen by the individual utilitarian with all the knowledge she has and the correct action as chosen from the perspective of an omniscient observer. From the “total assessment” perspective, it is even possible that it would be best that there be no utilitarians.

Williams points out that many of the qualities we value and derive happiness from (stubborn grit, loyalty, bravery, honour) are not well aligned with utilitarianism. When we talked about ethnic cleansing earlier, we acknowledged that utilitarianism cannot distinguish between preferences people have and the preferences people should have; both are equally valid. With all that said, there’s a risk of resolving the tension between non-utilitarian preferences and the joy these preferences can bring people by trying to shape the world not towards maximum happiness, but towards the happiness easiest to measure and most comfortable to utilitarians.

Utilitarianism could also lead to disutility because of the game theoretic consequences. On international projects or projects between large groups of people, sanctioning other actors must always be an option. Without sanctioning, the risk of defection is simply too high in many practical cases. But utilitarians are uniquely compelled to sanction (or else surrender).

If there is another group acting in an uncooperative or anti-utilitarian manner, the utilitarians must apply the least terrible sanction that will still be effective (as the utility of those they’re sanctioning still matters). The other group will of course know this and have every incentive to commit to making any conflict arising from the sanction so terrible as to make any sanctioning wrong from a utilitarian point of view. Utilitarians now must call the bluff (and risk horrible escalating conflict), or else abandon the endeavour.

This is in essence a prisoner’s dilemma. If the non-utilitarians carry on without being sanctioned, or if they change their behaviour in response to sanctions without escalation, everyone will be better off (than in the alternative). But if utilitarians call the bluff and find it was not a bluff, then the results could be catastrophic.

Williams seems to believe that utilitarians will never include an adequate fudge factor for the dangers of mutual defecting. He doesn’t suggest pacifism as an alternative, but he does believe that violent sanctioning should always be used at a threshold far beyond where he assesses the simple utilitarian one to lie.

This position might be more of a historical one, in reaction to the efficiency, order, and domination obsessed Soviet Communism (and its Western fellow travelers), who tended towards utilitarian justifications. All of the utilitarians I know are committed classical liberals (indeed, it sometimes seems to me that only utilitarians are classical liberals these days). It’s unclear if Williams’ criticism can be meaningfully applied to utilitarians who have internalized the severe detriments of escalating violence.

While it seems possible to produce a thought experiment where even such committed second order utilitarians would use the wrong amount of violence or sanction too early, this seems unlikely to come up in a practical context – especially considering that many of the groups most keen on using violence early and often these days aren’t in fact utilitarian. Instead it’s members of both the extreme left and right, who have independently – in an amusing case of horseshoe theory – adopted a morality based around defending their tribe at all costs. This sort of highly local morality is anathema to utilitarians.

Williams didn’t anticipate this shift. I can’t see why he shouldn’t have. Utilitarians are ever pragmatic and (should) understand that utilitarianism isn’t served by starting horrendous wars willy-nilly.

Then again, perhaps this is another harbinger of what Williams calls “utilitarianism ushering itself from the scene”. He believes that the practical problems of utilitarian ethics (from the perspective of an agent) will move utilitarianism more and more towards a system of total assessment. Here utilitarianism may demand certain things in the way of dispositions or virtues and certainly it will ask that the utility of the world be ever increased, but it will lose its distinctive character as a system that suggests actions be chosen in such a way as to maximize utility.

Williams calls this the transcendental viewpoint and pithily asks “if… utilitarianism has to vanish from making any distinctive mark in the world, being left only with the total assessment from the transcendental standpoint – then I leave it for discussion whether that shows that utilitarianism is unacceptable or merely that no one ought to accept it.”

This, I think, ignores the possibility that it might become easier in the future to calculate the utility of certain actions. The results of actions are inherently chaotic and difficult to judge, but then, so is the weather. Weather prediction has been made tractable by the application of vast computational power. Why not morality? Certainly, this can’t be impossible to envision. Iain M. Banks wrote a whole series of books about it!

Of course, if we wish to be utilitarian on a societal level, we must currently do so without the support of godlike AI. Which is what utilitarianism was invented for in the first place. Here it was attractive because it is minimally committed – it has no elaborate theological or philosophical commitments buttressing it, unlike contemporaneous systems (like Lockean natural law). There is something intuitive about the suggestion that a government should only be concerned for the welfare of the governed.

Sure, utilitarianism makes no demands on secondary principles, Williams writes, but it is extraordinarily demanding when it comes to empirical information. Utilitarianism requires clear, comprehensible, and non-cyclic preferences. For any glib rejoinders about mere implementation details, Williams has this to say:

[These problems are] seen in the light of a technical or practical difficulty and utilitarianism appeals to a frame of mind in which technical difficulty, even insuperable technical difficulty, is preferable to moral unclarity, no doubt because it is less alarming.

Williams suggests that the simplicity of utilitarianism isn’t a virtue, only indicative of “how little of the world’s luggage it is prepared to pick up”. By being immune to concerns of justice or fairness (except insofar as they are instrumentally useful to utilitarian ends), Williams believes that utilitarianism fails at many of the tasks that people desire from a government.

Personally, I’m not so sure a government commitment to fairness or justice is at all illuminating. There are currently at least two competing (and mutually exclusive) definitions of both fairness and justice in political discourse.

Should fairness be about giving everyone the same things? Or should it be about giving everyone the tools they need to have the same shot at meaningful (of course noting that meaningful is a societal construct) outcomes? Should justice mean taking into account mitigating factors and aiming for reconciliation? Or should it mean doing whatever is necessary to make recompense to the victim?

It is too easy to use fairness or justice as a sword without stopping to assess whom it is aimed at and what the consequences of that aim are (says the committed consequentialist). Fairness and justice are meaty topics that deserve better than to be thrown around as a platitudinous counterargument to utilitarianism.

A much better critique of utilitarian government can be made by imagining how such a government would respond to non-utilitarian concerns. Would it ignore them? Or would it seek to direct its citizens to have only non-utilitarian concerns? The latter idea seems practically impossible. The first raises important questions.

Imagine a government that is minimally responsive to non-utilitarian concerns. It primarily concerns itself with maximizing utility, but accepts the occasional non-utilitarian decision as the cost it must pay to remain in power (presume that the opposition is not utilitarian and would be very responsive to non-utilitarian concerns in a way that would reduce the global utility). This government must necessarily look very different to the utilitarian elite who understand what is going on and the masses who might be quite upset that the government feels obligated to ignore many of their dearly held concerns.

Could such an arrangement exist with a free media? With free elections? Democracies are notably less corrupt than autocracies, so there are significant advantages to having free elections and free media. But how, if those exist, does the utilitarian government propose to keep its secrets hidden from the population? And if the government was successful, how could it respect its citizens, so duped?

In addition to all that, there is the problem of calculating how to satisfy people’s preferences. Williams identifies three problems here:

  1. How do you measure individual welfare?
  2. To what extent is welfare comparative?
  3. How do you develop the aggregate social preference given the answers to the preceding two questions? (A toy illustration of this last problem follows below.)
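That third problem is sharper than it looks, and connects to the requirement for non-cyclic preferences mentioned above. A minimal illustration (mine, not Williams’): three voters with perfectly clear individual rankings can aggregate, by pairwise majority, into a cycle with no “best option” at all:

```python
# Three voters, three options, each voter with a clear transitive ranking.
from itertools import combinations

voters = [
    ["A", "B", "C"],  # voter 1 prefers A to B to C
    ["B", "C", "A"],  # voter 2
    ["C", "A", "B"],  # voter 3
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    return sum(v.index(x) < v.index(y) for v in voters) * 2 > len(voters)

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} to {loser}")
# Prints: A over B, C over A, B over C. The "social preference" is a
# cycle, so there is no aggregate best option to maximize towards.
```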

Williams seems to suggest that a naïve utilitarian approach involves what I think is best summed up in a sick parody of Marx: from each according to how little they’ll miss it, to each according to how much they desire it. Surely there cannot be a worse incentive structure imaginable than the one naïve utilitarianism suggests?

When dealing with preferences, it is also the case that utilitarianism makes no distinction between fixing inequitable distributions that cause discontent or – as observed in America – convincing those affected by inequitable distributions not to feel discontent.

More problems arise around substitution or compensation. It may be more optimific for a roadway to be built one way than another and it may be more optimific for compensation to be offered to those who are affected, but it is unclear that the compensation will be at all worth it for those affected (to claim it would be, Williams declares, is “simply an extension of the dogma that every man has his price”). This is certainly hard for me to think about, even (or perhaps especially) because the common utilitarian response is a shrug – global utility must be maximized, after all.

Utilitarianism is about trade-offs. And some people have views which they hold to be beyond all trade-off. It is even possible for happiness to be buttressed or rest entirely upon principles – principles that when dearly and truly held cannot be traded-off against. Certainly, utilitarians can attempt to work around this – if such people are a minority, they will be happily trammelled by a utilitarian majority. But it is unclear what a utilitarian government could do in such a case where the majority of their population is “afflicted” with deeply held non-utilitarian principles.

Williams sums this up as:

Perhaps humanity is not yet domesticated enough to confine itself to preferences which utilitarianism can handle without contradiction. If so, perhaps utilitarianism should lope off from an unprepared mankind to deal with problems it finds more tractable – such as that presented by Smart… of a world which consists only of a solitary deluded sadist.

Finally, there’s the problem of people being terrible judges of what they want, or simply not understanding the effects of their preferences (as the Americans who rely on the ACA but want Obamacare to be repealed may find out). It is certainly hard to walk the line between respecting preferences people would have if they were better informed or truly understood the consequences of their desires and the common (leftist?) fallacy of assuming that everyone who held all of the information you have must necessarily have the same beliefs as you.

All of this combines to make Williams view utilitarianism as dangerously irresponsible as a system of public decision making. It assumes that preferences exist, that the method of collecting them doesn’t fail to capture meaningful preferences, that these preferences would be vindicated if implemented, and that there’s a way to trade-off among all preferences.

To the potential utilitarian rejoinder that half a loaf is better than none, he points out that a partial version of utilitarianism is very vulnerable to the streetlight effect. It might be used where it can and therefore act to legitimize – as “real” – concerns in the areas where it can be used and delegitimize those where it is unsuitable. This can easily lead to the McNamara fallacy – deliberate ignorance of everything that cannot be quantified:

The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.

— Daniel Yankelovich “Corporate Priorities: A continuing study of the new demands on business.” (1972)

This isn’t even to mention something that every serious student of economics knows: that when dealing with complicated, idealized systems, it is not necessarily the non-ideal system that is closest to the ideal (out of all possible non-ideal systems) that has the most benefits of the ideal. Economists call this the “theory of the second best”. Perhaps ethicists might call it “common sense” when applied to their domain?
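For anyone who hasn’t seen it, here is a toy version of the theory of the second best; the demand curve, costs, and externality are all invented for illustration. One market, two distortions (a monopoly and a pollution externality), and removing only the monopoly makes things worse:

```python
# Invented example: demand P = 100 - q, private marginal cost 20,
# unpriced pollution cost of 30 per unit sold.

def welfare(q, marginal_cost=20, external_cost=30):
    """Social welfare: gross consumer benefit minus all social costs."""
    gross_benefit = 100 * q - q**2 / 2  # area under the demand curve
    return gross_benefit - (marginal_cost + external_cost) * q

print(welfare(40))  # monopoly output (MR = MC):             1200
print(welfare(80))  # competitive output, externality kept:   800
print(welfare(50))  # the true social optimum:               1250
# Moving from monopoly to competition gets "closer to the ideal" on one
# distortion yet lowers welfare, because the monopoly's output
# restriction was partially offsetting the unpriced pollution.
```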

Williams ultimately doubts that systematic thought is at all capable of dealing with the myriad complexities of political (and moral) life. He describes utilitarianism as “having too few thoughts and feelings to match the world as it really is”.

I disagree. Utilitarianism is hard, certainly. We do not agree on what happiness is, or how to determine which actions will most likely bring it, fine. Much of this comes from our messy inbuilt intuitions, intuitions that are not suited for the world as it now is. If utilitarianism is simple minded, surely every other moral system (or lack of system) must be as well.

In many ways, Williams did shake my faith in utilitarianism – making this an effective and worthwhile essay. He taught me to be fearful of eliminating from consideration all joys but those that the utilitarian can track. He drove me to question how one can advocate for any ethical system at all, denied the twin crutches of rationalism and theology. And he further shook my faith in individuals being able to do most aspects of the utilitarian moral calculus. I think I’ll have more to say on that last point in the future.

But by their actions you shall know the righteous. Utilitarians are currently at the forefront of global poverty reduction, disease eradication, animal suffering alleviation, and existential risk mitigation. What complexities of the world has every other ethical system missed to leave these critical tasks largely to utilitarians?

Williams gave me no answer to this. For all his beliefs that utilitarianism will have dire consequences when implemented, he has no proof to hand. And ultimately, consequences are what you need to convince a consequentialist.

Ethics, Literature, Philosophy

Book Review: Utilitarianism for and against (Part 1)

Utilitarianism for and against is an interesting little book. It consists of back-to-back ~70-page essays, one in favour of utilitarianism and one opposed. As an overview, it’s hard to beat something like this. You don’t have to rely on one scholar to give you her (ostensibly fair and balanced) opinion; you get two articulate philosophers arguing their side as best they can. Fair and balanced is by necessity left as an exercise to the reader (honestly, it always is; here at least it’s explicit).

I’m going to cover the “for” side first. The “against” side will be in a later blog post. Both reviews are going to assume that you have some understanding of utilitarianism. If you don’t, go read my primer. Or be prepared to Google. I should also mention that I have no aspirations of being balanced myself. I’m a utilitarian; I had much more to disagree with on the “against” side than on the “for” side.

Professor J.J.C. Smart makes the arguments in favour of utilitarianism. According to his Wikipedia entry, he was known for “outsmarting” his opponents, that is to say, accepting the conclusions of their reductio ad absurdum arguments with nary a shrug. He was, I’ve gathered, not one for moral intuitions. His criticism of rule utilitarianism played a role in its decline and he was influential in raising the next crop of Australian utilitarians, among whom Peter Singer is counted. As near as I can tell, he was one of the more notable defenders of utilitarianism when this volume was published in 1971 (although much of his essay dates back a decade earlier).

Smart is emphatically not a rationalist (in the philosophical sense); he writes no “proof of utilitarianism” and denies that such a proof is even possible. Instead, Smart restricts himself to explaining how utilitarianism is an attractive ethical system for anyone possessed of general benevolence. Well, I’ll say “everyone”. The authors of this volume seem to be labouring under the delusion that only men have ethical dilemmas or the need for ethical systems. Neither one of them manages the ethicist’s coup of realizing that women might be viewed as full people at the remove of half a century from their time of writing (such a coup would perhaps have been strong evidence of the superiority of one philosophy over another).

A lot of Smart’s essay consists of showing how various different types of utilitarianism are all the same under the hood. I’ve termed these “collapses”, although “isomorphisms” might be a better term. There are six collapses in all.

The very first collapse put me to mind of the famous adage about ducks. If it walks like a duck, swims like a duck, and quacks like a duck, it is a duck. By the same token, if someone acts exactly how a utilitarian in their position and with their information would act, then it doesn’t matter if they are a utilitarian or not. From the point of view of an ethical system that cares only about consequences they may as well be.

The next collapse deals with rule utilitarianism and may have a lot to do with its philosophical collapse. Smart points out that if you are avoiding “rule worship”, then you will face a quandary when you could break a rule in such a way as to gain more utility. Rule utilitarians sometimes claim that you just need rules with lots of exceptions and special cases. Smart points out that if you carry this through to its logical conclusion, you really are only left with one rule, the meta-rule of “maximize expected utility”. In this way, rule utilitarianism collapses into act utilitarianism.

Next into the compactor is the difference between ideal and hedonic utilitarians. Briefly, ideal utilitarians hold that some states of mind are inherently valuable (in a utilitarian sense), even if they aren’t particularly pleasant from the inside. “Better Socrates dissatisfied than a fool satisfied” is the rallying cry of ideal utilitarians. Hedonic utilitarians have no terminal values beyond happiness; they would gladly let almost the entirety of the human race wirehead.

Smart claims that while these differences are philosophically large, they are practically much less meaningful. Here Smart introduces the idea of the fecundity of a pleasure. A doctor taking joy (or grim satisfaction) in saving a life is a much more fecund pleasure than a gambler’s excitement at a good throw, because it brings about greater joy once you take into account everyone around the actor. Many of the other pleasures (like writing or other intellectual pursuits) that ideal utilitarians value are similarly fecund. They either lead to abatement of suffering (the intellectual pursuits of scientists) or to many people’s pleasure (the labour of the poet). Taking into account fecundity, it was better for Smart to write this essay than to wirehead himself, because many other people – like me – get to enjoy his writing and have fun thinking over the thorny issues he raises.
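Fecundity is just a reminder that the utilitarian sum runs over everyone affected, not only the actor. A crude sketch, with numbers invented for illustration:

```python
# Equal personal pleasures, very unequal totals once knock-on effects count.
def total_utility(actor_pleasure, effects_on_others):
    return actor_pleasure + sum(effects_on_others)

doctor  = total_utility(10, [80, 80])  # assumed: two patients made much happier
gambler = total_utility(10, [-5])      # assumed: the thrill costs a counterparty
print(doctor, gambler)                 # 170 vs 5
```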

Smart could have stood to examine at greater length just why ideal utilitarians value the things they do. I think there’s a decent case to be made that societies figure out ways to value certain (likely fecund) pleasures all on their own, no philosophers required. It is not, I think, that ideal utilitarians have stumbled onto certain higher pleasures that they should coax their societies into valuing. Instead, their societies have inculcated them with a set of valued activities, which, due to cultural evolution, happen to line up well with fecund pleasures. This is why it feels difficult to argue with the list of pleasures ideal utilitarians proffer; it’s not that they’ve stumbled onto deep philosophical truths via reason alone, it’s that we have the same inculcations they do.

Beyond simple fecundity though, there is the fact that the choice between Socrates dissatisfied and a fool satisfied rarely comes up. Smart has a great line about this:

But even the most avid television addict probably enjoys solving practical problems connected with his car, his furniture, or his garden. However unintellectual he might be, he would certainly resist the suggestion that he should, if it were possible, change places with a contented sheep, or even a happy and lively dog.

This boils down to: ‘ideal utilitarians assume they’re a lot better than everyone else, what with their “philosophical pursuits”, but most people don’t want purely mindless pleasures’. Combined, these ideas of fecundity and hidden depths point to a vanishingly small gap between ideal and hedonistic utilitarians, especially compared to the gap between utilitarians and practitioners of other ethical systems.

After dealing with questions of how highly we should weigh some pleasures, Smart turns to address the idea of some pleasures not counting at all. Take, for example, the pleasure that a sadist takes in torturing a victim. Should we count this pleasure in our utilitarian moral calculus? Smart says yes, for reasons that again boil down to “certain pleasures being viewed as bad are an artifact of culture; no pleasure is intrinsically bad.”

(Note however that this isn’t the same thing as Smart condoning the torture. He would say that the torture is wrong because the pleasure the sadist gains from it cannot make up for the distress of the victim. Given that no one has ever found a real live utility monster, this seems a safe position to take.)

In service of this, Smart presents a thought experiment. Imagine a barren universe inhabited by a single sentient being. This sentient being wrongly believes that there are many other inhabitants of the universe being gruesomely tortured and takes great pleasure in this thought. Would the universe be better if the being didn’t derive pleasure from her misapprehension?

The answer here for both Smart and me is no (although I suspect many might disagree with us). Smart reasons (almost tautologically) that since there is no one for this being to hurt, her predilection for torture can’t hurt anyone. We are rightfully wary of people who unselfconsciously enjoy the thought of innocents being tortured because of what it says about what their hobbies might be. But if they cannot hurt anyone, their obsession is literally harmless. This bleak world would not be better served by its single sentient inhabitant quailing at the thought of the imaginary torture.

Of course, there’s a wide gap between the inhabitant curled up in a ball mourning the torture she wrongly believes to be ongoing and her simple ambivalence to it. It seems plausible that many people could consider her ambivalence preferable, even if they did not wish her to be sad. But imagine then the difference being between her lonely and bored and her satisfied and happy (leaving aside for a moment the torture). It is clear here which is the better universe. Given a way to move from the universe with a single bored being to the one with a single fulfilled being, shouldn’t we take it, given that the shift most literally harms no one?

This brings us to the distinction between intrinsically bad pleasures and extrinsically bad pleasures – the flip side of the intrinsically more valuable states of mind of the ideal utilitarian. Intrinsically bad pleasures are pleasures that for some rationalist or metaphysical reason are just wrong. Their rightness or wrongness must of course be vulnerable to attacks on the underlying logic or theology, but I can hardly embark on a survey of common objections to all the common underpinnings; I haven’t the time. But many people have undertaken those critiques and many will in the future, making a belief in intrinsically bad pleasures a most unstable place to stand.

Extrinsically bad pleasures seem like a much safer proposition (and much more convenient to the utilitarian who wishes to keep their ethical system free of meta-physical or meta-ethical baggage). To say that a pleasure is extrinsically bad is simply to say that to enjoy it causes so much misery that it will practically never be moral to experience it. Similar to how I described ideal utilitarian values as heavily culturally influenced, I can’t help but feel that seeing some pleasures as intrinsically bad has to be the result of some cultural conditioning.

If we can accept that certain pleasures are not intrinsically good or ill, but that many pleasures thought of as intrinsically good or ill are thought so because of long cultural experience – positive or negative – with the consequences of seeking them out, then the position of utilitarians who believe that some pleasures cannot be counted in the plus column should collapse to approximately that of those who hold that they can, even if neither accepts the position of the other. The utilitarian who refuses to believe in intrinsically bad pleasures should still condemn most of the same actions as one who does, because she knows that these pleasures will be outweighed by the pains they inflict on others (like the pain of the torture victim overwhelming the joy of the torturer).

There is a further advantage to holding that pleasures cannot be intrinsically wrong. If we accept the post-modernist adage that knowledge is created culturally, we will remember to be skeptical of the universality of our knowledge. That is to say, if you hold a list of intrinsically bad pleasures, it will probably not be an exhaustive list and there may be pleasures whose ill-effects you overlook because you are culturally conditioned to overlook them. A more thoughtful utilitarian who doesn’t take the short-cut of deeming some pleasures intrinsically bad can catch these consequences and correctly advocate against these ultimately wrong actions.

The penultimate collapse is perhaps the least well supported by arguments. In a scant page, Smart addresses the differences between total and average happiness in a most unsatisfactory fashion. He asks which of two universes you might prefer: one with one million happy, healthy people, or one with twice as many people, equally happy and healthy. Both Smart and I feel drawn to the larger universe, but he has no arguments for people who prefer the smaller. Smart skips over the difficulties here with an airy statement of “often the best way to increase the average happiness is to increase the total happiness and vice versa”.

I’m not entirely sure this statement is true. How would one go about proving it?

Certainly, average happiness seems to miss out on the (to me) obvious good that you’d get if you could have twice as many happy people (which is clearly one case where they give different answers), but like Smart, I have trouble coming up with a persuasive argument why that is obviously good.

I do have one important thing of my own to say about the difference between average and total happiness. When I imagine a world with more people who are on average less happy than the people who currently exist (but who collectively experience a greater total happiness), I feel an internal flinch.

Unfortunately for my moral intuitions, I feel the exact same flinch when I imagine a world with many fewer people, who are on average transcendentally happy. We can fiddle with the math to make this scenario come out to have greater average and total happiness than the current world. Doesn’t matter. Exact same flinch.
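Here is one such fiddling, with populations and happiness levels invented purely for illustration:

```python
# Invented numbers: both alternative worlds trigger the same flinch.
worlds = {
    "current world":      {"population": 7.5e9, "avg_happiness": 10},
    "bigger, less happy": {"population": 40e9,  "avg_happiness": 4},
    "smaller, blissful":  {"population": 1e9,   "avg_happiness": 100},
}

for name, w in worlds.items():
    total = w["population"] * w["avg_happiness"]
    print(f"{name}: total = {total:.1e}, average = {w['avg_happiness']}")
# The bigger world beats the current one on total happiness (1.6e11 vs
# 7.5e10) while losing on average; the smaller world beats it on both.
```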

This leads me to believe that my moral intuitions have a strong status quo bias. The presence of a status quo bias in itself isn’t an argument for either total or average utilitarianism, but it is a reminder to be intensely skeptical of our response to thought experiments that involve changing the status quo and even to be wary of the order that options are presented in.

The final collapse Smart introduces is that between regular utilitarians and negative utilitarians. Negative utilitarians believe that only suffering is morally relevant and that the most important moral actions are those that have the consequence of reducing suffering. Smart points out that you can raise both the total and average happiness of a population by reducing suffering and furthermore that there is widespread agreement on what reduces suffering. So Smart expects utilitarians of all kinds (including negative) to primarily focus on reducing suffering anyway. Basically, despite the profound philosophical differences between regular and negative utilitarians, we should expect them to behave equivalently. Which, by the very first collapse (if it walks like a duck…), shows that we can treat them as philosophical equivalents, at least in the present world.

In my experience, this is more or less true. Many of the negative utilitarians I am aware of mainly exercise their ethics by donating 10% of their income to GiveWell’s most effective charities. The regular utilitarians… do the exact same. Quack.

As far as I can tell, Smart goes to all this trouble to show how many forms of utilitarianism collapse together so that he can present a system that isn’t at war with itself. Being able to portray utilitarianism as a simple, unified system (despite the many ways of doing it) heads off many simple criticisms.

While I doubt many people have avoided utilitarianism because of lingering questions about total versus average happiness per se, these little things add up. Saying “yes, there are a bunch of little implementation details that aren’t agreed upon” is a bad start to an ethical system, unless you can immediately follow it up with “but here are fifty pages on why that doesn’t matter and you can just do what comes naturally to you (under the aegis of utilitarianism)”.

Let’s talk a bit about what comes naturally to people outside the context of different forms of utilitarianism. No one, not even Smart, sits down and does utilitarian calculus before making every little decision. For most tasks, we can ignore the ethical considerations (e.g. there is broad, though probably not universal, agreement that there are no hidden moral dimensions to opening a door). For some others, our instincts are good enough. Should you thank the woman at the grocery store checkout? You probably will automatically, without pausing to consider whether it will increase the total (or average) happiness of the world.

As in the case of thanking random service industry workers, there are a variety of cases where we actually have pretty good rules of thumb. These rules of thumb serve two purposes. First, they allow us to avoid spending all of our time contemplating whether our actions are right or wrong, freeing us to actually act. Second, they protect us from doing bad things out of pettiness or venality. If you have a strong rule of thumb that violence is an inappropriate response to speech you disagree with, you’re less likely to talk yourself into punching an odious speaker in the face when confronted with one.

It’s obviously important to pick the right heuristics. You want to pick the ones that most often lead towards the right outcomes.

I say “heuristics” and “rules of thumb” because the thing about utilitarians and rules is that they always have to be prepared to break them. Rules exist for the common cases. Utilitarians have to be on guard for the uncommon cases, the ones where breaking a rule leads to greater good overall. Having a “don’t cause people to die” rule is all well and good. But you need to be prepared to break it if you can only stop mass death from a runaway trolley by pushing an appropriately sized person in front of it.

Smart seems to think that utilitarianism only comes up for deliberative actions, the ones you take the time to think about, and that it shouldn’t necessarily cover your habits. This seems like an abdication to me. Shouldn’t a clever utilitarian, realizing that she only uses utilitarianism for big decisions, spend some time training her reflexes to more often give the correct utilitarian response, while also training herself to be more careful of her rules of thumb and to think ethically more often? Smart gave no indication that he thinks this is the case.

The discussion of rules gives Smart the opportunity to introduce a utilitarian vocabulary. An action is right if it is the one that maximizes expected happiness (crucially, this is a summation across many probabilities and isn’t necessarily the action that will maximize the chance of the happiest outcome) and wrong otherwise. An action is rational if a logical being in possession of all the information you possess would think you to be right if you did it. All other actions are irrational. A rule of thumb, disposition, or action is good if it tends to lead to the right outcomes and bad if it tends to lead to the wrong ones.
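In symbols (my formalization – Smart works in prose): the right action is the one that maximizes expected utility over all possible outcomes,

$$a^{*} \;=\; \operatorname*{arg\,max}_{a} \; \sum_{o} P(o \mid a)\, U(o),$$

not the one that maximizes the probability of the single happiest outcome. A guaranteed utility of 60 therefore beats a coin flip between 100 and 0 (expected utility 50), even though the coin flip gives the better chance at the happiest result.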

This vocabulary becomes important when Smart talks about praise, which he believes is an important utilitarian concern in its own right. Praise increases people’s propensity towards certain actions or dispositions, so Smart believes a utilitarian ought to consider whether the world would be better served by more of the same before she praises anything. This leads Smart to suggest that utilitarians should praise actions that are good or rational even when they aren’t right.

It also implies that utilitarians doing the right thing must be open to criticism if it requires bad actions. One example Smart gives is a utilitarian Frenchman cheating on wartime rationing in 1940s England. The Frenchman knows that the Brits are too patriotic to cheat, so his action (and the actions of the few others who cheat) will probably fall below the threshold for causing any real harm, while making him (and the other cheaters) happier. The calculus comes out positive and the Frenchman believes it to be the right action. Smart acknowledges that this logic is correct, but points out that, by similar logic, the Frenchman should agree that he must be severely punished if caught, so as to discourage others from doing the same thing.

This actually reminds me of something Hannah Arendt brushed up against in Eichmann in Jerusalem while talking about how the moral constraints on people differ from those on states. She gives the example of Soghomon Tehlirian, the Armenian exile who assassinated one of the triumvirate of Turkish generals responsible for the Armenian genocide. Arendt believes that it would have been wrong for an Armenian government to assassinate the general (had one even existed at the time), but that it was right for a private citizen to do the deed, especially given that Tehlirian did not seek to hide his crime or resist arrest.

From a utilitarian point of view, the argument would go something like this: political assassinations are bad, in that they tend to cause upheaval, chaos, and ultimately suffering. On the other hand, there are some leaders who the world would clearly be better off without, if not to stop their ill deeds in their tracks, then to strike fear and moderation into the hearts of similar leaders.

Were the government of any country to carry out these assassinations, it would undermine its own ability to police murder. But when a private individual does the deed and then immediately gives herself up into the waiting arms of justice, the utility of the world is increased. If she has erred in picking her target and no one finds the assassination justified, then she will be promptly punished, disincentivizing copy-cats. If instead, like Tehlirian, she is found not guilty, it will only be because the crimes committed by the leader she assassinated were so brutal and clear that no reasonable person could countenance them. This too sends a signal.

That said, I think Smart takes his distinctions between right and good a bit too far. He cautions against trying to change the non-utilitarian morality of anyone who already tends towards good actions, because this might fail half-way, weakening their morality without instilling a new one. Likewise, he is skeptical of any attempt to change the traditions of a society.

This feels too much like trying to have your cake and eat it too. Utilitarianism can be criticized because it is an evangelical ethical system that gives results far from moral intuitions in some cases. From a utilitarian point of view, it is fairly clearly good to have more utilitarians willing to hoover up these counter-intuitive sources of utility. If all you care about are the ends, you want more people to care about the best ends!

If the best way to achieve utilitarian ends isn’t through utilitarianism, then we’re left with a self-defeating moral system. In trying to defend utilitarianism from the weak critique that it is pushy and evangelical – in ways that are repugnant to anyone who engages in cultural or individual ethical relativism and in ways that are repugnant to some moral intuitions – Smart opens it up to the much stronger critique that it is incoherent!

Smart by turns seeks to rescue some commonly held moral truths when they conflict with utilitarianism, while rejecting others that seem no less contradictory. I can hardly say that he seems keen to show utilitarianism is in fact in harmony with how people normally act – he clearly isn’t. But he also doesn’t always go all (or even part of) the way in choosing utilitarianism over moral intuitions.

Near the end of the book, when talking about a thought experiment introduced by one McCloskey, Smart admits that the only utilitarian action is to frame and execute an innocent man, thereby preventing a riot. McCloskey anticipated him, saying: “But as far as I know, only J.J.C. Smart among the contemporary utilitarians is happy to adopt this ‘solution’”.

Smart responds:

Here I must lodge a mild protest. McCloskey’s use of the word ‘happy’ surely makes me look a most reprehensible person. Even in my most utilitarian moods, I am not happy about this consequence of utilitarianism… since any injustice causes misery and so can be justified only as the lesser of two evils, the fewer the situations in which the utilitarian is forced to choose the lesser of two evils, the better he will be pleased.

This is also the man who said (much as I have) that “admittedly utilitarianism does have consequences which are incompatible with the common moral consciousness, but I tended to take the view ‘so much the worse for the common moral consciousness’.”

All this leaves me baffled. Why the strange mixture? Sometimes Smart goes far further than it seems any of his contemporaries would have. Other times, he stops short of what seems to me the truly utilitarian solution.

On the criticism that utilitarianism compels us to constant moral action, leaving us no time to relax, he offers two responses. The first is that perhaps people are too unwilling to act and would be better served by being spurred on more. The second is that it may be that relaxing today allows us to do ten times the good tomorrow.

(Personally, I expect the answer is both. Many people could do more than they currently do, while many others risk burnout unless they relax more. There is a reason the law of equal and opposite advice exists. Different people need to hear different things.)

But put his responses here and his support for rules of thumb on one side, and his support for executing the innocent man, or his long spiel on how a bunch of people wireheading wouldn’t be that bad (a spiel that convinced me, I might add), on the other, and I’m left with an unclear overall picture. As an all-is-fine defence of utilitarianism, it doesn’t go far enough. As a bracing lecture about our degenerate non-utilitarian ways, it also doesn’t go far enough.

Leaving, I suppose, the sincere views of a man who pondered utilitarianism for much longer than I have. Chance is the only explanation that makes sense to me: sometimes Smart gives a nod to traditional morality because he’s decided it aligns with his utilitarian ethics; other times, he disagrees, at length. Maybe Smart is a man seeking to rescue what precious moral truths he can from the house fire that is utilitarianism.

Perhaps some of my confusion comes from another confusion, one that seems to have subtly infected many utilitarians. Smart is careful to point out that the atomic belief underlying utilitarianism is general benevolence. Benevolence, note, is not altruism. The individual utilitarian matters just as much – or as little – as everyone else. Utilitarians in Smart’s framework have no obligation to run themselves ragged for another. Trading your happiness for another’s will only ever be an ethically neutral act to the utilitarian.

Or, I suspect, the wrong one. You are best placed to know yourself and best placed to create happiness for yourself. It makes sense to include some sort of bias towards your own happiness to take this into account. Or, if this feels icky to you, you could handle it at the level of probabilities. You are more likely to make yourself happy than someone else (assuming you’ve put some effort towards understanding what makes you happy). If you are 80% likely to make yourself happy for an evening and 60% likely to make someone else happy, your clear utilitarian duty is to yourself.
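Spelling out the arithmetic (and assuming, as I glossed over, that the same amount of happiness h is at stake either way):

$$E[\text{help yourself}] = 0.8\,h \;>\; 0.6\,h = E[\text{help them}].$$

If the other person stands to gain much more than you do, the comparison becomes 0.8 times your stake against 0.6 times theirs, and the duty can flip.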

This is not a suggestion to go become a hermit. Social interactions are very rarely as zero sum as all that. It might be that the best way to make yourself happy is to go help a friend. Or to go to a party with several people you know. But I have seen people risk burnout (and have risked it myself) by assuming it is wrong to take any time for themselves when they have friends in need.

These are all my own thoughts, not Smart’s. For all of his talk of utilitarianism, he offers little advice on how to make it a practically useful system. All too often, Smart retreats to the idea of measuring the total utility of a society or world. This presents a host of problems and raises two important questions.

First, can utility be accurately quantified? Smart tries to show that different ways of measuring utility should be roughly equivalent in qualitative terms, but it is unclear whether this holds at a quantitative level. Stability analysis (where you check how sensitive your result is to your starting assumptions) is an important tool for verifying conclusions in engineering projects. I have a hunch that, quantitatively, utilitarian answers to many problems will be highly unstable across the various forms of utilitarianism.
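Here is a sketch of what I mean – a toy model in Python with numbers invented purely for illustration, not anything from Smart – showing how the same pair of policies can be ranked differently depending on which form of utilitarianism you plug in.

```python
# Toy stability analysis: do different forms of utilitarianism,
# applied to the same scenario, rank two policies the same way?
# All numbers are invented for illustration.

# Each policy is a list of per-person happiness scores;
# negative values represent suffering.
policy_a = [9, 9, 9, -3]  # prosperous majority, one person suffering
policy_b = [7, 7]         # fewer people, modestly happy, no suffering

def total(pop):
    return sum(pop)

def average(pop):
    return sum(pop) / len(pop)

def negative(pop):
    # Negative utilitarianism: only suffering counts, and less of it
    # is better, so the score is minus the total suffering.
    return -sum(-h for h in pop if h < 0)

for name, rule in [("total", total), ("average", average), ("negative", negative)]:
    a, b = rule(policy_a), rule(policy_b)
    winner = "A" if a > b else "B" if b > a else "tie"
    print(f"{name:>8}: A={a:.1f}, B={b:.1f} -> prefers {winner}")
```

In this toy example, total utilitarianism prefers policy A while average and negative utilitarianism prefer policy B – exactly the kind of quantitative disagreement that the qualitative collapses don’t rule out.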

Second, how should we deal with utility in the future? Smart claims that beyond a certain point we can ignore side effects (as unintended good side effects should cancel out unintended ill side effects; this is especially important when it comes to things like saving lives) but that doesn’t give us any advice on how we can estimate effects.

We are perhaps saved here by the same collapse that aligned normal utilitarians with negative utilitarians. If we cannot quantify joy, we can sure quantify misery. Doctors can tell you just how much quality of life a disease saps (there are tables for this), not to mention the chance that a disease ends a life outright. We know the rates of absolute poverty, maternal death, and malaria prevalence. There is more than enough misery in the world to go around, and utilitarians who focus on ending misery do not seem to be at risk of running out of ethical duties any time in the near future.

(If ending misery is important to you, might I suggest donating a fraction of your monthly income to one of GiveWell’s top recommended charities? These are the charities that most effectively use money to reduce suffering. If you care about maximizing your impact, GiveWell is a good way to do it.)

Although, speaking of the future, I find it striking how little utilitarianism has changed in the fifty-six years since Smart first wrote his essay. He pauses to comment on the risk of a recursively self-improving AI and to talk about potential future moral battles over factory farming. I’m part of a utilitarian meme group, and these are the same topics people joke about every day. It is unclear whether these are topics that utilitarianism predisposes people to care about, or whether there was some indirect cultural transmission of these concerns over the intervening years.

There are many more gems – and frustrations – in Smart’s essay. I can’t cover them all without writing a pale imitation of his words, so I shan’t try any more. As an introduction to the different types of utilitarianism, this essay was better than any other I’ve read, especially because it shows all of the ways the various utilitarian systems fit together.

As a defense of utilitarianism, it is comprehensive and pragmatic. It doesn’t seek to please everyone and doesn’t seek to prove utilitarianism. It lays out the advantages of utilitarianism clearly, in plain language, and shows how the disadvantages are not as great as might be imagined. I can see it being persuasive to anyone considering utilitarianism, although in this it is hampered by its position as the first essay in the collection. Anyone convinced by it must then read through another seventy pages of arguments against utilitarianism, which will perhaps leave them rather less convinced.

As a work of academic philosophy, it’s interesting. There’s almost no metaethics or metaphysics here. This is a defense written entirely on its own terms, without recourse to underlying frameworks that might be separately undermined. Smart’s insistence on laying out his arguments plainly leaves him little room to retreat (except around average versus total happiness). I’ve always found this a useful type of writing; even when I don’t agree, the ways in which I disagree with clearly articulated theses can be illuminating.

It’s a pleasant read. I’ve had mostly good luck reading academic philosophy. This book wasn’t a struggle to wade through, and it contained the occasional amusing turn of phrase. Smart is neither a dry lecturer nor a frothing polemicist. One is put almost in mind of a kindly uncle, patiently explaining his way through a complex, but not needlessly complicated, subject. I highly recommend reading it and its companion.