Ethics, Philosophy

Utilitarian Virtue Ethics

[4 minute read]

The nagging question that both halves of Utilitarianism for and against left me with is: “can utilitarianism exist without veering off into total assessment?”

Total assessment is the direct comparison of all the consequences of different actions. It is not so much a prediction that an individual can make as it is the province of an omniscient god. If you cannot perfectly predict all of the future, you cannot perform a total assessment. It’s conceptually useful – whenever a utilitarian is backed into a corner, they can fall back on total assessment as their decision-making tool – but it’s practically useless.

Absent total assessment, utilitarians kind of have to make their best guess and go with it. Even my beloved precedent utilitarianism isn’t much help here; precedent utilitarianism focuses on a class of consequences that traditional utilitarianism can miss. It does little to help an individual figure out all of the consequences of their actions.

If it is hard to guess the outcomes of actions, or if this guessing is prohibitively time-consuming, what is the utilitarian to do? One appealing option is a distinctly utilitarian virtue ethics. This virtue ethics would define a good life as one lived with the virtues that cause you to make optimific decisions.

I think it is possible for such a system to maintain a distinctly utilitarian character and thereby avoid Williams’ prediction that utilitarianism must, if accepted, “usher itself from the scene.”

The first distinct characteristic of a utilitarian virtue ethics would be its heterogeneity. Classical virtue ethics holds that there is a set of virtues that can cause one to live a good life. The utilitarian would instead seek to cultivate the virtues that would cause her to act in an optimific way. These would necessarily be individualized; it may very well be optimific for an ambitious and clever utilitarian to cultivate greed and drive while acquiring a fortune, then cultivate charity while giving it away (see Bill Gates).

There is the obvious danger here that cultivating temporarily anti-utilitarian virtues could lead to permanent values drift. The best countermeasure against this would be a varied community of utilitarians, who would cultivate a variety of virtues and help bind each other to the shared utilitarian cause, helping whenever expediency threatens to pull one away from it.

Next, a utilitarian virtue ethics would treat no virtue as sacred. Honesty, charity, kindness, and bravery – all of these must be conditional on the best outcome. Because the best outcome is hard to determine, they might be good rules of thumb, but the utilitarian must always be prepared to break a moral rule if there is more utility to be had.

Third, the utilitarian would seek to avoid cognitive biases and learn to make decisions quickly. Avoiding cognitive biases increases the chance that rules of thumb will be broken out of genuine utilitarian concern, rather than thinly veiled self-interest. Learning to make decisions quickly helps avoid wasting time pondering “what is the right thing to do?”

While the traditional virtue ethicist might read the works of the great classical philosophers to better understand virtue, a utilitarian virtue ethicist would focus on learning Fermi estimation, Bayesian statistics, and the works of Daniel Kahneman.
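To give a flavour of the difference in toolkit (with all numbers invented purely for illustration), here is the sort of quick Bayesian update our utilitarian virtue ethicist would want to be fluent in:

```python
# A minimal Bayesian update, the bread and butter of the toolkit
# described above. All numbers are invented for illustration.

prior = 0.30            # P(charity intervention works), before evidence
p_pos_if_works = 0.80   # P(positive study | it works)
p_pos_if_not = 0.20     # P(positive study | it doesn't work)

# Bayes' rule: P(works | positive study)
posterior = (p_pos_if_works * prior) / (
    p_pos_if_works * prior + p_pos_if_not * (1 - prior)
)
print(f"P(works | positive study) = {posterior:.2f}")  # ~0.63
```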

The easiest ways for a utilitarian to fail to treat the world as it really is are ignoring the things they cannot measure and ignoring truths they find personally uncomfortable. We did not evolve for clear thinking and there is always the risk that we will get ourselves turned around, substituting what is best for us for what is best for the world.

One hang-up I have with this idea is that I just described a bunch of my friends in the rationality and effective altruism communities. How likely is it that this is merely self-serving, instead of the natural endpoint of all of the utilitarian philosophy I’ve been reading?

On one hand, this is a community of utilitarians who are similar to me, so convergence in outputs given the same inputs is more or less expected.

On the other, this could be a classic example of seeing the world as I wish it were, rather than as it is. “Go hang out with people you already like, doing the things you were already going to do” isn’t much of an ethical ask. Given that the world is in a dire state, utilitarians should be sceptical of any conclusion that their ethical system won’t require much from them.

There could be other problems with this proposal, but I’m not sure that I’m the type of person who could see them. For now, this represents my best attempt to reconcile my utilitarian ethics with the realities of the modern world. But I will be careful. Ease is ever seductive.

Ethics, Literature, Philosophy

Book Review: Utilitarianism for and against (Part 2)

[33 minute read]

Three weeks ago, I reviewed the first half of Utilitarianism for and against. This week I’ll be reviewing the second half, the against side. I should note that I’m a utilitarian and therefore likely to be biased against the arguments presented here. If my criticism is rather thicker than last week, it is not because the author of the second essay is any worse than the first.

The author is one Sir Bernard Williams. According to his Wikipedia entry, he was a particularly humanistic philosopher in the old Greek mode. He was skeptical of attempts to build an analytical foundation for moral philosophy and of his own prowess in arguments. It seems that he had something pithy or cutting to say about everything, which made him notably cautious of pithy or clever answers. He’s also described as a proto-feminist, although you wouldn’t know it from his writing.

Williams didn’t write his essay out of a rationalist desire to disprove utilitarianism with pure reason (a concept he seemed every bit as sceptical of as Smart was). Instead, Williams wrote this essay because he agrees with Smart that utilitarianism is a “distinctive way of looking at human action and morality”. It’s just that unlike Smart, Williams finds the specific distinctive perspective of utilitarianism often horrible.

Smart anticipated this sort of reaction to his essay. He himself despaired of finding a single ethical system that could please everyone, or even please a single person in all their varied moods.

One of the very first things I noticed in Williams’ essay was the challenge of attacking utilitarianism on its own terms. To convince a principled utilitarian that utilitarianism is a poor choice of ethical system, it is almost always necessary to appeal to the consequences of utilitarianism. This forces any critic to frame their arguments a certain way, a way which might feel unnatural. Or repugnant.

Williams begins his essay proper with (appropriately) a discussion of consequences. He points out that it is difficult to hold actions as valuable purely by their consequences because this forces us to draw arbitrary lines in time and declare the state of the world at that time the “consequences”. After all, consequences continue to unfold forever (or at least, until the heat death of the universe). To have anything to talk about at all, Williams decides that it is not quite consequences that consequentialism cares about, but states of affairs.

Utilitarianism is the form of consequentialism that has happiness as its sole important value and seeks to bring about the state of affairs with the most happiness. I like how Williams unpicked the question-begging that utilitarianism commonly engages in. He essentially asks ‘why should happiness be the only thing we treat as intrinsically valuable?’ Williams mercifully didn’t drive this home, but I was still left with uncomfortable questions for myself.

Instead he moves on to his first deep observation. You see, if consequentialism were just about valuing certain states of affairs more than others, you could call deontology a form of consequentialism that held that duty was the only intrinsically valuable thing. But that can’t be right, because deontology is clearly different from consequentialism. The distinction, Williams suggests, is that consequentialists discount the possibility of actions holding any inherent moral weight. For a consequentialist, an action is right because it brings about a better state of affairs. For non-consequentialists, a state of affairs can be better – even if it contains less total happiness or integrity or whatever they care about than a counterfactual state of affairs given a different action – because the right action was taken.

A deontologist would say that it is right for someone to do their duty in a way that ends up publicly and spectacularly tragic, such that it turns a thousand people off of doing their own duty. A consequentialist who viewed duty as important for the general moral health of society – who, in Smart’s terminology, viewed acting from duty as good – would disagree.

Williams points out that this very emphasis on comparing states of affairs (so natural to me) is particularly consequentialist and utilitarian. That is to say, it is not particularly meaningful for a deontologist or a virtue ethicist to compare states of affairs. Deontologists have no duty to maximize the doing of duty; if you ask a deontologist to choose between a state of affairs that has one hundred people doing their duty and another that has a thousand, it’s not clear that either state is preferable from their point of view. Sure, deontologists think people should do their duty. But duty embodied in actions is the point, not some cosmic tally of duty.

Put as a moral statement, non-consequentialists lack any obligation to bring about more of what they see as morally desirable. A consequentialist may feel both fondness for and a moral imperative to bring about a universe where more people are happy. Non-consequentialists only have the fondness.

One deontologist of my acquaintance said that trying to maximize utility felt pointless – they viewed it as about as morally important as having a high score in a Tetris game. We ended up staring at each other in blank incomprehension.

In Williams’ view, rejection of consequentialism doesn’t necessarily lead to deontology, though. He sums it up simply as: “all that is involved… in the denial of consequentialism, is that with respect to some type of action, there are some situations in which that would be the right thing to do, even though the state of affairs produced by one’s doing that would be worse than some other state of affairs accessible to one.”

A deontologist will claim right actions must be taken no matter the consequences, but to be non-consequentialist, an ethical system merely has to claim that some actions are right despite a variety of more or less bad consequences that might arise from them.

Or, as I wrote angrily in the margins: “ok, so not necessarily deontology, just accepting sub-maximal global utility“. It is hard to explain to a non-utilitarian just how much this bugs me, but I’m not going to go all rationalist and claim that I have a good reason for this belief.

Williams then turns his attention to the ways in which he thinks utilitarianism’s insistence on quantifying and comparing everything is terrible. Williams believes that by refusing to categorically rule any action out (or worse, specifically trying to come up with situations in which we might do horrific things), utilitarianism encourages people – even non-utilitarians who bump into utilitarian thought experiments – to think of things in utilitarian (that is to say, explicitly comparative) terms. It seems like Williams would prefer there to be actions that are clearly ruled out, not just less likely to be justified.

I get the impression of a man almost tearing out his hair because for him, there exist actions that are wrong under all circumstances and here we are, talking about circumstances in which we’d do them. There’s a kernel of truth here too. I think there can be a sort of bravado in accepting utilitarian conclusions. Yeah, I’m tough enough that I’d kill one to save one thousand. You wouldn’t? I guess you’re just soft and old-fashioned. For someone who cares as much about virtue as I think Williams does, this must be abhorrent.

I loved how Williams summed this up.

The demand… to think the unthinkable is not an unquestionable demand of rationality, set against a cowardly or inert refusal to follow out one’s moral thoughts. Rationality he sees as a demand not merely on him, but on the situations in and about which he has to think; unless the environment reveals minimum sanity, it is insanity to carry the decorum of sanity into it.

For all that I enjoyed the phrasing, I don’t see how this changes anything; there is nothing at all sane about the current world. A life is worth something like $7 million to $9 million and yet can be saved for less than $5000. This planet contains some of the most wrenching poverty and lavish luxury imaginable, often in the very same cities. Where is the sanity? If Williams thinks sane situations are a reasonable precondition to sane action, then he should see no one on earth with a duty to act sanely.
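The arithmetic of this insanity fits in a few lines (using the rough figures just quoted):

```python
# The gap between what we say a life is worth and what it costs to
# save one, using the rough figures quoted above.

value_of_statistical_life = 8_000_000  # midpoint of the $7-9M range
cost_to_save_a_life = 5_000            # upper bound quoted above

ratio = value_of_statistical_life / cost_to_save_a_life
print(f"We value a life at ~{ratio:,.0f}x what it can cost to save one.")
# ~1,600x
```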

The next topic Williams covers is responsibility. He starts with a discussion of agent interchangeability in utilitarianism. Williams believes that utilitarianism merely requires that someone do the right thing. This implies that to the utilitarian, there is no meaningful difference between me doing the utilitarian right action and you doing it, unless something about me doing it instead of you leads to a different outcome.

This utter lack of concern for who does what, as long as the right thing gets done, doesn’t actually seem to absolve utilitarians of responsibility. Instead, it tends to increase it. Williams says that unlike adherents of many ethical systems, utilitarians have negative responsibilities; they are just as much responsible for the things they don’t do as they are for the things they do. If something has to be done and no one else will do it, then you have to.

This doesn’t strike me as that unique to utilitarianism. I was raised Catholic and can attest that Catholics (who are supposed to follow a form of virtue ethics) have a notion of negative responsibility too. Every mass, before receiving the Eucharist, Catholics ask God for forgiveness for their sins, in thoughts and words, in what they have done and in what they have failed to do.

Leaving aside whether the concept of negative responsibility is uniquely utilitarian or not, Williams does see problems with it. Negative responsibility makes so much of what we do dependent on the people around us. You may wish to spend your time quietly growing vegetables, but be unable to do so because you have a particular skill – perhaps even one that you don’t really enjoy doing – that the world desperately needs. Or you may wish never to take a life, yet be confronted with a run-away trolley that can only be diverted from hitting five people by pulling the lever that makes it hit one.

This didn’t really make sense to me as a criticism until I learned that Williams deeply cares about people living authentic lives. In both the cases above, authenticity played no role in the utilitarian calculus. You must do things, perhaps things you find abhorrent, because other people have set up the world such that terrible outcomes would happen if you didn’t.

It seems that Williams might consider it a tragedy for someone to feel compelled by their ethical system to do something inauthentic. I imagine he views this as about as much of a crying waste of human potential as I view the yearly deaths of 429,000 people due to malaria. For all my personal sympathy for him, I am less than sympathetic to a view that gives these the same weight (or treats inauthenticity as the greater tragedy).

Radical authenticity requires us to ignore society. Yes, utilitarianism plops us in the middle of a web of dependencies and a buffeting sea of choices that were not ours, while demanding we make the best out of it all. But our moral philosophies surely are among the things that push us towards an authentic life. Would Williams view it as any worse that someone was pulled from her authentic way of living because she would starve otherwise?

To me, there is a certain authenticity in following your ethical system wherever it leads. I find this authenticity beautiful, but not worthy of moral consideration, except insofar as it affects happiness. Williams finds this authenticity deeply important. But by rejecting consequentialism, he has no real way to argue for more of the qualities he desires, except perhaps as a matter of aesthetics.

It seems incredibly counter-productive to me to say to people – people in the midst of a society that relentlessly pulls them away from authenticity with impersonal market forces – that they should turn away from the one ethical system that has a happier society as its desired outcome. A Kantian has her duty to duty, but as long as she does that, she cares not for the system. A virtue ethicist wishes to be virtuous and authentic, but outside of her little bubble of virtue, the terrors go on unabated. It’s only the utilitarian who holds a better society as an end in itself.

Maybe this is just me failing to grasp non-utilitarian epistemologies. It baffles me to hear “this thing is good and morally important, but it’s not like we think it’s morally important for there to be more of it; that goes too far!”. Is this a strawman? If someone could explain what Williams is getting at here in terms I can understand, I’d be most grateful.

I do think Williams misses one key thing when discussing the utilitarian response to negative responsibility: actions should be assessed on the margin, not in isolation. That is to say, the marginal effect of someone becoming a doctor, or undertaking some other career generally considered benevolent, is quite low if there are others also willing to do the job. A doctor might personally save hundreds, or even thousands of lives over her career, but her marginal impact will be saving something like 25 lives.

The reasons for this are manifold. First, when there are few doctors, they tend to concentrate on the most immediately life-threatening problems. As you add more and more doctors, they can help, but after a certain point, the supply of doctors will outstrip the demand for urgent life-saving attention. They can certainly help with other tasks, but they will each save fewer lives than the first few doctors.

Second, there is a somewhat fixed supply of doctors. Despite many, many people wishing they could be doctors, only so many can get spots in medical school. Even assuming that medical school admissions departments are perfectly competent at assessing future skill at being a doctor (and no one really believes they are), your decision to attend medical school (and your successful admission) doesn’t result in one extra doctor. It simply means that you were slightly better than the next best person (who would have been admitted if you weren’t).

Finally, when you become a doctor you don’t replace one of the worst already practising doctors. Instead, you replace a retiring doctor who is (for statistical purposes) about average for her cohort.

All of this is to say that utilitarians should judge actions on the margin, not in absolute terms. It isn’t that bad (from a utilitarian perspective) not to devote all your attention to the most effective direct work, because unless a certain project is very constrained by the number of people working on it, you shouldn’t expect to make much marginal difference. On the other hand, earning a lot of money and giving it to highly effective charities (or even a more modest commitment, like donating 10% of your income) is likely to do a huge amount of good, because most people don’t do this, so you’re replacing a person at a high-paying job who was doing (from a utilitarian perspective) very little good.
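Here is a toy version of the replaceability part of this argument. The numbers are invented (chosen only to land near the ~25-lives figure mentioned above), but the structure is the point:

```python
# Replaceability, in miniature. Naively, a doctor "saves" every life
# she treats. Counterfactually, she only adds the difference between
# the world with her and the world without her - and without her, the
# next-best medical school applicant would have taken her spot.
# All numbers invented for illustration.

lives_saved_per_career = 600   # naive tally over a whole career
skill_edge = 0.04              # how much better she is than the
                               # applicant she displaced

counterfactual_impact = lives_saved_per_career * skill_edge
print(f"naive impact:          {lives_saved_per_career} lives")
print(f"counterfactual impact: {counterfactual_impact:.0f} lives")  # ~24
```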

Williams either isn’t familiar with this concept, or omitted it in the interest of time or space.

Williams’ next topic is remoter effects. A remoter effect is any effect that your actions have on the decision-making of other people. For example, if you’re a politician and you lie horribly, are caught, and get re-elected by a large margin, a possible remoter effect is other politicians lying more often. With the concept of remoter effects, Williams is pointing at what I call second order utilitarianism.

Williams makes a valid point that many of the justifications from remoter effects that utilitarians make are very weak. For example, despite what some utilitarians claim, telling a white lie (or even telling any lie that is unpublicized) doesn’t meaningfully reduce the propensity of everyone in the world to tell the truth.

Williams thinks that many utilitarians get away with claiming remoter effects as justification because they tend to be used as a way to make utilitarianism give the common, respectable answers to ethical dilemmas. He thinks people would be much more skeptical of remoter effects if they were ever used to argue for positions that are uncommonly held.

This point about remoter effects was, I think, a necessary precursor to Williams’ next thought experiment. He asks us to imagine a society with two groups, A and B. There are many more members of A than B. Furthermore, members of A are disgusted by the presence (or even the thought of the presence) of members of group B. In this scenario, there has to exist some level of disgust and some ratio between A and B that makes the clear utilitarian best option relocating all members of group B to a different country.

With Williams’ recent reminder that most remoter effects are weaker than we like to think still ringing in my ears, I felt fairly trapped by this dilemma. There are clear remoter effects here: you may lose the ability to advocate against this sort of ethnic cleansing in other countries. Successful, minimally condemned ethnic cleansing could even encourage copy-cats. In the real world, these might both be valid rejoinders, but for the purposes of this thought experiment, it’s clear they could be nullified (e.g. if we assume few other societies like this one and a large direct utility gain).

The only way out that Williams sees fit to offer us is an obvious trap. What if we claimed that the feelings of group A were entirely irrational and that they should just learn to live with them? Then we wouldn’t be stuck advocating for what is essentially ethnic cleansing. But humans are not rational actors. If we were to ignore all such irrational feelings, then utilitarianism would no longer be a pragmatic ethical system that interacts with the world as it is. Instead, it would involve us interacting with the world as we wish it to be.

Furthermore, it is always a dangerous game to discount other people’s feelings as irrational. The problem with the word irrational (in the vernacular, not utilitarian sense) is that no one really agrees on what is irrational. I have an intuitive sense of what is obviously irrational. But so, alas, do you. These senses may align in some regions (e.g. we both may view it as irrational to be angry because of a belief that the government is controlled by alien lizard-people), but not necessarily in others. For example, you may view my atheism as deeply irrational. I obviously do not.

Williams continues this critique to point out that much of the discomfort that comes from considering – or actually doing – things the utilitarian way comes from our moral intuitions. While Smart and I are content to discount these feelings, Williams is horrified at the thought. To view discomfort from moral intuitions as something outside yourself, as an unpleasant and irrational emotion to be avoided, is – to Williams – akin to losing all sense of moral identity.

This strikes me as more of a problem for rationalist philosophers. If you believe that morality can be rationally determined via the correct application of pure reason, then moral intuitions must be key to that task. From a materialist point of view though, moral intuitions are evolutionary baggage, not signifiers of something deeper.

Still, Williams made me realize that this left me vulnerable to the question “what is the purpose of having morality at all if you discount the feelings that engender morality in most people?”, a question I’m at a loss to answer well. All I can say (tautologically) is “it would be bad if there was no morality”; I like morality and want it to keep existing, but I can’t ground it in pure reason or empiricism; no stone tablets have come from the world. Religions are replete with stone tablets and justifications for morality, but they come with metaphysical baggage that I don’t particularly want to carry. Besides, if there was a hell, utilitarians would have to destroy it.

I honestly feel like a lot of my disagreement with Williams comes from our differing positions on the intuitive/systematizing axis. Williams has an intuitive, fluid, and difficult to articulate sense of ethics that isn’t necessarily transferable or even explainable. I have a system that seems workable and like it will lead to better outcomes. But it’s a system and it does have weird, unintuitive corner cases.

Williams talks about how integrity is a key moral stance (I think motivated by his insistence on authenticity). I agree with him as to the instrumental utility of integrity (people won’t want to work with you or help you if you’re an ass or unreliable). But I can’t ascribe integrity some sort of quasi-metaphysical importance or treat it as a terminal value in itself.

In the section on integrity, Williams comes back to negative responsibility. I do really respect Williams’ ability to pepper his work with interesting philosophical observations. When talking about negative responsibility, he mentions that most moral systems acknowledge some difference between allowing an action to happen and causing it yourself.

Williams believes the moral difference between action and inaction is conceptually important, “but it is unclear, both in itself and in its moral applications, and the unclarities are of a kind which precisely cause it to give way when, in very difficult cases, weight has to be put on it”. I am jealous three times over at this line, first at the crystal-clear metaphor, second at the broadly applicable thought underlying the metaphor, and third at the precision of language with which Williams pulls it off.

(I found Williams a less consistent writer than Smart. Smart wrote his entire essay in a tone of affable explanation and managed to inject a shocking amount of simplicity into a complicated subject. Williams frequently confused me – which I feel comfortable blaming at least in part on our vastly different axioms – but he was capable of shockingly resonant turns of phrase.)

I doubt Williams would be comfortable coming down either way on inaction’s equivalence to action. To the great humanist, it must ultimately (I assume) come down to the individual humans and what they authentically believed. Williams here is scoffing at the very idea of trying to systematize this most slippery of distinctions.

For utilitarians, the absence or presence of a distinction is key to figuring out what they must do. Utilitarianism can imply “a boundless obligation… to improve the world”. How a utilitarian undertakes this general project (of utility maximization) will be a function of how she can affect the world, but it cannot, to Williams, ever be the only project anyone undertakes. If it were the only project, underlain by no other projects, then it will, in Williams’ words, be “vacuous”.

The utilitarian can argue that her general project will not be the only project, because most people aren’t utilitarian and therefore have their own projects going on. Of course, this only gets us so far. Does this imply that the utilitarian should not seek to convince too many others of her philosophy?

What does it even mean for the general utilitarian project to be vacuous? As best I can tell, what Williams means is that if everyone were utilitarian, we’d all care about maximally increasing the utility of the world, but either be clueless where to start or else constantly tripping over each other (imagine, if you can, millions of people going to sub-Saharan Africa to distribute bed nets, all at the same time). The first order projects that Williams believes must underlie a more general project are things like spending time with friends, or making your family happy. Williams also believes that it might be very difficult for anyone to be happy without some of these more personal projects.

I would suggest that what each utilitarian should do is what they are best suited for. But I’m not sure if this is coherent without some coordinating body (i.e. a god) ensuring that people are well distributed for all of the projects that need doing. I can also suppose that most people can’t go that far on willpower. That is to say, there are few people who are actually psychologically capable of working to improve the world in a way they don’t enjoy. I’m not sure I have the best answer here, but my current internal justification leans much more on the second answer than the first.

Which is another way of saying that I agree with Williams; I think utilitarianism would be self-defeating if it suggested that the only project anyone should undertake is improving the world generally. I think a salient difference between us is that he seems to think utilitarianism might imply that people should only work on improving the world generally, whereas I do not.

This discussion of projects leads to Williams talking about the hedonic paradox (the observation that you cannot become happy by seeking out pleasures), although Williams doesn’t reference it by name. Here Williams comes dangerously close to a very toxic interpretation of the hedonic paradox.

Williams believes that happiness comes from a variety of projects, not all of which are undertaken for the good of others or even because they’re particularly fun. He points out that few of these projects, if any, are the direct pursuit of happiness and that happiness seems to involve something beyond seeking it. This is all conceptually well and good, but I think it makes happiness seem too mysterious.

I wasted years of my life believing that the hedonic paradox meant that I couldn’t find happiness directly. I thought if I did the things I was supposed to do, even if they made me miserable, I’d find happiness eventually. Whenever I thought of rearranging my life to put my happiness first, I was reminded of the hedonic paradox and desisted. That was all bullshit. You can figure out what activities make you happy and do more of those and be happier.

There is a wide gulf between the hedonic paradox as originally framed (which is purely an observation about pleasures of the flesh) and the hedonic paradox as sometimes used by philosophers (which treats happiness as inherently fleeting and mysterious). I’ve seen plenty of evidence for the first, but absolutely none for the second. With his critique here, I think Williams is arguably shading into the second definition.

This has important implications for the utilitarian. We can agree that for many people, the way to most increase their happiness isn’t to get them blissed out on food, sex, and drugs, without this implying that we will have no opportunities to improve the general happiness. First, we can increase happiness by attacking the sources of misery. Second, we can set up robust institutions that are conducive to happiness. A utilitarian urban planner would perhaps give just as much thought to ensuring there are places where communities can meet and form as she would to ensuring that no one would be forced to live in squalor.

Here’s where Williams gets twisty though. He wanted us to come to the conclusion that a variety of personal projects are necessary for happiness so that he could remind us that utilitarianism’s concept of negative responsibility puts great pressure on an agent not to have her own personal projects beyond the maximization of global happiness. The argument here seems to be (not for the first time) that utilitarianism is self-defeating because it will make everyone miserable if everyone is a utilitarian.

Smart tried to short-circuit arguments like this by pointing out that he wasn’t attempting to “prove” anything about the superiority of utilitarianism, simply presenting it as an ethical system that might be more attractive if it were better understood. Faced with Williams’ point here, I believe that Smart would say that he doesn’t expect everyone to become utilitarian and that those who do become utilitarian (and stay utilitarian) are those most likely to have important personal projects that are generally beneficent.

I have the pleasure of reading the blogs and Facebook posts of many prominent (for certain unusual values of prominent) utilitarians. They all seem to be enjoying what they do. These are people who enjoy research, or organizing, or presenting, or thought experiments and have found ways to put these vocations to use in the general utilitarian project. Or people who find that they get along well with utilitarians and therefore steer their career to be surrounded by them. This is basically finding ikigai within the context of utilitarian responsibilities.


Saying that utilitarianism will never be popular outside of those suited for it means accepting we don’t have a universal ethical solution. This is, I think, very pragmatic. It also doesn’t rule out utilitarians looking for ways we can encourage people to be more utilitarian. To slightly modify a phrase that utilitarian animal rights activists use: the best utilitarianism is the type you can stick with; it’s better to be utilitarian 95% of the time than it is to be utilitarian 100% of the time – until you get burnt out and give it up forever.

I would also like to add a criticism of Williams’ complaint that utilitarian actions are overly determined by the actions of others. Namely, the status quo certainly isn’t perfect. If we are to reject actions because they are not among the projects we would most like to be doing, then we are tacitly endorsing the status quo. Moral decisions cannot be made in a vacuum and the terrain in which we must make moral decisions today is one marked by horrendous suffering, inequality, and unfairness.

The next two sections of Williams’ essay were the most difficult to parse, but also the most rewarding. They deal with the interplay between calculating utilities and utilitarianism and question the extent to which utilitarianism is practical outside of appealing to the idea of total utility. That is to say, they ask if the unique utilitarian ethical frame can, under practical conditions, have practical effects.

To get to the meat of Williams’ points, I had to wade through what at times felt like word games. All of the things he builds up to throughout these lengthy sections begin with a premise made up of two points that Williams thinks are implied by Smart’s essay.

  1. All utilities should be assessed in terms of acts. If we’re talking about rules, governments, or dispositions, their utility stems from the acts they either engender or prevent.
  2. To say that a rule (as an example) has any effect at all, we must say that it results in some change in acts. In Williams’ words: “the total utility effect of a rule’s obtaining must be cashable in terms of the effects of acts.”

Together, (1) and (2) make up what Williams calls the “act-adequacy” premise. If the premise is true, there must be no surplus source of utility outside of acts and, as Smart said, rule utilitarianism should (if it is truly concerned with optimific outcomes) collapse to act utilitarianism. This is all well and good when comparing systems as tools of total assessment (e.g. when we take the universe-wide view that I criticized Smart for hiding in), but Williams is first interested in how this causes rule and act utilitarianism to relate to actions.

If you asked an act utilitarian and a rule utilitarian “what makes that action right?”, they would give different answers. The act utilitarian would say that it is right if it maximizes utility, but the rule utilitarian would say it is right if it is in accordance with rules that tend to maximize utility. Interestingly, if the act-adequacy premise is true, then both act and rule utilitarians would agree as to why certain rules or dispositions are desirable, namely, that the actions that result from those rules or dispositions tend to maximize utility.

(Williams also points out that rules, especially formal rules, may derive utility from sources other than just actions following the rule. Other sources of utility include: explaining the rule, thinking about the rule, avoiding the rule, or even breaking the rule.)

But what do we do when actually faced with the actions that follow from a rule or disposition? Smart has already pointed out that we should praise or blame based on the utility of the praise/blame, not on the rightness or wrongness of the action we might be praising.

In Williams’ view, there are two problems with this. First, it is not a very open system. If you knew someone was praising or blaming you out of a desire to manipulate your future actions and not in direct relation to their actual opinion of your past actions, you might be less likely to accept that praise or blame. Therefore, it could very well be necessary for the utilitarian to hide why acts are being called good or bad (and therefore the reasons why they praise or blame).

The second problem is how this suggests utilitarians should stand with themselves. Williams acknowledges that utilitarians in general try not to cry over spilt milk (“[this] carries the characteristically utilitarian thought that anything you might want to cry over is, like milk, replaceable”), but argues that utilitarianism replaces the question of “did I do the right thing?” with “what is the right thing to do?” in a way that may not be conducive to virtuous thought.

(Would a utilitarian Judas have lived to old age contentedly, happy that he had played a role in humankind’s eternal salvation?)

The answer to “what is the right thing to do?” is of course (to the utilitarian) “that which has the best consequences”. Except “what is the right thing to do?” isn’t actually the right question to ask if you’re truly concerned with the best consequences. In that case, the question is “if asking this question is the right thing to do, what actions have the best consequences?”

Remember, Smart tried to claim that utilitarianism was to only be used for deliberative actions. But it is unclear which actions are the right ones to take as deliberative, especially a priori. Sometimes you will waste time deliberating, time that in the optimal case you would have spent on good works. Other times, you will jump into acting and do the wrong thing.

The difference between act (direct) and rule (indirect) utilitarianism therefore comes to a question of motivation vs. justification. Can a direct utilitarian use “the greatest total good” as a motivation if they do not know if even asking the question “what will lead to the greatest total good?” will lead to it? Can it only ever be a justification? The indirect utilitarian can be motivated by following a rule and justify her actions by claiming that generally followed, the rule leads to the greatest good, but it is unclear what recourse (to any direct motivation for a specific action) the direct utilitarian has.

Essentially, adopting act utilitarianism requires you to accept that because you have accepted act utilitarianism you will sometimes do the wrong thing. It might be that you think that you have a fairly good rule of thumb for deliberating, such that this is still the best of your options to take (and that would be my defense), but there is something deeply unsettling and somewhat paradoxical about this consequence.

Williams makes it clear that the bad outcomes here aren’t just the loss of an agent’s time. This is similar in principle to how we calculate the total utility of promulgating a rule. We accept that the total effects of the promulgation must include the utility or disutility that stems from avoiding it or breaking it, in addition to the utility or disutility of following it. When looking at the costs of deliberation, we should also include the disutility that will sometimes come when we act deliberately in a way that is less optimific than we would have acted had we spontaneously acted in accordance with our dispositions or moral intuitions.

This is all in the case where the act-adequacy premise is true. If it isn’t, the situation is more complex. What if some important utility of actions comes from the mood they’re done in, or in them being done spontaneously? Moods may be engineered, but it is exceedingly hard to engineer spontaneity. If the act-adequacy premise is false, then it may not hold that the (utilitarian) best world is one in which right acts are maximized. In the absence of the act-adequacy premise it is possible (although not necessarily likely) that the maximally happy world is one in which few people are motivated by utilitarian concerns.

Even if the act-adequacy premise holds, we may be unable to know if our actions are at all right or wrong (again complicating the question of motivation).

Williams presents a thought experiment to demonstrate this point. Imagine a utilitarian society that noticed its younger members were liable to stray from the path of utilitarianism. This society might set up a Truman Show-esque “reservation” of non-utilitarians, with the worst consequences of their non-utilitarian morality broadcasted for all to see. The youth wouldn’t stray and the utility of the society would be increased (for now, let’s beg the question of utilitarianism as a lived philosophy being optimific).

Here, the actions of the non-utilitarian holdouts would be right; on this both utilitarians (looking from a far enough remove) and the subjects themselves would agree. But this whole thing only works if the viewers think (incorrectly) that the actions they are seeing are wrong.

From the global utilitarian perspective, it might even be wrong for any of the holdouts to become utilitarian (even if utilitarianism was generally the best ethical system). If the number of viewers is large enough and the effect of one fewer irrational holdout is strong enough (this is a thought experiment, so we can fiddle around with the numbers such that this is indeed true), the conversion of a hold-out to utilitarianism would be really bad.

Basically, it seems possible for there to be a large difference between the correct action as chosen by the individual utilitarian with all the knowledge she has and the correct action as chosen from the perspective of an omniscient observer. From the “total assessment” perspective, it is even possible that it would be best that there be no utilitarians.

Williams points out that many of the qualities we value and derive happiness from (stubborn grit, loyalty, bravery, honour) are not well aligned with utilitarianism. When we talked about ethnic cleansing earlier, we acknowledged that utilitarianism cannot distinguish between preferences people have and the preferences people should have; both are equally valid. With all that said, there’s a risk of resolving the tension between non-utilitarian preferences and the joy these preferences can bring people by trying to shape the world not towards maximum happiness, but towards the happiness easiest to measure and most comfortable to utilitarians.

Utilitarianism could also lead to disutility because of the game theoretic consequences. On international projects or projects between large groups of people, sanctioning other actors must always be an option. Without sanctioning, the risk of defection is simply too high in many practical cases. But utilitarians are uniquely compelled to sanction (or else surrender).

If there is another group acting in an uncooperative or anti-utilitarian manner, the utilitarians must apply the least terrible sanction that will still be effective (as the utility of those they’re sanctioning still matters). The other group will of course know this and have every incentive to commit to making any conflict arising from the sanction so terrible as to make any sanctioning wrong from a utilitarian point of view. Utilitarians now must call the bluff (and risk horrible escalating conflict), or else abandon the endeavour.

This is in essence a prisoner’s dilemma. If the non-utilitarians carry on without being sanctioned, or if they change their behaviour in response to sanctions without escalation, everyone will be better off (than in the alternative). But if utilitarians call the bluff and find it was not a bluff, then the results could be catastrophic.
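Here is a sketch of the game as I understand it, with utility payoffs invented for illustration:

```python
# The sanctioning game, with invented utility payoffs.
payoffs = {
    ("sanction", "back down"): 5,    # sanction works
    ("sanction", "escalate"): -100,  # bluff called: catastrophic conflict
    ("tolerate", "carry on"): -10,   # defection goes unpunished
}

# With these payoffs, sanctioning is worth it only while
# P(escalate) < 15/105 ~ 0.14 - so the other group's incentive is to
# make its threat of escalation as credible as possible.
p_escalate = 0.10
eu_sanction = ((1 - p_escalate) * payoffs[("sanction", "back down")]
               + p_escalate * payoffs[("sanction", "escalate")])
eu_tolerate = payoffs[("tolerate", "carry on")]
print(f"EU(sanction) = {eu_sanction:+.1f}")  # -5.5
print(f"EU(tolerate) = {eu_tolerate:+.1f}")  # -10.0
```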

Williams seems to believe that utilitarians will never include an adequate fudge factor for the dangers of mutual defecting. He doesn’t suggest pacifism as an alternative, but he does believe that violent sanctioning should always be used at a threshold far beyond where he assesses the simple utilitarian one to lie.

This position might be more of a historical one, in reaction to the efficiency, order, and domination obsessed Soviet Communism (and its Western fellow travelers), who tended towards utilitarian justifications. All of the utilitarians I know are committed classical liberals (indeed, it sometimes seems to me that only utilitarians are classical liberals these days). It’s unclear if Williams’ criticism can be meaningfully applied to utilitarians who have internalized the severe detriments of escalating violence.

While it seems possible to produce a thought experiment where even such committed second order utilitarians would use the wrong amount of violence or sanction too early, this seems unlikely to come up in a practical context – especially considering that many of the groups most keen on using violence early and often these days aren’t in fact utilitarian. Instead it’s members of both the extreme left and right, who have independently – in an amusing case of horseshoe theory – adopted a morality based around defending their tribe at all costs. This sort of highly local morality is anathema to utilitarians.

Williams didn’t anticipate this shift. I can’t see why he shouldn’t have. Utilitarians are ever pragmatic and (should) understand that utilitarianism isn’t served by starting horrendous wars willy-nilly.

Then again, perhaps this is another harbinger of what Williams calls “utilitarianism ushering itself from the scene”. He believes that the practical problems of utilitarian ethics (from the perspective of an agent) will move utilitarianism more and more towards a system of total assessment. Here utilitarianism may demand certain things in the way of dispositions or virtues and certainly it will ask that the utility of the world be ever increased, but it will lose its distinctive character as a system that suggests actions be chosen in such a way as to maximize utility.

Williams calls this the transcendental viewpoint and pithily asks “if… utilitarianism has to vanish from making any distinctive mark in the world, being left only with the total assessment from the transcendental standpoint – then I leave it for discussion whether that shows that utilitarianism is unacceptable or merely that no one ought to accept it.”

This, I think, ignores the possibility that it might become easier in the future to calculate the utility of certain actions. The results of actions are inherently chaotic and difficult to judge, but then, so is the weather. Weather prediction has been made tractable by the application of vast computational power. Why not morality? Certainly, this can’t be impossible to envision. Iain M. Banks wrote a whole series of books about it!

Of course, if we wish to be utilitarian on a societal level, we must currently do so without the support of godlike AI. Which is what utilitarianism was invented for in the first place. Here it was attractive because it is minimally committed – it has no elaborate theological or philosophical commitments buttressing it, unlike contemporaneous systems (like Lockean natural law). There is something intuitive about the suggestion that a government should only be concerned for the welfare of the governed.

Sure, utilitarianism makes no demands on secondary principles, Williams writes, but it is extraordinarily demanding when it comes to empirical information. Utilitarianism requires clear, comprehensible, and non-cyclic preferences. For any glib rejoinders about mere implementation details, Williams has this to say:

[These problems are] seen in the light of a technical or practical difficulty and utilitarianism appeals to a frame of mind in which technical difficulty, even insuperable technical difficulty, is preferable to moral unclarity, no doubt because it is less alarming.

Williams suggests that the simplicity of utilitarianism isn’t a virtue, only indicative of “how little of the world’s luggage it is prepared to pick up”. By being immune to concerns of justice or fairness (except insofar as they are instrumentally useful to utilitarian ends), Williams believes that utilitarianism fails at many of the tasks that people desire from a government.

Personally, I’m not so sure a government commitment to fairness or justice is at all illuminating. There are currently at least two competing (and mutually exclusive) definitions of both fairness and justice in political discourse.

Should fairness be about giving everyone the same things? Or should it be about giving everyone the tools they need to have the same shot at meaningful (of course noting that meaningful is a societal construct) outcomes? Should justice mean taking into account mitigating factors and aiming for reconciliation? Or should it mean doing whatever is necessary to make recompense to the victim?

It is too easy to use fairness or justice as a sword without stopping to assess who it is aimed at and what the consequences of that aim are (says the committed consequentialist). Fairness and justice are meaty topics that deserve better than to be thrown around as a platitudinous counterargument to utilitarianism.

A much better critique of utilitarian government can be made by imagining how such a government would respond to non-utilitarian concerns. Would it ignore them? Or would it seek to direct its citizens to have only non-utilitarian concerns? The latter idea seems practically impossible. The first raises important questions.

Imagine a government that is minimally responsive to non-utilitarian concerns. It primarily concerns itself with maximizing utility, but accepts the occasional non-utilitarian decision as the cost it must pay to remain in power (presume that the opposition is not utilitarian and would be very responsive to non-utilitarian concerns in a way that would reduce the global utility). This government must necessarily look very different to the utilitarian elite who understand what is going on and the masses who might be quite upset that the government feels obligated to ignore many of their dearly held concerns.

Could such an arrangement exist with a free media? With free elections? Democracies are notably less corrupt than autocracies, so there are significant advantages to having free elections and free media. But how, if those exist, does the utilitarian government propose to keep its secrets hidden from the population? And if the government was successful, how could it respect its citizens, so duped?

In addition to all that, there is the problem of calculating how to satisfy people’s preferences. Williams identifies three problems here:

  1. How do you measure individual welfare?
  2. To what extent is welfare comparative?
  3. How do you develop the aggregate social preference given the answers to the preceding two questions?

Williams seems to suggest that a naïve utilitarian approach involves what I think is best summed up in a sick parody of Marx: from each according to how little they’ll miss it, to each according to how much they desire it. Surely there cannot be a worse incentive structure imaginable than the one naïve utilitarianism suggests?
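To see just how bad, here is that parody allocator sketched out, with invented numbers:

```python
# The naive allocator parodied above: give each contested good to
# whoever reports desiring it most. Reports are unverifiable, which
# is the whole problem. All values invented for illustration.

def winner(reported_desire: dict) -> str:
    """Award the contested good to the loudest reported desire."""
    return max(reported_desire, key=reported_desire.get)

honest = {"Ada": 7.0, "Bob": 5.0}
print(winner(honest))     # Ada

# Bob learns the rule. Since exaggeration costs him nothing, inflating
# his reported desire is a dominant strategy:
strategic = {"Ada": 7.0, "Bob": 9999.0}
print(winner(strategic))  # Bob
# Once everyone reasons this way, reported preferences carry no
# information about actual welfare.
```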

When dealing with preferences, it is also the case that utilitarianism makes no distinction between fixing inequitable distributions that cause discontent or – as observed in America – convincing those affected by inequitable distributions not to feel discontent.

More problems arise around substitution or compensation. It may be more optimific for a roadway to be built one way than another and it may be more optimific for compensation to be offered to those who are affected, but it is unclear that the compensation will be at all worth it for those affected (to claim it would be, Williams declares, is “simply an extension of the dogma that every man has his price”). This is certainly hard for me to think about, even (or perhaps especially) because the common utilitarian response is a shrug – global utility must be maximized, after all.

Utilitarianism is about trade-offs. And some people have views which they hold to be beyond all trade-off. It is even possible for happiness to be buttressed or rest entirely upon principles – principles that when dearly and truly held cannot be traded-off against. Certainly, utilitarians can attempt to work around this – if such people are a minority, they will be happily trammelled by a utilitarian majority. But it is unclear what a utilitarian government could do in such a case where the majority of its population is “afflicted” with deeply held non-utilitarian principles.

Williams sums this up as:

Perhaps humanity is not yet domesticated enough to confine itself to preferences which utilitarianism can handle without contradiction. If so, perhaps utilitarianism should lope off from an unprepared mankind to deal with problems it finds more tractable – such as that presented by Smart… of a world which consists only of a solitary deluded sadist.

Finally, there’s the problem of people being terrible judges of what they want, or simply not understanding the effects of their preferences (as the Americans who rely on the ACA but want Obamacare to be repealed may find out). It is certainly hard to walk the line between respecting the preferences people would have if they were better informed or truly understood the consequences of their desires and the common (leftist?) fallacy of assuming that everyone who held all of the information you have must necessarily have the same beliefs as you.

All of this combines to make Williams view utilitarianism as dangerously irresponsible as a system of public decision making. It assumes that preferences exist, that the method of collecting them doesn’t fail to capture meaningful preferences, that these preferences would be vindicated if implemented, and that there’s a way to trade-off among all preferences.

To the potential utilitarian rejoinder that half a loaf is better than none, he points out that a partial version of utilitarianism is very vulnerable to the streetlight effect. It might be used where it can be and therefore act to legitimize – as “real” – concerns in the areas where it can be used and delegitimize those where it is unsuitable. This can easily lead to the McNamara fallacy; deliberate ignorance of everything that cannot be quantified:

The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.

— Daniel Yankelovich “Corporate Priorities: A continuing study of the new demands on business.” (1972)

This isn’t even to mention something that every serious student of economics knows: that when dealing with complicated, idealized systems, it is not necessarily the non-ideal system that is closest to the ideal (out of all possible non-ideal systems) that has the most benefits of the ideal. Economists call this the “theory of the second best”. Perhaps ethicists might call it “common sense” when applied to their domain?
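For the unfamiliar, the theory of the second best compresses into a toy welfare function (invented for illustration):

```python
# Theory of the second best, in miniature. W is maximized at the
# ideal (x, y) = (1, 1). If a constraint forces x = 0, the
# welfare-maximizing y is no longer its "ideal" value of 1.
# The function is invented for illustration.

def W(x: float, y: float) -> float:
    return -((x - 1) ** 2) - ((y - x) ** 2)  # x and y interact

print(W(1, 1))  # ideal world:                0
print(W(0, 1))  # x broken, y kept "ideal":  -2
print(W(0, 0))  # x broken, y re-optimized:  -1
# With one ideal condition unattainable, mimicking the rest of the
# ideal is worse than deliberately departing from it.
```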

Williams ultimately doubts that systematic thought is at all capable of dealing with the myriad complexities of political (and moral) life. He describes utilitarianism as “having too few thoughts and feelings to match the world as it really is”.

I disagree. Utilitarianism is hard, certainly. We do not agree on what happiness is, or on how to determine which actions will most likely bring it about – fine. Much of this comes from our messy inbuilt intuitions, intuitions that are not suited to the world as it now is. If utilitarianism is simple-minded, surely every other moral system (or lack of system) must be as well.

In many ways, Williams did shake my faith in utilitarianism – making this an effective and worthwhile essay. He taught me to be fearful of eliminating from consideration all joys but those that the utilitarian can track. He drove me to question how one can advocate for any ethical system at all, denied the twin crutches of rationalism and theology. And he further shook my faith in individuals being able to do most aspects of the utilitarian moral calculus. I think I’ll have more to say on that last point in the future.

But by their actions you shall know the righteous. Utilitarians are currently at the forefront of global poverty reduction, disease eradication, animal suffering alleviation, and existential risk mitigation. What complexities of the world has every other ethical system missed to leave these critical tasks largely to utilitarians?

Williams gave me no answer to this. For all his beliefs that utilitarianism will have dire consequences when implemented, he has no proof to hand. And ultimately, consequences are what you need to convince a consequentialist.

Ethics, Literature, Philosophy

Book Review: Utilitarianism for and against (Part 1)

Utilitarianism for and against is an interesting little book. It comprises two back-to-back ~70-page essays, one in favour of utilitarianism and one opposed. As an overview, it’s hard to beat something like this. You don’t have to rely on one scholar to give you her (ostensibly fair and balanced) opinion; you get two articulate philosophers arguing their side as best they can. Fair and balanced is by necessity left as an exercise for the reader (honestly, it always is; here at least it’s explicit).

I’m going to cover the “for” side first. The “against” side will be in a later blog post. Both reviews are going to assume that you have some understanding of utilitarianism. If you don’t, go read my primer. Or be prepared to Google. I should also mention that I have no aspirations of being balanced myself. I’m a utilitarian; I had much more to disagree with on the “against” side than on the “for” side.

Professor J.J.C. Smart makes the arguments in favour of utilitarianism. According to his Wikipedia entry, he was known for “outsmarting” his opponents, that is to say, accepting the conclusions of their reductio ad absurdum arguments with nary a shrug. He was, I’ve gathered, not one for moral intuitions. His criticism of rule utilitarianism played a role in its decline and he was influential in raising the next crop of Australian utilitarians, among whom Peter Singer is counted. As near as I can tell, he was one of the more notable defenders of utilitarianism when this volume was published in 1971 (although much of his essay dates back a decade earlier).

Smart is emphatically not a rationalist (in the philosophical sense); he writes no “proof of utilitarianism” and denies that such a proof is even possible. Instead, Smart restricts himself to explaining how utilitarianism is an attractive ethical system for anyone possessed of general benevolence. Well, I say “anyone”. The authors of this volume seem to be labouring under the delusion that only men have ethical dilemmas or the need for ethical systems. Neither one of them manages the ethicist’s coup of realizing that, at the remove of half a century from their time of writing, women might be viewed as full people (such a coup would perhaps have been strong evidence of the superiority of one philosophy over another).

A lot of Smart’s essay consists of showing how various different types of utilitarianism are all the same under the hood. I’ve termed these “collapses”, although “isomorphisms” might be a better term. There are six collapses in all.

The very first collapse put me in mind of the famous adage about ducks. If it walks like a duck, swims like a duck, and quacks like a duck, it is a duck. By the same token, if someone acts exactly how a utilitarian in their position and with their information would act, then it doesn’t matter whether they are a utilitarian or not. From the point of view of an ethical system that cares only about consequences, they may as well be.

The next collapse deals with rule utilitarianism and may have a lot to do with its philosophical collapse. Smart points out that if you are avoiding “rule worship”, then you will face a quandary when you could break a rule in such a way as to gain more utility. Rule utilitarians sometimes claim that you just need rules with lots of exceptions and special cases. Smart points out that if you carry this through to its logical conclusion, you really are only left with one rule, the meta-rule of “maximize expected utility”. In this way, rule utilitarianism collapses into act utilitarianism.

Next into the compactor is the difference between ideal and hedonic utilitarians. Briefly, ideal utilitarians hold that some states of mind are inherently valuable (in a utilitarian sense), even if they aren’t particularly pleasant from the inside. “Better Socrates dissatisfied than a fool satisfied” is the rallying cry of ideal utilitarians. Hedonic utilitarians have no terminal values beyond happiness; they would gladly let almost the entirety of the human race wirehead.

Smart claims that while these differences are philosophically large, they are practically much less meaningful. Here Smart introduces the idea of the fecundity of a pleasure. A doctor taking joy (or grim satisfaction) in saving a life is a much more fecund pleasure than a gambler’s excitement at a good throw, because it brings about greater joy once you take into account everyone around the actor. Many of the other pleasures (like writing or other intellectual pursuits) that ideal utilitarians value are similarly fecund. They either lead to abatement of suffering (the intellectual pursuits of scientists) or to many people’s pleasure (the labour of the poet). Taking into account fecundity, it was better for Smart to write this essay than to wirehead himself, because many other people – like me – get to enjoy his writing and have fun thinking over the thorny issues he raises.

Smart could have stood to examine at greater length just why ideal utilitarians value the things they do. I think there’s a decent case to be made that societies figure out ways to value certain (likely fecund) pleasures all on their own, no philosophers required. It is not, I think, that ideal utilitarians have stumbled onto certain higher pleasures that they should coax their societies into valuing. Instead, their societies have inculcated them with a set of valued activities, which, due to cultural evolution, happen to line up well with fecund pleasures. This is why it feels difficult to argue with the list of pleasures ideal utilitarians proffer; it’s not that they’ve stumbled onto deep philosophical truths via reason alone, it’s that we have the same inculcations they do.

Beyond simple fecundity though, there is the fact that the choice between Socrates dissatisfied and a fool satisfied rarely comes up. Smart has a great line about this:

But even the most avid television addict probably enjoys solving practical problems connected with his car, his furniture, or his garden. However unintellectual he might be, he would certainly resist the suggestion that he should, if it were possible, change places with a contented sheep, or even a happy and lively dog.

This boils down to: ‘ideal utilitarians assume they’re a lot better than everyone else, what with their “philosophical pursuits”, but most people don’t want purely mindless pleasures’. Combined, these ideas of fecundity and hidden depths point to a vanishingly small gap between ideal and hedonistic utilitarians, especially compared to the gap between utilitarians and practitioners of other ethical systems.

After dealing with questions of how highly we should weigh some pleasures, Smart turns to address the idea of some pleasures not counting at all. Take, for example, the pleasure that a sadist takes in torturing a victim. Should we count this pleasure in our utilitarian moral calculus? Smart says yes, for reasons that again boil down to “certain pleasures being viewed as bad is an artifact of culture; no pleasure is intrinsically bad”.

(Note however that this isn’t the same thing as Smart condoning the torture. He would say that the torture is wrong because the pleasure the sadist gains from it cannot make up for the distress of the victim. Given that no one has ever found a real live utility monster, this seems a safe position to take.)

In service of this, Smart presents a thought experiment. Imagine a barren universe inhabited by a single sentient being. This sentient being wrongly believes that there are many other inhabitants of the universe being gruesomely tortured and takes great pleasure in this thought. Would the universe be better if the being didn’t derive pleasure from her misapprehension?

The answer here for both Smart and me is no (although I suspect many might disagree with us). Smart reasons (almost tautologically) that since there is no one for this being to hurt, her predilection for torture can’t hurt anyone. We are rightfully wary of people who unselfconsciously enjoy the thought of innocents being tortured because of what it says about what their hobbies might be. But if they cannot hurt anyone, their obsession is literally harmless. This bleak world would not be better served by its single sentient inhabitant quailing at the thought of the imaginary torture.

Of course, there’s a wide gap between the inhabitant curled up in a ball mourning the torture she wrongly believes to be ongoing and her simple indifference to it. It seems plausible that many people could consider her indifference preferable, even if they did not wish her to be sad. But imagine then the difference being between her lonely and bored and her satisfied and happy (leaving aside for a moment the torture). It is clear here which is the better universe. Given a way to move from the universe with a single bored being to the one with a single fulfilled being, shouldn’t we take it, given that the shift most literally harms no one?

This brings us to the distinction between intrinsically bad pleasures and extrinsically bad pleasures – the flip side of the intrinsically more valuable states of mind of the ideal utilitarian. Intrinsically bad pleasures are pleasures that, for some rationalist or metaphysical reason, are just wrong. Their wrongness must of course be vulnerable to attacks on the underlying logic or theology; I can hardly embark on a survey of objections to all the common underpinnings – I haven’t the time. But many people have undertaken those critiques and many will in the future, making a belief in intrinsically bad pleasures a most unstable place to stand.

Extrinsically bad pleasures seem like a much safer proposition (and much more convenient to the utilitarian who wishes to keep their ethical system free of metaphysical or meta-ethical baggage). To say that a pleasure is extrinsically bad is simply to say that enjoying it causes so much misery that it will practically never be moral to experience it. Similar to how I described ideal utilitarian values as heavily culturally influenced, I can’t help but feel that seeing some pleasures as intrinsically bad has to be the result of cultural conditioning.

If we can accept that no pleasure is intrinsically good or ill – that many pleasures are merely thought of that way because of long cultural experience, positive or negative, with the consequences of seeking them out – then the position of utilitarians who believe some pleasures cannot be counted in the plus column should collapse to approximately that of utilitarians who hold that they can, even if neither accepts the other’s reasoning. The utilitarian who refuses to believe in intrinsically bad pleasures should still condemn most of the same actions as one who does, because she knows that these pleasures will be outweighed by the pains they inflict on others (like the pain of the torture victim overwhelming the joy of the torturer).

There is a further advantage to holding that pleasures cannot be intrinsically wrong. If we accept the post-modernist adage that knowledge is created culturally, we will remember to be skeptical of the universality of our knowledge. That is to say, if you hold a list of intrinsically bad pleasures, it will probably not be an exhaustive list; there may be pleasures whose ill effects you overlook because you are culturally conditioned to overlook them. A more thoughtful utilitarian who doesn’t take the shortcut of deeming some pleasures intrinsically bad can catch these consequences and correctly advocate against these ultimately wrong actions.

The penultimate collapse is perhaps the least well supported by arguments. In a scant page, Smart addresses the differences between total and average happiness in a most unsatisfactory fashion. He asks which of two universes you might prefer: one with one million happy, healthy people, or one with twice as many people, equally happy and healthy. Both Smart and I feel drawn to the larger universe, but he has no arguments for people who prefer the smaller. Smart skips over the difficulties here with an airy statement of “often the best way to increase the average happiness is to increase the total happiness and vice versa”.

I’m not entirely sure this statement is true. How would one go about proving it?

Certainly, average happiness seems to miss out on the (to me) obvious good that you’d get if you could have twice as many happy people (which is clearly one case where they give different answers), but like Smart, I have trouble coming up with a persuasive argument why that is obviously good.

I do have one observation of my own about the difference between average and total happiness. When I imagine a world with more people who are on average less happy than the people who currently exist (but who collectively experience a greater total happiness), I feel an internal flinch.

Unfortunately for my moral intuitions, I feel the exact same flinch when I imagine a world with many fewer people, who are on average transcendentally happy. We can fiddle with the math to make this scenario come out to have greater average and total happiness than the current world. Doesn’t matter. Exact same flinch.
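To make the fiddling concrete, here is the sort of arithmetic I mean (all the population and happiness numbers are invented for illustration):

```python
# Illustrative numbers only: a hypothetical "current world" versus a
# smaller but transcendentally happy one.
current_pop, current_avg = 7_500_000_000, 5
smaller_pop, smaller_avg = 1_000_000_000, 40

print(current_pop * current_avg)   # total 3.75e10, average 5
print(smaller_pop * smaller_avg)   # total 4.00e10, average 40

# The smaller world wins on BOTH total and average happiness,
# yet (for me) it triggers the exact same intuitive flinch.
```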

This leads me to believe that my moral intuitions have a strong status quo bias. The presence of a status quo bias in itself isn’t an argument for either total or average utilitarianism, but it is a reminder to be intensely skeptical of our response to thought experiments that involve changing the status quo and even to be wary of the order that options are presented in.

The final collapse Smart introduces is that between regular utilitarians and negative utilitarians. Negative utilitarians believe that only suffering is morally relevant and that the most important moral actions are those that have the consequence of reducing suffering. Smart points out that you can raise both the total and average happiness of a population by reducing suffering and furthermore that there is widespread agreement on what reduces suffering. So Smart expects utilitarians of all kinds (including negative) to primarily focus on reducing suffering anyway. Basically, despite the profound philosophical differences between regular and negative utilitarians, we should expect them to behave equivalently. Which, by the very first collapse (if it walks like a duck…), shows that we can treat them as philosophical equivalents, at least in the present world.

In my experience, this is more or less true. Many of the negative utilitarians I am aware of mainly exercise their ethics by donating 10% of their income to GiveWell’s most effective charities. The regular utilitarians… do the exact same. Quack.

As far as I can tell, Smart goes to all this work to show how many forms of utilitarianism collapse together so that he can present a system that isn’t at war with itself. Being able to portray utilitarianism as a simple, unified system (despite the many ways of doing it) heads off many simple criticisms.

While I doubt many people avoided utilitarianism because there are lingering questions about total versus average happiness, per se, these little things add up. Saying “yes, there are a bunch of little implementation details that aren’t agreed upon” is a bad start to an ethical system, unless you can immediately follow it up with “but here’s fifty pages of why that doesn’t matter and you can just do what comes naturally to you (under the aegis of utilitarianism)”.

Let’s talk a bit about what comes naturally to people outside the context of different forms of utilitarianism. No one, not even Smart, sits down and does utilitarian calculus before making every little decision. For most tasks, we can ignore the ethical considerations (e.g. there is broad, although probably not universal, agreement that there aren’t hidden moral dimensions to opening a door). For some others, our instincts are good enough. Should you thank the woman at the grocery store checkout? You probably will automatically, without pausing to consider if it will increase the total (or average) happiness of the world.

Like in the case of thanking random service industry workers, there are a variety of cases where we actually have pretty good rules of thumb. These rules of thumb serve two purposes. First, they allow us to avoid spending all of our time contemplating whether our actions are right or wrong, freeing us to actually act. Second, they protect us from doing bad things out of pettiness or venality. If you have a strong rule of thumb that violence is an inappropriate response to speech you disagree with, you’re less likely to talk yourself into punching an odious speaker in the face when confronted with one.

It’s obviously important to pick the right heuristics. You want to pick the ones that most often lead towards the right outcomes.

I say “heuristics” and “rules of thumb” because the thing about utilitarians and rules is that they always have to be prepared to break them. Rules exist for the common cases. Utilitarians have to be on guard for the uncommon cases, the ones where breaking a rule leads to greater good overall. Having a “don’t cause people to die” rule is all well and good. But you need to be prepared to break it if you can only stop mass death from a runaway trolley by pushing an appropriately sized person in front of it.

Smart seems to think that utilitarianism only comes up for deliberative actions – the ones you take time to think about – and that it shouldn’t necessarily cover your habits. This seems like an abdication to me. Shouldn’t a clever utilitarian, realizing that she only uses utilitarianism for big decisions, spend some time training her reflexes to more often give the correct utilitarian answer, while also training herself to be more suspicious of her rules of thumb and to think ethically more often? Smart gave no indication that he thinks this is the case.

The discussion of rules gives Smart the opportunity to introduce a utilitarian vocabulary. An action is right if it is the one that maximizes expected happiness (crucially, this is a summation across many probabilities and isn’t necessarily the action that will maximize the chance of the happiest outcome) and wrong otherwise. An action is rational if a logical being in possession of all the information you possess would think you to be right if you did it. All other actions are irrational. A rule of thumb, disposition, or action is good if it tends to lead to the right outcomes and bad if it tends to lead to the wrong ones.
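That parenthetical deserves a worked example. The gambles below are my own invention, not Smart’s, but they show how maximizing expected happiness can part ways with maximizing the chance of the happiest outcome:

```python
# "Right" maximizes *expected* happiness, which is not the same as
# maximizing the chance of the single happiest outcome.
# Gambles are lists of (probability, utility) pairs; numbers illustrative.

def expected_utility(gamble):
    return sum(p * u for p, u in gamble)

risky = [(0.5, 100), (0.5, 0)]   # best shot at the happiest outcome (u=100)
safe  = [(1.0, 60)]              # no chance at u=100

print(expected_utility(risky))   # 50.0
print(expected_utility(safe))    # 60.0
# The safe action is "right" in Smart's vocabulary, even though the
# risky one maximizes the probability of the happiest possible outcome.
```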

This vocabulary becomes important when Smart talks about praise, which he believes is an important utilitarian concern in its own right. Praise increases people’s propensity towards certain actions or dispositions, so Smart believes a utilitarian ought to consider whether the world would be better served by more of the same before she praises anything. This leads to Smart suggesting that utilitarians should praise actions that are good or rational even if they aren’t right.

It also implies that utilitarians doing the right thing must be open to criticism if it requires bad actions. One example Smart gives is a utilitarian Frenchman cheating on wartime rationing in 1940s England. The Frenchman knows that the Brits are too patriotic to cheat, so his action (and the actions of the few others who cheat) will probably fall below the threshold for causing any real harm, while making him (and the other cheaters) happier. The calculus comes out positive and the Frenchman believes it to be the right action. Smart acknowledges that this logic is correct, but he points out that by similar logic, the Frenchman should agree that he must be severely punished if caught, so as to discourage others from doing the same thing.

This actually reminds me of something Hannah Arendt brushed up against in Eichmann in Jerusalem while talking about how the moral constraints on people differ from those on states. She gives the example of Soghomon Tehlirian, the Armenian exile who assassinated one of the triumvirate of Ottoman leaders responsible for the Armenian genocide. Arendt believes that it would have been wrong for the Armenian government to carry out the assassination (had such a government even existed at the time), but that it was right for a private citizen to do the deed, especially given that Tehlirian did not seek to hide his crime or resist arrest.

From a utilitarian point of view, the argument would go something like this: political assassinations are bad, in that they tend to cause upheaval, chaos, and ultimately suffering. On the other hand, there are some leaders who the world would clearly be better off without, if not to stop their ill deeds in their tracks, then to strike fear and moderation into the hearts of similar leaders.

Were the government of any country to carry out these assassinations, it would undermine the government’s ability to police murder. But when a private individual does the deed and then immediately gives herself up into the waiting arms of justice, the utility of the world is increased. If she has erred in picking her target and no one finds the assassination justified, then she will be promptly punished, disincentivizing copy-cats. If instead, like Tehlirian, she is found not guilty, it will only be because the crimes committed by the leader she assassinated were so brutal and clear that no reasonable person could countenance them. This too sends a signal.

That said, I think Smart takes his distinctions between right and good a bit too far. He cautions against trying to change the non-utilitarian morality of anyone who already tends towards good actions, because this might fail half-way, weakening their morality without instilling a new one. Likewise, he is skeptical of any attempt to change the traditions of a society.

This feels too much like trying to have your cake and eat it too. Utilitarianism can be criticized because it is an evangelical ethical system that gives results far from moral intuitions in some cases. From a utilitarian point of view, it is fairly clearly good to have more utilitarians willing to hoover up these counter-intuitive sources of utility. If all you care about are the ends, you want more people to care about the best ends!

If the best way to achieve utilitarian ends wasn’t through utilitarianism, then we’re left with a self-defeating moral system. In trying to defend utilitarianism from the weak critique that it is pushy and evangelical, both in ways that are repugnant to all who engage in cultural or individual ethical relativism and in ways that are repugnant to some moral intuitions, Smart opens it up to the much stronger critique that it is incoherent!

Smart by turns seems to rescue some commonly held moral truths when they conflict with utilitarianism while rejecting others that seem no less contradictory. I can hardly say that he seems keen to show utilitarianism is in harmony with how people normally act – he clearly isn’t. But he also doesn’t always go all (or even part of) the way in choosing utilitarianism over moral intuitions.

Near the end of the book, when talking about a thought experiment introduced by one McCloskey, Smart admits that the only utilitarian action is to frame and execute an innocent man, thereby preventing a riot. McCloskey anticipated him, saying: “But as far as I know, only J.J.C. Smart among the contemporary utilitarians is happy to adopt this ‘solution'”.

Smart responds:

Here I must lodge a mild protest. McCloskey’s use of the word ‘happy’ surely makes me look a most reprehensible person. Even in my most utilitarian moods, I am not happy about this consequence of utilitarianism… since any injustice causes misery and so can be justified only as the lesser of two evils, the fewer the situations in which the utilitarian is forced to choose the lesser of two evils, the better he will be pleased.

This is also the man who said (much as I have) that “admittedly utilitarianism does have consequences which are incompatible with the common moral consciousness, but I tended to take the view ‘so much the worse for the common moral consciousness’.”

All this leaves me baffled. Why the strange mixture? Sometimes Smart goes far further than it seems any of his contemporaries would have. Other times, he stops short of what seems to me the truly utilitarian solution.

On the criticism that utilitarianism compels us always in moral action, leaving us no time to relax, he offers two responses. The first is that perhaps people are too unwilling to act and would be better served by being more spurred on. The second is that it may be that relaxing today allows us to do ten times the good tomorrow.

(Personally, I expect the answer is both. Many people could do more than they currently do, while many others risk burnout unless they relax more. There is a reason the law of equal and opposite advice exists. Different people need to hear different things.)

But put this and his support for rules of thumb on one side, and his support for executing the innocent man or his long spiel on how a bunch of people wireheading wouldn’t be that bad (a spiel that convinced me, I might add) on the other, and I’m left with an unclear overall picture. As an all-is-fine defence of utilitarianism, it doesn’t go far enough. As a bracing lecture about our degenerate non-utilitarian ways, it also doesn’t go far enough.

Leaving, I suppose, the sincere views of a man who pondered utilitarianism for much longer than I have – the only explanation that makes sense to me. Sometimes Smart gives a nod to traditional morality because he’s decided it aligns with his utilitarian ethics. Other times, he disagrees. At length. Maybe Smart is a man seeking to rescue what precious moral truths he can from the house fire that is utilitarianism.

Perhaps some of my confusion comes from another confusion, one that seems to have subtly infected many utilitarians. Smart is careful to point out that the atomic belief underlying utilitarianism is general benevolence. Benevolence, note, is not altruism. The individual utilitarian matters just as much – or as little – as everyone else. Utilitarians in Smart’s framework have no obligation to run themselves ragged for another. Trading your happiness for another’s will only ever be an ethically neutral act to the utilitarian.

Or, I suspect, the wrong one. You are best placed to know yourself and best placed to create happiness for yourself. It makes sense to include some sort of bias towards your own happiness to take this into account. Or, if this feels icky to you, you could handle it at the level of probabilities. You are more likely to make yourself happy than someone else (assuming you’ve put some effort towards understanding what makes you happy). If you are 80% likely to make yourself happy for an evening and 60% likely to make someone else happy, your clear utilitarian duty is to yourself.

This is not a suggestion to go become a hermit. Social interactions are very rarely as zero sum as all that. It might be that the best way to make yourself happy is to go help a friend. Or to go to a party with several people you know. But I have seen people risk burnout (and have risked it myself) by assuming it is wrong to take any time for themselves when they have friends in need.

These are all my own thoughts, not Smart’s. For all of his talk of utilitarianism, he offers little advice on how to make it a practically useful system. All too often, Smart retreats to the idea of measuring the total utility of a society or world. This presents a host of problems and raises two important questions.

First, can utility be accurately quantified? Smart tries to show that different ways of measuring utility should be roughly equivalent in qualitative terms, but it is unclear if this holds at a quantitative level. Stability analysis (where you see how sensitive your result is to different starting assumptions) is an important tool for checking the validity of conclusions in engineering projects. I have a hunch that, quantitatively, utilitarian answers to many problems will be highly unstable when a variety of forms of utilitarianism are tried.
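As a minimal sketch of the kind of instability I’m worried about – with every number invented for the purpose – consider scoring the same two hypothetical policies under three different utilitarian flavours:

```python
# A made-up stability check: does the ranking of two policies survive
# a change in the flavour of utilitarianism used to score them?
# Each policy lists the happiness level each person ends up with.

policy_a = [6, 6, 6, 6]         # modest happiness, nobody suffering
policy_b = [12, 12, 12, -6]     # great for most, awful for one person

flavours = {
    "total":    lambda pop: sum(pop),
    "average":  lambda pop: sum(pop) / len(pop),
    # Rough negative utilitarianism: only suffering counts; higher is better.
    "negative": lambda pop: sum(min(h, 0) for h in pop),
}

for name, score in flavours.items():
    a, b = score(policy_a), score(policy_b)
    print(f"{name:8s}: A={a:6.2f}  B={b:6.2f}  -> prefers {'A' if a > b else 'B'}")

# total and average prefer B (30 > 24; 7.5 > 6.0), but negative
# utilitarianism prefers A (0 > -6): the ranking is not stable.
```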

Second, how should we deal with utility in the future? Smart claims that beyond a certain point we can ignore side effects (as unintended good side effects should cancel out unintended ill side effects; this is especially important when it comes to things like saving lives) but that doesn’t give us any advice on how we can estimate effects.

We are perhaps saved here by the same collapse that aligned regular utilitarians with negative utilitarians. If we cannot quantify joy, we sure can quantify misery. Doctors can tell you just how much quality of life a disease saps (there are tables for this), not to mention the chance that a disease will end a life outright. We know the rates of absolute poverty, maternal death, and malaria prevalence. There is more than enough misery in the world to go around, and utilitarians who focus on ending misery are in no danger of running out of ethical duties any time soon.

(If ending misery is important to you, might I suggest donating a fraction of your monthly income to one of GiveWell’s top recommended charities? These are the charities that most effectively use money to reduce suffering. If you care about maximizing your impact, GiveWell is a good way to do it.)

Although speaking of the future, I find it striking how little utilitarianism has changed in the fifty-six years since Smart first wrote his essay. He pauses to comment on the risk of a recursively self-improving AI and talk about the potential future moral battles over factory farming. I’m part of a utilitarian meme group and these are the same topics people joke about every day. It is unclear if these are topics that utilitarianism predisposes people to care about, or if there was some indirect cultural transmission of these concerns over the intervening years.

There are many more gems – and frustrations – in Smart’s essay. I can’t cover them all without writing a pale imitation of his words, so I shan’t try any more. As an introduction to the different types of utilitarianism, this essay was better than any other introduction I’ve read, especially because it shows all of the ways that various utilitarian systems fit together.

As a defense of utilitarianism, it is comprehensive and pragmatic. It doesn’t seek to please everyone and doesn’t seek to prove utilitarianism. It lays out the advantages of utilitarianism clearly, in plain language, and shows how the disadvantages are not as great as might be imagined. I can see it being persuasive to anyone considering utilitarianism, although in this it is hampered by its position as the first essay in the collection. Anyone convinced by it must then read through another seventy pages of arguments against utilitarianism, which will perhaps leave them rather less convinced.

As a work of academic philosophy, it’s interesting. There’s almost no meta-ethics or metaphysics here. This is a defense written entirely on its own terms, without recourse to underlying frameworks that might be separately undermined. Smart’s insistence on laying out his arguments plainly leaves him little room to retreat (except around average vs. total happiness). I’ve always found this a useful type of writing; even when I don’t agree, the ways that I disagree with clearly articulated theses can be illuminating.

It’s a pleasant read. I’ve had mostly good luck reading academic philosophy. This book wasn’t a struggle to wade through and it contained the occasional amusing turn of phrase. Smart is neither dry lecturer nor frothing polemicist. One is almost put in mind of a kindly uncle, patiently explaining his way through a complex, but not needlessly complicated, subject. I highly recommend reading it and its companion.

Ethics, Philosophy

Against Moral Intuitions

[Content Warning: Effective Altruism, the Drowning Child Argument]

I’m a person who sometimes reads about ethics. I blame Catholicism. In Catholic school, you have to take a series of religion courses. The first two are boring. Jesus loves you, is your friend, etc. Thanks school. I got that from going to church all my life. But the later religion classes were some of the most useful courses I’ve taken. Ever. The first was world religions. Thanks to that course, “how do you know that about [my religion]?” is a thing I’ve heard many times.

The second course was about ethics, biblical analysis, and apologetics. The ethics part hit me the hardest. I’d always loved systematizing and here I was exposed to Very Important Philosophy People engaged in the millennia long project of systematizing fundamental questions of right and wrong under awesome sounding names, like “utilitarianism” and “deontology”.

In the class, we learned commonly understood pitfalls of ethical systems, like that Kantians have to tell the truth to axe murderers and that utilitarians like to push fat people in front of trains. This introduced me to the idea of philosophical thought experiments.

I’ve learned (and written) a lot more about ethics since those days and I’ve read through a lot of thought experiments. When it comes to ethics, there seem to be two ways a thought experiment can go: it can show that an ethical system conflicts with our moral intuitions, or it can show that an ethical system fails to universalize.

Take the common criticism of deontology, that the Kantian moral imperative to always tell the truth applies even when you could achieve a much better outcome with a white lie. The thought experiment that goes with this point asks us to imagine a person with an axe intent on murdering our best friend. The axe murderer asks us where our friend can be found and warns us that if we don’t answer, they’ll kill us. Most people would tell the murderer a quick lie, then call the police as soon as they leave. Deontologists say that we must not lie.

Most people have a clear moral intuition about what to do in a situation like that, a moral intuition that clashes with what deontologists suggest we should do. Confronted with this mismatch, many people will leave with a dimmer view of deontology, convinced that it “gets this one wrong”. That uncertainty opens a crack. If deontology requires us to tell the truth even to axe murderers, what else might it get wrong?

The other way to pick a hole in ethical systems is to show that the actions that they recommend don’t universalize (i.e. they’d be bad if everyone did them). This sort of logic is perhaps most familiar to parents of young children, who, when admonishing their sprogs not to steal, frequently point out that they have possessions they cherish, possessions they wouldn’t like stolen from them. This is so successful because most people have an innate sense of fairness; maybe we’d all like it if we could get away with stuff that no one else could, but most of us know we’ll never be able to, so we instead stand up for a world where no one else can get away with the stuff we can’t.

All of the major branches of ethics fall afoul of either universalizability or moral intuitions in some way.

Deontology (doing only things that universalize and doing them with pure motives) and utilitarianism (doing whatever leads to the best outcomes for everyone) both tend to universalize really well. This is helped by the fact that both of these systems treat people as virtually interchangeable; if you are in the same situation as I am, these ethical systems would recommend the same thing for both of us. Unfortunately, both deontology and utilitarianism have well known cases of clashing with moral intuitions.

Egoism (do whatever is in your self-interest) doesn’t really universalize. At some point, your self-interest will come into conflict with the self-interest of other people and you’re going to choose your own.

Virtue ethics (cultivating virtues that will allow you to live a moral life) is more difficult to pin down and I’ll have to use a few examples. At first glance, virtue ethics tends to fit in well with our moral intuitions and universalizes fairly well. But virtue ethics has as its endpoint virtuous people, not good outcomes, which strikes many people as the wrong thing to aim for.

For example, a utilitarian may consider their obligation to charity to exist as long as poverty does. A virtue ethicist has a duty to charity only insofar as it is necessary to cultivate the virtue of charity; their attempt to cultivate the virtue will run the same course in a mostly equal society and a fantastically unequal one. This trips up the commonly held moral intuition that the worse the problem, the greater our obligation to help.

Virtue ethics may also fail to satisfy our moral intuitions when you consider the societal nature of virtue. In a world where slavery is normalized, virtue ethicists often don’t critique slavery, because their society has no corresponding virtue for fighting against the practice. This isn’t just a hypothetical; Aristotle and Plato, two of the titans of virtue ethics, defended slavery in their writings. When you have the meta moral intuition that your moral intuitions might change over time, virtue ethics can feel subtly off. “What virtues are we currently missing?” you may ask yourself, or “how will the future judge those considered virtuous today?”. In many cases, the answers to these questions are “many” and “poorly”. See the opposition to ending slavery, opposition to interracial marriage, and opposition to same-sex marriage for salient examples.

It is hard to attack virtue ethics with moral intuitions because virtue ethics is remarkably well suited to them. This shouldn’t be too surprising. Virtue ethics and moral intuitions arose in similar circumstances – small, closely knit, homogeneous groups of humans with very limited ability to affect their environment or effect change at a distance.

Virtue ethics works best when dealing with small groups of people in which everyone is mutually known. When you cannot help someone half a world away, all that really matters is that you have the virtue of charity developed enough that a neighbour can ask for your help and receive it. Most virtue ethicists would agree that there is virtue in being humane to animals – after all, cruelty to animals often shows a penchant for cruelty to humans. But the virtue ethics case against factory farming is weak from the perspective of the end consumer. Factory farming is horrifically cruel. But it is not our cruelty, so it does not impinge on our virtue. We have outsourced this cruelty (and many others) and so can be easily virtuous in our sanitized lives.

Moral intuitions are the same way. I’d like to avoid making any claims about why moral intuitions evolved, but it seems trivially true to say that they exist, that they didn’t face strong negative selection pressure, and that the environment in which they came into being was very different from the modern world.

Because of this, moral intuitions tend to only be activated when we see or hear about something wrong. Eating factory farmed meat does not offend the moral intuitions of most people (including me), because we are well insulated from the horrible cruelty of factory farming. Moral intuitions are also terrible at spurring us to action beyond our own immediate network. From the excellent satirical essay Newtonian Ethics:

Imagine a village of a hundred people somewhere in the Congo. Ninety-nine of these people are malnourished, half-dead of poverty and starvation, oozing from a hundred infected sores easily attributable to the lack of soap and clean water. One of those people is well-off, living in a lovely two-story house with three cars, two laptops, and a wide-screen plasma TV. He refuses to give any money whatsoever to his ninety-nine neighbors, claiming that they’re not his problem. At a distance of ten meters – the distance of his house to the nearest of their hovels – this is monstrous and abominable.

Now imagine that same hundredth person living in New York City, some ten thousand kilometers away. It is no longer monstrous and abominable that he does not help the ninety-nine villagers left in the Congo. Indeed, it is entirely normal; any New Yorker who spared too much thought for the Congo would be thought a bit strange, a bit with-their-head-in-the-clouds, maybe told to stop worrying about nameless Congolese and to start caring more about their friends and family.

If I can get postmodern for a minute, it seems that all ethical systems draw heavily from the time they are conceived. Kant centred his deontological ethics in humanity instead of in God, a shift that makes sense within the context of his time, when God was slowly being removed from the centre of western philosophy. Utilitarianism arose specifically to answer questions around the right things to legislate. Given this, it is unsurprising that it emerged at a time when states were becoming strong enough and centralized enough that their legislation could affect the entire populace.

Both deontology and utilitarianism come into conflict with our moral intuitions, those remnants of a bygone era when we were powerless to help all but the few directly surrounding us. When most people are confronted with a choice between their moral intuitions and an ethical system, they conclude that the ethical system must be flawed. Why?

What causes us to treat ancient, largely unchanging intuitions as infallible and carefully considered ethical systems as full of holes? Why should it be this way and not the other way around?

Let me try to turn your moral intuitions on themselves with a variant of a famous thought experiment. You are on your way to a job interview. You already have a job, but this one pays $7,500 more each year. You take a short-cut to the interview through a disused park. As you cross a bridge over the river that bisects the park, you see a child drowning beneath you. Would you save the child, even if it means you won’t get the job and will have to make do with $7,500 less each year? Or would you let her drown and continue on your way to the interview? Our moral intuitions are clear on this point. It is wrong to let a child die so that we can have more money in our pockets each year.

Can you imagine telling someone about the case in which you don’t save the child? “Yeah, there was a drowning child, but I’ve heard that Acme Corp is a real hard-ass about interviews starting on time, so I just waltzed by her.” People would call you a monster!

Yet your moral intuitions also tell you that you have no duty to prevent the malaria-linked deaths of children in Malawi, even though you would be saving a child’s life at exactly the same cost. The median Canadian family income is $76,000. If a family making this amount donated 10% of its income to the Against Malaria Foundation, it would prevent about one death from malaria every year or two. No one calls you monstrous for failing to prevent these deaths, even though the costs and benefits are exactly the same. Ignoring the moral worth of people halfway across the world is practically expected of us and is directly condoned by our distance-constrained moral intuitions.
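The arithmetic behind that claim is simple; the one real assumption is the cost per life saved, which I’ve left as a range rather than a point estimate because GiveWell’s figures for the Against Malaria Foundation have shifted over the years:

```python
# Rough arithmetic behind the claim. The cost-per-life figures are
# assumed placeholders in the neighbourhood of GiveWell's historical
# estimates for the Against Malaria Foundation, not exact numbers.
annual_donation = 76_000 * 0.10      # $7,600/year from a median family

for cost_per_life in (5_000, 7_500, 15_000):
    years_per_life = cost_per_life / annual_donation
    print(f"${cost_per_life:,}/life -> one death prevented "
          f"every {years_per_life:.1f} years")

# At costs between roughly $7,500 and $15,000 per life saved, this
# works out to one death prevented "every year or two".
```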

Your moral intuitions don’t know how to cope with a world where you can save a life half the world away with nothing more than money and a well-considered donation. It’s not their fault. They didn’t develop for this. They have no way of dealing with a global community or an interconnected world. But given that, why should you trust the intuitions that aren’t developed for the situation you find yourself in? Why should you trust an evolutionary vestige over elegant and well-argued systems that can gracefully cope with the realities of modern life?

I’ve chosen utilitarianism over my moral intuitions, even when the conclusions are inconvenient or truly terrifying. You can argue with me about what moral intuitions say all you want, but I’m probably not going to listen. I don’t trust moral intuitions anymore. I can’t trust anything that fails to spur people towards the good as often as moral intuitions do.

Utilitarianism says that all lives are equally valuable. It does not say that all lives are equally easy to save. If you want to maximize the good that you do, you should seek out the lives that are cheapest to save and thereby save as many people as possible.

To this end, I’ve taken the “Try Giving” pledge. Last September, I promised to donate 10% of my income to the most effective charities for a year. This September, I’m going to take the full Giving What We Can pledge, making my commitment to donate to the most effective charities permanent.

If utilitarianism appeals to you and you have the means to donate, I’d like to encourage you to do the same.

Epistemic Status: I managed to talk about both post-modernism and evolutionary psychology, so handle with care. Also, Ethics.

Philosophy

Cutting the Gordian Knot: Bad Solutions to Good Paradoxes

Russell’s Paradox

Image Credit: Donald on Flickr

In a village, the barber shaves everyone who does not shave himself, but no one else. Who shaves the barber?

Imagine The Barber as similar to The Pope. When he is in his shop, cutting hair, he is The Barber and has all of the powers that entails, just as The Pope only possesses the full power of papacy when speaking “from the chair”. When The Barber isn’t manifesting this mantle, he’s just Glen, the nice fellow down the lane. Glen shaves his own beard. The Barber therefore doesn’t have to.

Alternatively, the barber is a woman.

Omnipotence Paradox

Image Credit: Tim Green on Flickr

Can God create a rock so large that he himself cannot lift it?

It depends.

In Christian theology, God is often considered all-knowing, all-powerful, and all-loving. Some theologians dispute each of these, but most agree he has at least some mix of those three attributes. It turns out the answer to this paradox depends on which theologians are right.

This question is only interesting if God is all-powerful. If God isn’t all-powerful, then the answer will be determined by which is greater: his power of creation, or his power to manipulate creation. That’s a boring answer, so let’s focus on the cases where God is all-powerful.

If God is all-knowing, then we’ll probably be left unsatisfied. God will know if he can or cannot create the boulder, so he’ll probably feel no need to test if he can.

If God is not all-knowing but is all-loving, then the question will only be answered if God cannot lift the first boulder he creates. If he can lift the first one, he will quickly realize that he could end up spending all of eternity trying to make a big enough boulder on the off chance that this is the one he finally cannot lift. An all-loving God would not abandon his flock for such a meaningless task, so we’ll never see the answer.

If God is neither all-knowing nor all-loving and has at least a bit of curiosity, then we should be able to eventually observe him trying to create a boulder large enough that he cannot lift it. This God won’t know the answer and wouldn’t necessarily care that finding out requires abandoning all of his other duties.

Given that this question was first posed right before the crusades, I believe that we’re experiencing the third scenario. The mere act of raising this paradox caused God to turn his face away from the world and worry about more interesting problems than those caused by a bunch of jumped up apes.

Zeno’s Paradox

Image Credit: Miranche on Wikimedia Commons

If you want to go somewhere, you first have to get halfway there. But to get to the midpoint, you have to go a quarter of the way. But to get to a quarter… When you subdivide like this, you’ll see that there are an infinite number of steps you must take to go anywhere. You cannot accomplish an infinite number of tasks in a finite time; therefore, movement is impossible.

It’s a common mistake to think that space is infinitely subdividable. In fact, there is a limit to how finely you can cut space. You cannot cut the universe more finely than 1.61 × 10⁻³⁵ m, a length called the Planck length. The Planck length is to the width of a hair as the width of a hair is to the whole universe. It’s an unimaginably tiny length.

An important property of halving things: you get really small numbers very quickly. If you halve a distance of 1m a mere 116 times, you’ll have cut the distance as finely as it is possible to cut anything. At this point, you can halve the distance no more and you can proceed to your destination, one Planck length at a time.
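The 116 figure is easy to verify (taking the Planck length as roughly 1.616 × 10⁻³⁵ m):

```python
import math

PLANCK_LENGTH = 1.616e-35   # metres, approximately

# How many times can you halve one metre before dropping below it?
halvings = math.log2(1.0 / PLANCK_LENGTH)
print(halvings)             # ~115.6, so the 116th halving crosses it
```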

Sorites Paradox

Image Credit: David Stanley on Flickr

There is a pile of sand in front of you. If you remove a grain of sand from it, it will still be a pile. If you remove another, it will still be a pile. But if you keep removing them, eventually it won’t be. When does it stop being a pile?

I’m emailing ISO and NIST about this one. I expect to have an answer after ten years and three hundred committee meetings.

The Ship of Theseus

Image Credit: Verity Cridland on Flickr

The Athenian Theseus bequeathed his ship to the city. As the ship aged, the Athenians kept it in perfect condition by replacing any planks and fittings that rotted away. Eventually, the entire ship had been replaced. This caused all of the philosophers in Athens to wonder: was it still Theseus’s Ship?

We could leave this one to ISO as well, but luckily as a Canadian I have another recourse.

The Comprehensive Economic and Trade Agreement between the EU (of which Greece is a member) and Canada considers a car “Made in Canada” or “Made in Europe” if at least 50% of the car came from there and at least 20% of the manufacturing occurred there.

Treating boats with a similar logic, we can see that as long as the Athenians were using local materials and labour (and weren’t outsourcing to the Persians or Phoenicians), the ship would count as “Made in Greece”. Since the paradox specifically states that the Athenians were doing all the restoring, this is probably a safe assumption.

If we take this and assume that Theseus had a solid grounding in trademark law – which would allow us to assume that he made his ship a protected brand like Harris Tweed, Kobe beef, Navajo, and Scotch – then we can see that the ship would still fall under the Theseus’s Ship™ brand. Most protected brands require a certain geographic origin, but we’ve already been over that in this case.

Even when philosophers argue that the boat is no longer Theseus’s Ship, they have to admit it is Theseus’s Ship™.

Unexpected Hanging Paradox

Image Credit: Adam Clarke on Flickr

A prisoner is sentenced to hanging by a judge. The judge stipulates that the sentence will be carried out on one of the days in the next week, that it will be carried out before noon, and that it must be a surprise to the prisoner.

The prisoner smirks, believing he will never be hanged. He knows that if it is Thursday at noon and he hasn’t been hanged, then the hanging would have to be on Friday. But then it wouldn’t be a surprise. So logically, he must be hanged before Friday. By the same token, he can’t be hanged on Thursday, because if he hasn’t been hanged by noon on Wednesday, then a hanging on Thursday won’t be a surprise. Following this logic through, the prisoner could only be hanged on the Monday. But then it would be no surprise at all!

This is indeed a problem if the judge is as good at logic as the prisoner. But if the judge remains blissfully unaware of logical induction, there is no paradox here. The judge will assume that by picking a day at random she can surprise the prisoner. The prisoner will no doubt be quite surprised when he is hanged.

This becomes more likely if we set the problem in America, where some judges are elected and therefore aren’t governed by anything so limiting as qualifications.

Ethics, Philosophy, Politics

What use a Monopoly on Violence?

Remember Horseshoe Theory? It’s the observation that, in many ways, the extremist wings of political movements resemble each other more than they resemble centrists or even their own more moderate brethren. We see this in anti-Semitism, for example. In any given week this year, you’re about as likely to see anti-Semitism come from Stormfront… or the British Labour Party.

I’ve been thinking about horseshoe theory in light of another issue: the police. Let me explain.

Like most denizens of the internet, I’ve been exposed to libertarians of various persuasions. One common complaint I’ve seen among these libertarians is a belief that the state has an illegitimate monopoly on violence. This is most frequently bundled with calls to abolish the police specifically and government in general. Now I see calls to abolish the police coming from the left.

I disagree strongly with calls to abolish the police. It’s not that I’m a great fan of the police: I’m a member of the Canadian Civil Liberties Association and I believe in strong checks and balances on law enforcement power. It’s just that one lesson we’ve learned repeatedly over the past century is that radical change to public institutions rarely goes smoothly. We should be cautious when people suggest tearing up everything we already have without really planning for what will happen next.

So despite high profile incidents of unjustified police violence, I support the state’s monopoly on the means of violence. Beyond simple caution, here are my reasons.

Convenience

Violence has been with us forever. War is rightfully one of the four horsemen of the apocalypse, one of those almost primal forces responsible for killing so many humans. Trying to reduce violence is important. But it isn’t the only fight. Any policy proposal sees diminishing returns. Beyond a certain point, effort that could be spent reducing violence could more effectively improve lives through other means (for example, by fighting malaria, or global warming).

We could reduce violence conducted by the state by abolishing the police. But state violence is a useful lever for other policy priorities. Trying to reach other goals (like economic equality or public order) is often worth some risk of state violence.

This process of trading-off must be undertaken by each body politic, as willingness to tolerate risk differs between countries. Canada, America, and Switzerland, for example, all have accepted higher rates of gun violence than other developed countries in exchange for more freedom to own and use firearms.

People generally have a right to own whatever they want to own. People also have a right not to be randomly shot. With guns, these two rights can be in conflict. The more people who have guns, the more likely I am to be randomly shot. Society has to come together and negotiate a trade-off between these two rights that they can (collectively) stomach. The weird thing about these negotiated trade-offs is that they can look ridiculous, even from inside of one (ask any American liberal how they feel about gun rights and you’ll see what I mean). It is certainly possible to have values such that no amount of firearm ownership is justifiable if it leads to deaths. Just as it is possible to have values such that no amount of intoxicant usage is permissible if it leads to death. [1]

Like intoxicants or guns, society must negotiate on the amount of violence it will permit. These negotiations are most convenient when they can be done with a single organization, or a single umbrella group. Consider, for example, the relative difficulty of abolishing the death penalty (one form of violence undertaken by states) in Singapore, America, and Syria.

In Singapore, abolishing the death penalty would be relatively simple (not to be confused with easy). There is one organization (the city-state) with an absolute monopoly on violence. To abolish the death penalty, lobbyists can focus their effort on one group of people. They will probably be opposed, because any organization that wishes to keep the death penalty will also know exactly whom to lobby. This isn’t so much a strength or weakness as it is the endpoint of yet another negotiation. Singapore has chosen a system of government where people only need to worry about one set of rules. This is a sensible choice for a small, densely populated island without a lot of local variation.

In America, there are fifty-one authorities that must be lobbied in order to abolish the death penalty. Each state has a limited monopoly on violence solely within its borders (and therefore controls crime and punishment within them). But there is also a federal government that has a separate limited monopoly on violence, in this case, violence across state lines or against the union as a whole. In such a system, it is perhaps easier for opponents of certain types of violence to see them abolished in one region or another (see, for example, the death penalty in Massachusetts), but much harder to see them abolished across the nation as a whole.

I should mention that this isn’t just a matter of scale or population size. Canada is also a federal democracy, but the monopoly on violence is held solely by the federal government. Therefore, there was only one organization that had to be convinced to end the death penalty.

Imagine now trying to abolish the death penalty in Syria. You would have to negotiate with the Assad Regime, the Kurds, Daesh, Al-Nusra, and the scores of small rebel groups that hold and administer territory. Not only will you face difficulty in each negotiation, you will face difficulty even trying to negotiate, because there is no umbrella organization with the means to force smaller subdivisions of political power to allow you freedom of movement or guarantee minimum rights. This is a different situation than in America, where the federal government uses (what is ultimately) the threat of violence to ensure that states allow the free flow of commerce, ideas, and people.

A single organization (or set of franchises) with a monopoly on violence doesn’t just make it easier to target specific cases of violence. It can in fact reduce the overall amount of violence in a society simply by virtue of existing. This is the other reason that Syria sees much more violence than polities where there is an organization that holds a monopoly on violence. As long as no organization exists to use the threat of violence to force other actors to refrain from violence – to jealously guard its own monopoly on violence, as it were – then these actors will use violence in disagreements with each other.

In a civil war, the central government loses its monopoly on violence and other actors attempt to use violence to gain their own monopoly. We see the same pattern of increasing violence in the Mexican drug trade. Aggressive government enforcement broke cartel monopolies on local violence, allowing various groups to fight to establish their own hegemony.

In the context of police violence, having one group to negotiate with is extremely useful. It means that there’s only one battle to be fought. And in constitutional democracies, it gives reformers a powerful weapon by way of the court system. The courts may force (using the threat of violence) individual police departments to conform to certain practices. Imagine a country instead with only private security forces and a court system without access to the threat of violence. It would be impossible to enforce any rulings on these private security forces.

Abolishing the police will not abolish people’s desire for protection. Leftists should be scared of unaccountable private security firms. Anyone who loves peace and order should be scared of the conflicts between these firms.

17th Century Philosophy

There is a very short list of political philosophers whose works have shaped and guided revolutions. To have written works that inspire such drastic change in society doesn’t require or even suggest correctness. But it does suggest an understanding of the values that people hold closest to their hearts.

The 17th century English philosopher John Locke is on that list. I’ve written about Locke in the context of justice before, but his ruminations on the state of nature are also applicable here.

During Locke’s life, there was open debate among philosophers as to the “state of nature” – the shape human existence would take without government or laws. The state of nature was an artificial construct. It shares more with the ideal zero energy state used in molecular dynamics simulations than it does with prehistoric societies; it’s a baseline to compare political arrangements with, much as zero energy states are a baseline to compare molecular arrangements with.

Hobbes famously claimed that in the state of nature life was “solitary, poor, nasty, brutish, and short” – a war of all against all. On the other hand, Jean-Jacques Rousseau believed that the state of nature was the only state of true freedom; to him it was much preferable to life in the eighteenth century.

John Locke held a different view. He believed that the state of nature was generally pleasant – in the state of nature, all people had the rights “to order their actions, and dispose of their possessions and persons, as they think fit, within the bounds of the law of nature.” These “natural laws” might be broken by some people, Locke reasoned, at which point all people would have a right to punish them for their transgressions (as you can see, Locke was a Christian philosopher and his work is riddled with references to The Almighty; a less religious appeal to natural law would be an appeal to the moral impulses that seem to be more or less universal).

Locke did see one problem with this set-up. In most cases, those most likely to pursue justice would be the aggrieved party. While Locke believed that natural law gave everyone a right to punish wrongdoers, he also believed that in practice punishment would come from those who had been wronged. Locke understood that people were imperfect and not always capable of mercy or proportionality. So Locke reasoned that justice could not exist without society and the people society appoints to mete it out.

Locke’s judges would by necessity need some force of bailiffs to assist them. There are an enormous number of practical tasks that need to be done for judges to do their jobs. Suspects must be apprehended and interrogated, witnesses interviewed, physical evidence collected, and crimes investigated. These tasks must also be undertaken by someone other than the aggrieved party for there to be any chance at fairness. This is where police come in.

I don’t believe that the police are the only thing preventing us from existing in Hobbes’s state of nature. People are basically good and just. But they are also flawed and imperfect, closer to monkeys than gods. I also don’t believe in Rousseau’s claims of an earthly paradise; institutions do too much good for me to believe that life would improve without them (although, had I lived when he did, I may have felt differently). Locke, I believe, got it right. Without government, most people would be good, help their neighbours, and continue as they always had. But some people would take what isn’t theirs or hurt others.

I’ve heard total equality bandied about as a solution to the problem of violence and theft in the absence of the police. The logic goes that if everyone had total equality, we wouldn’t need police. This isn’t a real solution. Inequality currently exists. There is no way to redistribute possessions that isn’t coercive. You’re not going to convince Peter Thiel to give away his possessions out of the goodness of his heart (he doesn’t have one, except in the literal sense). The only way to force him to give money away is through the threat of force. This is impossible without an organization capable of carrying through on that threat. All legislation, whether it’s criminal law, CO2 emissions targets, or consumer protection, relies ultimately on the threat of violence against those who don’t follow it. Redistributive legislation – taxation – is no different.

Perhaps we could achieve equality and then abolish the police. But equality is a disequilibrium. Even if all skills were equally in demand (they aren’t) and all people equally capable of work (they aren’t), innate differences in desire for work or possessions would remain. Some people would work more – and presumably be rewarded more – than others. Even at the height of collectivism in communist Russia, with private ownership of any means of production outlawed, people found ways to game the system or took to the black market to accrue wealth. Equality can’t last without someone to enforce it, violently if it comes to that. You can call these enforcers whatever you want, but they will always be essentially ‘the police’.

Leaving that problem aside, there is no evidence that equality would stop all crime. In a society that undergoes radical transformation, there would be sore losers, willing to fight to get their old power back. There would also be all the crime that has nothing to do with wealth or possessions. Equality can’t stop murders committed by jealous spouses, road rage, hate crimes, vicious bullying, and a host of other crimes that draw their motive from something other than worldly possessions.

So this society without police would have to deal with crime. John Locke’s theories on the state of nature show us how this would fail. Justice, if it could even be called that, would become a private good, available to those with the resources to pay for it (admittedly, not a problem if you’re violently enforcing equality) or the wherewithal to do it themselves.

But would it really be justice? If society wanted to maximize the number of wrongdoers it punished, then it wouldn’t bother with things like “reasonable doubt” or “right to an attorney”. One of the little discussed uses of the police is to make it look like things are being done whenever there is a scare around criminal activity, so as to prevent public panic. Police might authorize extra patrols not to protect the public, but to protect people matching the description of alleged criminals from vigilante “justice”.

Without the police, people would have to seek their own justice. And they’d do it poorly. Given that society (at least, every society I know of) is racist, can we really expect individual people to do it any better than the police? Imperfect due process (and I know that due process counts for far less when you aren’t white) is surely better than none. Without the police, people of colour face a nation of George Zimmermans.

Recent Statistics

FiveThirtyEight.com has looked at violent crime data out of Chicago after the video of Laquan McDonald’s murder was released. They found a (statistically) significant increase in violent crimes, correlated with a decrease in proactive police behaviour (here measured by a decrease in police patrols and stops). They weren’t able to tease out the root cause of the decrease in proactive policing (it could have been the release of the new video or an increase in the amount of paperwork officers now must do after interacting with the public). The increase in violent crime bucks seasonal trends and can’t be blamed on a warmer than average winter – winters even warmer than the last one have seen no large spike in deaths.

This should not be surprising in light of the earlier sections. When the police are proactive, it is clear that the state has a monopoly on violence and is willing to use it. But as the police retreat and arrests go down, we see both the effects of different groups competing to fill the void and reprisal killings (which are much more difficult when suspects are behind bars).

I don’t wish to say that the answer to all violent crime is more police patrols and more random stops. As the FiveThirtyEight article points out, there are costs associated with proactive policing. Sometimes police tactics labelled as proactive are also unconstitutional. Opposing unconstitutional police tactics – even if they reduce violence – is one of the trade-offs around violence I discussed earlier and one I strongly endorse. If alienation, segregation, and police violence are the price we pay for a reduction in violence through proactive policing, then I believe it is a price not worth paying. Some police tactics should be off the table in a free and democratic society, even if they provide short term gains.

But if, on the other hand, proactive policing saves lives without damaging communities and breeding alienation, then I would oppose rolling back these policies. One article – even from an outlet renowned for its statistical acumen – isn’t enough to drive public policy. More research on the costs and benefits of various policing programs, including controlled studies, is desperately needed. To this end, the lack of a centralized police shooting database in the United States is both a national tragedy and a national disgrace.

A Legitimate State Monopoly Over the Means of Violence

The modern definition of a state acknowledges that it must have a monopoly on the means of violence within a territory. Without this monopoly, a state is powerless to do most of the things we associate with a state. It cannot enforce contracts or redistribute wealth. It cannot protect the environment or private property rights. I have yet to see a single serious policy proposal that adequately addresses how these could be accomplished without police.

This is all not to say that the current spate of police shootings is tolerable or should be tolerated. Free and open societies can and must expect better behaviour from those they empower with the ability to use violence in undertaking the aims of the state.

As citizens of a free and democratic society, we should continue to pressure our leaders to accept and perpetrate less violence. But we also must acknowledge that the bedrock our society is built on is the threat of physical force. This doesn’t make our society inherently illegitimate, but it does mean we must always be contemplative whenever we empower anyone to use that force – even if they’re people we otherwise agree with and especially when force is used primarily against the most vulnerable members of society.

We should fight for a society where the government holds only a legitimate monopoly on the means of violence. Where violence is used only when truly necessary and not a moment sooner. Where security forces are truly subservient to civilian leaders. Where police shootings of unarmed civilians are an aberration, not a regular occurrence. We aren’t there yet. But we could be.

Epistemic Status: Ethics


[1] Trade-offs between different rights are the proper territory of legislation and acknowledging this is separate from the harmful moral relativism that has infected leftist rhetoric on international relations. There is a distinct difference between trade-offs among competing rights and a fearful refusal to acknowledge universal and inalienable human rights.

 

Ethics, Philosophy

Precedent Utilitarianism: A Primer

Preamble

When I first heard about deontology, I was intrigued. Here was an ethical system that could break you, if you weren’t careful. I was young and hadn’t really systematized my morality yet, but I dearly wanted to. I’d just learned about the stages of moral development and I felt a keen need to be at Kohlberg VI.

Time passed and I forgot that systematizing was a goal of mine. While I aimed for consistency across my moral principles, I did this largely blindly, lacking a single meta-principle to guide me.

Last year, I read Eichmann in Jerusalem: A Report on the Banality of Evil, the (in)famous book by Hannah Arendt. The only ethics mentioned in the book is Kantian and Arendt herself is hard to pigeonhole into any one system. But reading the book set my mind afire. By the time I finished it, I knew what kind of ethical system I wanted to drive me. I just didn’t have a name for it.

Arendt had shown the weaknesses in deontology, shown how someone who didn’t think, who just followed the right as their society defined it could, with no irony, claim to be a Kantian while committing the most unimaginable crimes. At the same time, Arendt’s response to the judges, her justification for Eichmann’s death felt wrong to me. I never disagreed with Arendt more than when she said: “certain procedures… important in [their] own right can never be permitted to overrule justice, the law’s chief concern.”

I filled up the whole last page of Eichmann in Jerusalem with a cramped response to Arendt. I felt like her conception of justice was little better than vengeance and that justice couldn’t exist without the procedures she had just disparaged.

Eichmann in Jerusalem left me with nagging questions and an empty space I yearned to fill. It would be a while before I had my answers.

First and Second Order Utilitarianism

The summer after reading Eichmann in Jerusalem, I flirted with utilitarianism. I wasn’t entirely satisfied with it. It’s not that I mind debating torture vs. dust specks or trying to select a value function. My problems were partially caused by the fact that I’m a romantic and utilitarianism is cold and utilitarian. But it’s also that I continue to worry about systems and precedents. For me, too many discussions about utilitarianism stick to the object level. I wanted to talk about the ripple effects of every decision and too often found there was no room to.

One day, I found myself looking for high value books to read. One option was Utilitarianism: For and Against, a book I didn’t read until long after this post was published. Luckily, even before I read it, it led me to the concept of precedent utilitarianism. Finally, I had a name to put to the nagging voice inside of my head. I read a quick summary of precedent utilitarianism and knew that I had the ethical system I was looking for.

I’ve previously written an overview of several types of utilitarianism. At the end, I mention that they’re all what I call “first-order utilitarianism”.

Precedent Utilitarianism is a form of second-order utilitarianism. It doesn’t just look at first-order consequences of an action. It looks at the precedents an action sets.

Precedents

I wrote an essay about justice that focused on precedents. In it, I make the claim that “precedent is what changes actions from unprecedented to normal”. This may sound facile or even tautological. But there is a deeper point I’m driving at. For every action now considered normal, there was someone who was the first to do it. In Eichmann in Jerusalem, Hannah Arendt mentions something similar in passing. She believes that the recurrence of any crime is more likely than its invention.

Many actions are done once, then never again. Or only a few times, by a few isolated groups. Others get repeated and copied until they become the new normal. The Manson family murders did not lead to a sudden outbreak of murderous cults. But the actions of Marius and Sulla led almost directly to the Triumvirate and the unravelling of Roman democracy.

What makes Manson’s actions different from Sulla’s? It isn’t just that murder is more horrific than dictatorship. A cursory glance at the history of the last half-century of ethnic cleansings lends some credence to Arendt’s belief that after the Nazis systematized genocide many others would follow in their footsteps.

Why some crimes and not others? I think the answer to this question lies in part with the influence or charisma of the person setting the precedent. Hitler committed his grievous crimes at the helm of a country. Sulla was surrounded by patricians who wished that it had been them who seized Rome. Charles Manson has been influential in certain underground scenes. But he never led a country or commanded more than thirty people.

So what we currently know about precedents is: they can be set by any action and are more likely to be set by people who command a significant following. Oh and one final thing. In common law jurisdictions, every single judicial ruling sets a legal precedent, which is enforceable on all lower courts within the same jurisdiction. This is the most literal manifestation of a precedent, an action that is inscribed in law as allowed or disallowed, all because someone asked a judge to rule on it.

Precedent Utilitarianism

With the information we just gathered about precedents, we can create a second-order utilitarianism that incorporates them.

In theory, it’s pretty easy. You take whatever value function you prefer to use. You take the proposed action. You feed it into the value function to determine the utility of the action. This is just like first-order utilitarianism.

But in precedent utilitarianism, you then think about how likely the action is to create a precedent and how many people the precedent could affect. If you’re not famous and you don’t expect your action to be well publicized, then you only need to worry about precedents set among your immediate acquaintances. If you’re the Prime Minister or President of an important country, your audience will be considerably larger. And if you intend to defend your actions in a court of law in a common law jurisdiction, you must worry about the specific legal precedents you’ll potentially set. Legal precedents allow actions undertaken by a single person to be at least as momentous as those undertaken by a head of state. Just look at Oakes or Roe.

Once you know how likely and how large, you need to think about who will use the precedent and how. If you think it is ethical for your preferred politician to cover up wrong-doing because you think there is a lot of utility in her being elected, remind yourself that if she gets away with it (for a while), then she’s set a precedent that may also be used by the politicians you despise.

Given all the people affected by the precedent, their chances of using it for various things, and the potential utility or disutility of these things, you can calculate an updated net utility for the action.
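
To make the procedure concrete, here is a toy sketch in Python. Every number in it is invented purely for illustration (nothing in precedent utilitarianism fixes these values), and combining the terms linearly is a simplifying assumption:

# A toy precedent-utilitarian calculation. All numbers are invented.
def precedent_adjusted_utility(action):
    # First-order term: whatever value function you prefer
    # (QALYs, net pleasure, preference satisfaction, ...).
    first_order = action["direct_utility"]
    # Second-order term: the expected utility of the precedent, as
    # the probability of setting one, times the number of people who
    # could invoke it, times the average utility (or disutility) per use.
    second_order = (action["p_precedent"]
                    * action["people_affected"]
                    * action["avg_utility_per_use"])
    return first_order + second_order

cover_up = {
    "direct_utility": 50,       # your preferred politician stays in power
    "p_precedent": 0.3,         # chance the cover-up becomes a playbook
    "people_affected": 500,     # politicians who could copy the move
    "avg_utility_per_use": -2,  # most copycats won't share your goals
}

print(precedent_adjusted_utility(cover_up))  # 50 + 0.3 * 500 * (-2) = -250

On these made-up numbers, the cover-up from the previous paragraph looks positive at the first order and comes out strongly negative once the precedent is priced in.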

Simple, right?

You may have noticed the problem. Utility function calculations beyond simple QALY evaluations are really hard. Adding in a bunch of hypothetical actions from a bunch of hypothetical people just makes it harder. And if the calculations are already impossible, it doesn’t do you much good to have an even harder set of calculations that you’re supposed to somehow pull off.

Heuristics

Precedent utilitarians (or utilitarians in general) would point out that the correct solution to calculations that literally take forever isn’t to spend forever doing them. There’s an opportunity cost to spending all your time thinking and none of it doing and this cost is considerable. The common solution is to do the best action you can see after a reasonable period of reflection and estimation of utility.

What represents a reasonable amount of time to spend on reflection and a reasonable resolution for the estimation depends on how important the decision is. Decisions about which restaurant to go to should be very quick and simple and largely guided by factors other than morality (for example, your local public health agency’s evaluations, or more reasonably, what kind of food you want to eat). Decisions like “where should I donate ten percent of my income” require a fair amount of reflection. But decisions like: “should we go to war with that dictator” require far more. The more potential there is to influence lives, the more it makes sense to sink resources into determining the optimal actions.

When it doesn’t make sense to spend dozens of hours on contemplation, there are a few simple heuristics that the precedent utilitarian can use.

First: is the action likely to lead to an improvement in utility from a first-order utilitarian perspective? If the answer here is no and you don’t have a plausible mechanism for the action setting a precedent that will redeem the negative utility incurred in the first-order analysis, then you should trust the first-order analysis and avoid the action.

Second: How potentially harmful is the action if generalized? If your worst enemy did the same thing, would it reduce the utility of the world? If you’re attempting to ban a certain sort of speech, for example, the general class of thing you’re doing is “banning speech”. I think we can all agree that the people we disagree with could ban speech in such a way that it would reduce the utility of the world. But if we’re making it illegal to assault someone, there are few ways that our foes can take “don’t hurt people who don’t want to be hurt” and make it reduce the utility of the world.

In general, the goal here is to consider ways that others acting along the same general principle could help or harm the world.

Third: Consider how strong a precedent you’re setting and how likely it is that others can also advocate along the same general principle now that you’ve made it easier. Remember also that special pleading (“no, you can only act along this principle in the ways we say you can”) and hypocrisy (getting angry at others who are doing the same thing you did, just from a different set of axioms and beliefs about the world) are very off-putting and can turn people against you.

The second heuristic deals with how your precedent can be used against you; the third, with how likely that is to happen.

Fourth: Add this all up. If the precedent you set is safe (very difficult to use to decrease the utility of the world), your power is secure (the precedent is unlikely to be used in ways that you think will decrease utility), you’re unimportant (the precedent isn’t going to be used by anyone else), and your public support is non-fragile (you can survive hypocrisy or special pleading), then you can decide on first-order grounds. If a few of these aren’t true but you stand to gain a lot of utility, it remains safe to decide on first-order grounds. But if none of the conditions are met, then you may very well stand to lose net utility from second-order effects. In this case, it probably makes sense to put your plan on hold while you spend more time calculating possible outcomes.
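
If it helps to see that fourth heuristic laid out, here is a minimal sketch in Python. The four conditions are judgment calls rather than measurements, and both the “big gain” threshold and the rule that two failed conditions are tolerable are hypothetical operationalizations of “a few”:

# A sketch of the fourth heuristic as a checklist.
def safe_to_decide_on_first_order_grounds(precedent_is_safe,
                                          power_is_secure,
                                          you_are_unimportant,
                                          support_is_non_fragile,
                                          expected_gain=0.0,
                                          big_gain=100.0):
    # Count how many of the four conditions hold.
    met = sum([precedent_is_safe, power_is_secure,
               you_are_unimportant, support_is_non_fragile])
    if met == 4:
        return True   # all four hold: first-order analysis suffices
    if met >= 2 and expected_gain >= big_gain:
        return True   # a few fail, but you stand to gain a lot
    return False      # put the plan on hold; keep calculating

# Two conditions fail, but the expected gain clears the threshold:
print(safe_to_decide_on_first_order_grounds(True, True, False, False,
                                            expected_gain=250.0))  # True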

Other Ethical Systems

It’s a safe bet that most people aren’t utilitarians. It’s also true that you will eventually have to interact with people who both aren’t utilitarians and have different axioms[citation needed]. In both of these cases (but especially the second), it can be hard to productively express and argue about views. Some people avoid this problem entirely by embracing the comforting lie that those who disagree with them do so out of lack of education or stupidity. Alas, this uncharitable explanation is far too often just not the case. Sometimes you’re stuck arguing with someone whose beliefs are just as internally consistent, logical, and evidence-based as yours.

When faced with someone like this, you have options. You can ignore them, making like Spain and Portugal and partitioning the world between you (and then griping to your friends on Reddit/Tumblr about how stupid the other side is). You can fight them, attempting to kill and subjugate them (this one has largely fallen out of style in many places, thank goodness), or you can find common principles that you agree on that will allow you both to live in peace and have mutually beneficial relationships.

Precedent utilitarianism is very well suited to building up systems like liberal democracy, where differing groups can draft a mutually agreeable framework that allows them to live peacefully. Precedent utilitarians naturally look for principles that everyone can agree on and tend to support strong constitutional protections around many classes of actions that don’t affect other people.

On a smaller scale, precedent utilitarianism is useful when you need to convince someone with a different set of axioms or a differing ethical system that you are a reasonable person who is worth listening to. A natural effect of precedent utilitarianism is avoiding (in most cases) special pleading (whether out of desire to not alienate support, or because you’re worried about precedents your actions can set).

Avoiding special pleading makes you look principled. Someone can respect you arguing against one of their proposed plans of action (and give your arguments much more credence) if they’ve also seen you argue against other actions (especially ones they would expect you to support given your axioms) using the same general principle.

For example, if you’re a Catholic and are arguing against having Buddhist prayers at a town hall meeting, you’ll have much more credibility if you have previously opposed having Catholic prayers read at town hall meetings (perhaps because you’re worried that it sets a precedent that could lead to other prayers being read, which might lead to less utility in terms of saved Catholic souls). If instead you’d previously argued in favour of Catholic prayers but are now arguing that separation of church and state precludes prayers in meetings, then no one will take you seriously. Worse, they will probably have assorted ill feelings towards you, making you less effective at convincing them even in unrelated matters.

Practical Examples

I want to give examples of the heuristics I discussed earlier in action. To make this essay interesting to people with a variety of axioms, I’ve picked two examples of legislative interventions proposed by different groups and argued against each intervention using the axioms (as best I understand them) of the people who I’ve observed suggesting it. First I’ll use activist left axioms. Then I’ll try and pass an ideological Turing test and pull off small government religious conservative axioms.

Activist Left

There is a growing clamour from leftists to shut down police unions. The logic goes that police unions advocate for the good of their members at the expense of society at large and most particularly, those already disadvantaged by race, sexual orientation, gender expression, poverty, mental illness, or a combination of these factors.

These activists generally believe that without the political clout and collective bargaining ability of police unions it would be easier to require officers to wear body cameras, easier to demilitarize the police, and easier to ban discriminatory practices like carding and stop and frisk. They also believe that without union representatives it would be much easier to suspend and fire officers suspected of misusing force.

Let’s assume (for the sake of argument) that activists are correct and dismantling police unions would reduce police violence. A reduction in police violence would lead to an increase in utility for almost any value function, as long as there weren’t direct effects that led to counterbalancing increases of violent crime. Let’s assume that even if there are some negative side effects, there is ultimately an increase in utility. This lets us move on to the second step.

(The proper utilitarian thing to do here would be to look into studies and data analysis about what the likely crime effects of such a move would be. Because the focus of this essay is precedent utilitarianism, I’m not going to go into the nitty gritty here. I’m just going to do what the proponents do and assume everything will work out OK.)

The generalized action here is: “it is acceptable to weaken collective bargaining rights or forcibly de-unionize workers”. If you are a leftist, I want you to take a moment and imagine what sort of effects there would be if your worst enemy did this kind of thing.

The first target would almost certainly be teachers’ unions, long a target of conservative ire. The possible results of the weakening or abolition of teachers’ unions read like a grab bag of all the left’s education bogeymen: performance-based pay (The Rand Corporation found performance-based pay to be ineffective, but see also Slate Star Codex), more charter schools (in America), less job security for teachers, and larger class sizes.

Beyond teachers’ unions, there are dozens of ways that conservatives would love to stymie organized labour. It is generally accepted among leftists that unions are good for everyone, even people who aren’t unionized. Therefore, a great decrease in union membership or weakening of union power would lead to a loss of utility from a conventional leftist point of view.

If we could get rid of police unions without significantly risking other unions, then the analysis would probably come up positive (given our other assumptions). Unfortunately, it would probably take laws (and successive legal victories) to force police unions to disband or strip them of collective bargaining rights. There is no way to argue that laws and court cases don’t set precedents. Laws passed by previous governments give future governments permission to legislate in the same space. And successful court cases (especially in common law jurisdictions) set the legal precedents that were discussed earlier. Courts cannot support disbanding a police union without setting the general precedent that unions may be forcibly disbanded.

In addition to creating one of the strongest possible precedents, abolishing police unions but demanding that no other unions be affected is a strong case of special pleading. In this specific case, there is even more potential for harm than in most, as a majority of Americans are confident in the police.

Adding all of this up, we’re looking at a potential increase in utility (assuming that there isn’t a protest or other work action from police that leads to rising crime rates), bought by setting a precedent we acknowledge is both strong and dangerous.

From a precedent utilitarian point of view, it seems unlikely that abolishing police unions will actually lead to any increase in utility. Instead, precedent utilitarians might focus on the outcomes they wish to see (increased use of body cameras, better use of force policies, more restrictions on discriminatory policing, funds for hiring more police officers from diverse backgrounds and diverse communities) and try to legislate them individually.

Of these, body cameras and hiring police officers from more diverse backgrounds (which can be spun to constituents as simply “hiring more police officers”) seem the easiest to pass with broad support and probably represent the best starting place for a quick utility gain.

Small Government Religious Conservative

Several conservative controlled legislatures in America have begun to pass (or attempt to pass) so called “religious freedom restoration acts” or similar bills. These bills are largely designed to pre-empt local non-discrimination ordinances that conservatives feel place individuals into a conflict between their deeply held private beliefs and the law.

As above, I’m going to skip questions about whether these beliefs are correct or whether the bills represent a gain in utility. Just as some people feel that forcing police to de-unionize will lead to a better world, some people feel that these bills will lead to a better world. Instead of disagreeing with these beliefs on the object level, I want to show that they are inconsistent with other conservative axioms and fall under the class of beliefs that precedent utilitarianism suggests should be rejected even if they’re based on correct axioms.

The generalized action here is: “it is acceptable for distant legislators to force lower levels of government to legislate as they would.” If you’re a conservative, I want you to take a moment and imagine what sort of effects there would be if your worst enemy did this kind of thing.

The first target would almost certainly be rural communities in otherwise liberal states, which tend to have much different laws around gun ownership and property taxes than the larger metropolises which make up the majority of the voter base.

Beyond guns and taxes, there are dozens of regulations that central liberal governments would love to impose upon rural conservatives. Look at what’s going on in Alberta for just one example. And would any conservative trust a liberal state government to protect the coal or fracking jobs on which so many rural communities survive? Living in a city, it’s far too easy to forget that these things have to come out of the ground somewhere.

If local bills could be overruled without setting any precedents, maybe there’s a utility gain to be had. But this seems unlikely. It almost certainly will require a few court cases to sort out which level of government has which power, and once powers have been taken away from local governments and given to the centralized government, they are unlikely to be given back. Politicians almost never let go of the power that they’ve fought so hard to gain.

In addition to setting strong legal precedents, to claim that overruling municipalities in the way you like is okay while demanding no other government overrules them in ways that you don’t like is definitely special pleading. And as Americans still trust their local representatives more than their state representatives, this is likely to backfire.

Adding all of this up, we’re looking at an increase in utility from state laws overriding local non-discrimination ordinances, bought by setting a strong precedent that states can override whatever local laws they don’t like; something we should acknowledge as dangerous and negative.

From a precedent utilitarian point of view, it seems unlikely that overriding local non-discrimination ordinances will lead to any increase in utility. Instead, precedent utilitarians with these axioms should focus on increasing tax breaks for religious schools or other social institutions they believe will push society in the direction they think it should go.

Back to myself: one principle of small government conservatism that I find laudable is the belief that local governments are best placed to fix problems. All too often central planners come up with ridiculous, unworkable ideas out of ignorance of the conditions on the ground. In addition to my grave concerns about the content of “religious freedom ordinances” or “bathroom bills”, I’ve been shocked to see conservatives suddenly advocating for solutions at the state level and liberals claiming that local people know best. And I’m not the only one.

Downsides

I chose one of my examples very deliberately: to emphasize one of the weaknesses of precedent utilitarianism. People who are already privileged (like me!) are going to find it easiest to demand that potential changes must be considered and interrogated for bad precedents and abandoned if there is a chance that they might lead to enough disutility in the future.

It’s easy for me to urge caution around police unions. The police aren’t busy killing people who look like me. It’s easy for me to say that unprincipled exceptions should always be avoided. Unprincipled exceptions aren’t already being made at my expense. It’s reasonable to ask: “if they’re making exceptions for themselves, how come we can’t make exceptions for ourselves?”

Pointing out that we wouldn’t have these problems if everyone already followed precedent utilitarianism doesn’t count as an argument. So what if it’s true? It wouldn’t change anything. The world should be engaged with as it is, not how we wish it to be. And we have to reckon with the fact that sometimes partially adopting an idea is worse than adopting none of it (see for example most arguments that start: “well, in a perfect libertarian society…”).

But this weakness isn’t unique to precedent utilitarianism. It’s a weakness of utilitarianism or of consequentialism more generally. Most constructions of utilitarianism place no inherent value on fairness, only value on some of the effects of fairness. Instead of trafficking in an ethical coin that is intuitively understood, they deal in cold, hard utility and disutility. Life years saved or lost, pleasure and pain, preferred and dispreferred states, all aggregated over the population of the world. These are the tools utilitarians have.

Precedent utilitarianism demands a deeper examination of consequences than some other constructions of utilitarianism. But it can’t change the fact that consequences are all utilitarians care about.

I advocate for precedent utilitarianism because I don’t think it suffers from this partial-adoption problem the way libertarianism does. I don’t think even stumbling, imperfect precedent utilitarianism will lead to a worse state than the current one. But I don’t have proof. I can claim some institutions (the courts, liberalism) as obvious manifestations of precedent utilitarianism.

But this leaves two avenues of disagreement. First, you can claim that these are the by-product of something else and only have a serendipitous resemblance to precedent utilitarianism. Or you can claim that these are in fact not good things. It all depends on your axioms.

And this is all circular. People like me in positions of privilege tend to have axioms that assume their experience. Meanwhile, systematically disadvantaged people tend to have axioms that assume their experience.

Here’s how I’ll try to convince you, even if there’s a huge gap between our axioms. Scott and Ozy often talk about ethical systems that fail gracefully. Imagine that you thought something or someone was bad and did everything permitted by your ethical system to stop it or them. Now imagine that you were wrong. How badly have you fucked up?

Precedent utilitarianism fails gracefully. Does your ethical system?

Epistemic Status: Ethics

Ethics, Philosophy

Utilitarianism: An Overview

What is a utilitarian?

To answer that question, you have to think about another, namely: “what makes an action right?”

Is it the outcome? The intent? What is a good intent or a good outcome?

Kantian deontologists have pithy slogans like: “I ought never to act except in such a way that I could also will that my maxim should become a universal law” or “an action is morally right if done for duty and in accordance with duty.”

Virtue ethicists have a rich philosophical tradition that dates back (in Western philosophy) to Plato and Aristotle.

And utilitarians have math.

Utilitarianism is a subset of consequentialism. Consequentialism is the belief that only the effects of an action matter. This belief lends itself equally well to selfish and universal ethical systems.

When choosing between two actions, a selfish consequentialist (philosophers and ethicists would call such a person an egoist) would say that the morally superior action is the one that brings them the most happiness.

Utilitarians would say that the morally superior option is the one that brings the most ______ to the world/universe/multiverse, where ______ is whatever measure of goodness they’ve chosen. The fact that the world/universe/multiverse is the object of optimization is where the math comes in. It’s often pretty hard to add up any measure of goodness over a set as large as a world/universe/multiverse.

It’s also hard to define goodness in the abstract without lapsing into tautology (“how does it represent goodness?” – “well it’s obvious, it’s the best thing!”). Instead of looking at it in the abstract, it’s helpful to look at utilitarian systems in action.

What quality people choose as their ethical barometer/best measure of the goodness of the world tells you a lot about what they value. Here are four common ones. As you read them, consider both what implicit values they encode and which ones call out to you.

QALY Utilitarianism

QALY Utilitarianism is most commonly seen in discussions around medical ethics, where QALYs are frequently used to determine the optimal allocation of resources. One QALY represents one year of reasonably healthy and happy life. Any condition which reduces someone’s enjoyment of life results in the years so blighted being weighed as less than one full QALY.

For example, a year living with asthma is worth 0.9 QALYs. A year with severe seizures is worth 0.7 QALYs.

Let’s say we have a treatment for asthma that costs $1000 and another for epilepsy that costs $1000. If we only have $1000, we should treat the epilepsy (this leads to an increase of 0.3 QALYs, more than the 0.1 QALYs we’d get for treating asthma).

If we have more money, we should treat epilepsy until we run out of epileptic patients, then use the remaining money for asthma.

Things become more complicated if the treatments cost different amounts of money. If it is only $100 to treat asthma, then we should instead prioritize treating asthma, because $1000 of treatment buys us 1 QALY, instead of only 0.3.
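
Since this is just arithmetic, it is easy to mechanize. A minimal sketch in Python, using the illustrative QALY weights and prices from this section (toy numbers, not clinical figures):

# Pick the treatment that buys the most QALYs per dollar.
treatments = {
    "asthma":   {"cost": 100.0,  "qaly_gain": 0.1},  # 0.9 -> 1.0 QALY/year
    "epilepsy": {"cost": 1000.0, "qaly_gain": 0.3},  # 0.7 -> 1.0 QALY/year
}

for name, t in treatments.items():
    print(name, t["qaly_gain"] / t["cost"], "QALYs per dollar")

# asthma 0.001 QALYs per dollar    <- prioritize at these prices
# epilepsy 0.0003 QALYs per dollar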

Note that QALY utilitarianism (and utilitarianism in general) doesn’t tell us what is right per se. It only gives us a relative ranking of actions. One of those actions may produce the most utility. But that doesn’t necessarily mean that the only right thing to do is constantly pursue the actions that produce the very most utility.

QALY utilitarianism remains most useful in medical science, where researchers have spent a lot of time figuring out the QALY values for many potential conditions. Used with a set of accurate QALY tables, it becomes a powerful way to ensure cost effectiveness in healthcare. QALY utilitarianism is less useful when we lack these tables and therefore remains sparsely used for non-healthcare related decisions.

Hedonistic Utilitarianism

Hedonistic utilitarianism is much more general than QALY utilitarianism, in part because its value function is relatively easy to calculate.

It is almost a tautology to claim that people wish to seek out pleasure and avoid pain. If we see someone happy about an activity we think of as painful, it’s much more likely that we’re incorrectly assessing how pleasurable/painful they find it than that they, too, find the activity painful.

Given how common pleasure-seeking/pain-avoiding is, it’s unsurprising that pleasure has been associated with The [moral] Good and pain with The [moral] Bad at least since the time of Plato and Socrates.

It’s also unsurprising that pleasure and pain can form the basis of utilitarian value functions. This is Hedonistic Utilitarianism and it judges actions based on the amount of net pleasure they cause across all people.

Weighing net pleasure across all people gives us some wiggle room. Repeatedly taking heroin is apparently really, really pleasurable. But it may lead to less pleasure overall if you quickly die from a heroin overdose, leaving behind a bereaved family and preventing all the other pleasure you could have had in your life.

So the hedonistic utilitarianism value function probably doesn’t assign the highest rating to getting everyone in the world blissed out on the most powerful drugs available.

But even ignoring constant drug use, or other descents into purely hedonistic pleasures, hedonistic utilitarianism often frustrates people who place a higher value on actions that may produce less direct pleasure but leave them feeling more satisfied and contented overall. These people are left with two options: they can argue for ever more complicated definitions of pleasure and pain, taking into account the hedonic treadmill and the hedonistic paradox, or they can pick another value function.

Preference Utilitarianism

Preference utilitarianism is simple on the surface. Its value function is supposed to track how closely people’s preferences are fulfilled. But there are three big problems with this simple framing.

First, which preferences? I may have the avowed preference to study for a test tomorrow, but once I sit down to study my preference may be revealed to be procrastinating all night. Which preference is more important? Some preference utilitarians say that the true preference is the action you’d pick in hindsight if you were perfectly rational. Others drop the “perfectly rational” part, but still talk about preferences in terms of what you’d most want in hindsight. Another camp gives credence to the highest level preference over all the others. If I prefer in the moment to procrastinate but would prefer to prefer to study, then the meta-preference is the one that counts. And yet another group of people give the most weight to revealed preferences – what you’d actually do in the situation.

It’s basically a personal judgement call as to which of these groups you fall into, a decision which your own interactions with your preferences will heavily shape.

The second problem is even thornier. What do we do when preferences collide? Say my friend and I go out to a restaurant. She may prefer that we each pay for our own meals. I may prefer that she pays for both of our meals. There is no way to satisfy both of our preferences at the same time. Is the most moral outcome assuaging whomever holds their preferences the most strongly? Won’t that just incentivize everyone to hold their preferences as strongly as humanly possible and never cooperate? If enough people hold a preference that a person or a group of people should die, does it provide more utility to kill them than to let them continue living?

One more problem: what do we do with beings that cannot hold preferences? Animals, small children, foetuses, and people in vegetative states are commonly cited as holding no preferences. Does this mean that others may do whatever they want with them? Does it always produce more utility for me to kill any animal I desire to kill, given it has no preferences to balance mine?

All of these questions remain inconclusively answered, leaving each preference utilitarian to decide for herself where she stands on them.

Rule Utilitarianism

The three previous forms of utilitarianism are broadly grouped together (along with many others) under act utilitarianism. But there is another way and a whole other class of value functions. Meet rule utilitarianism.

Rule utilitarians do not compare actions and outcomes directly when calculating utility. Instead they come up with a general set of rules which they believe promotes the most utility generally and judge actions according to how well they satisfy these rules.

Rule utilitarianism is similar to Kantian deontology, but it still has a distinctly consequentialist flavour. It is true that both of these systems result (if followed perfectly) in someone rigidly following a set of rules without making any exceptions. The difference, however, is in the attitude of the individual. Whereas Kant would call an action good only if done for the right reasons, rule utilitarians call actions that follow their rules good regardless of the motivation.

The rules that arise can also look different from Kantian deontology, depending on the beliefs of the person coming up with the rules. If she’s a neo-reactionary who believes that only autocratic states can lead to the common good, she’ll come up with a very different set of rules than Immanuel Kant did.

First Order Utilitarianism?

All of the systems described here are what I’ve taken to calling first-order utilitarianism. They only explicitly consider the direct effects of actions, not any follow-on effects that may happen years down the road. Second-order utilitarianism is a topic for another day.

Other Value Functions?

This is just a survey of some of the possible value functions a utilitarian can have. If you’re interested in utilitarianism in principle but feel like all of these value functions are lacking, I encourage you to see what other ones exist out there.

I’m going to be following this post up with a post on precedent utilitarianism, which solved this problem for me.

Epistemic Status: Ethics