Socratic Form Microscopy

Utilitarianism: An Overview

by Zach Jacobi in Ethics, Philosophy

What is a utilitarian?

To answer that question, you have to think about another, namely: “what makes an action right?”

Is it the outcome? The intent? What is a good intent or a good outcome?

Kantian deontologists have pithy slogans like: “I ought never to act except in such a way that I could also will that my maxim should become a universal law” or “an action is morally right if done from duty and in accordance with duty.”

Virtue ethicists have a rich philosophical tradition that dates back (in Western philosophy) to Plato and Aristotle.

And utilitarians have math.

Utilitarianism is a subset of consequentialism. Consequentialism is the belief that only the effects of an action matter. This belief lends itself equally well to selfish and universal ethical systems.

When choosing between two actions, a selfish consequentialist (philosophers and ethicists would call such a person an egoist) would say that the morally superior action is the one that brings them the most happiness.

Utilitarians would say that the morally superior option is the one that brings the most __ to the world/universe/multiverse, where __ is whatever measure of goodness they’ve chosen. The fact that the world/universe/multiverse is the object of optimization is where the math comes in. It’s often pretty hard to add up any measure of goodness over a set as large as a world/universe/multiverse.
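To make the math concrete, here’s a minimal sketch of that comparison in Python. The people, the numbers, and the total_goodness helper are all invented purely for illustration; they aren’t drawn from any real measure of goodness:

```python
# A toy version of the utilitarian comparison: for each action, sum your
# chosen measure of goodness over everyone affected, then pick the action
# with the largest total. All numbers here are made up for illustration.

def total_goodness(effects):
    """Sum the chosen measure of goodness over everyone affected."""
    return sum(effects.values())

# Hypothetical effects of two actions on three people, in arbitrary units.
action_a = {"alice": 2.0, "bob": 1.0, "carol": -0.5}  # total: 2.5
action_b = {"alice": 0.5, "bob": 0.5, "carol": 0.5}   # total: 1.5

# The utilitarian ranking: whichever action has the larger total comes first.
ranked = sorted([("a", action_a), ("b", action_b)],
                key=lambda pair: total_goodness(pair[1]), reverse=True)
print([(name, total_goodness(effects)) for name, effects in ranked])
```

In a real case that dictionary would have to cover everyone the action touches, which is exactly where the “hard to add up over a world/universe/multiverse” problem bites.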

It’s also hard to define goodness in the abstract without lapsing into tautology (“how does it represent goodness?” – “well, it’s obvious, it’s the best thing!”). Instead of looking at it in the abstract, it’s helpful to look at utilitarian systems in action.

What quality people choose as their ethical barometer/best measure of the goodness of the world tells you a lot about what they value. Here are four common ones. As you read them, consider both what implicit values they encode and which ones call out to you.

QALY Utilitarianism

QALY utilitarianism is most commonly seen in discussions around medical ethics, where QALYs (quality-adjusted life years) are frequently used to determine the optimal allocation of resources. One QALY represents one year of reasonably healthy and happy life. Any condition that reduces someone’s enjoyment of life results in the years so blighted being weighed as less than one full QALY.

For example, a year living with asthma is worth 0.9 QALYs. A year with severe seizures is worth 0.7 QALYs.

Let’s say we have a treatment for asthma that costs $1000 and another for epilepsy that costs $1000. If we only have $1000, we should treat the epilepsy (this leads to an increase of 0.3 QALYs, more than the 0.1 QALYs we’d get for treating asthma).

If we have more money, we should treat epilepsy until we run out of epileptic patients, then use the remaining money for asthma.

Things become more complicated if the treatments cost different amounts of money. If it costs only $100 to treat asthma, then we should instead prioritize treating asthma, because $1000 now buys ten asthma treatments at 0.1 QALYs each, for a full QALY, instead of the 0.3 QALYs we’d get from one epilepsy treatment.
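As a rough sketch of that cost-effectiveness comparison (using only the illustrative dollar and QALY figures from the example above, not real clinical data):

```python
# Cost-effectiveness of the two hypothetical treatments from the example.
# QALY gain per treated patient: asthma 1.0 - 0.9 = 0.1, epilepsy 1.0 - 0.7 = 0.3.

treatments = {
    "asthma":   {"cost": 100.0,  "qaly_gain": 0.1},
    "epilepsy": {"cost": 1000.0, "qaly_gain": 0.3},
}

budget = 1000.0
for name, t in treatments.items():
    qalys_per_dollar = t["qaly_gain"] / t["cost"]
    print(f"{name}: {budget * qalys_per_dollar:.1f} QALYs per ${budget:.0f}")

# asthma:   1000 * (0.1 / 100)  = 1.0 QALY
# epilepsy: 1000 * (0.3 / 1000) = 0.3 QALYs
```

In practice the allocation problem gets messier (patients run out, budgets recur, treatments interact), but QALYs per dollar is the core of the calculation.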

Note that QALY utilitarianism (and utilitarianism in general) doesn’t tell us what is right per se. It only gives us a relative ranking of actions. One of those actions may produce the most utility. But that doesn’t necessarily mean that the only right thing to do is constantly pursue the actions that produce the very most utility.

QALY utilitarianism remains most useful in medical science, where researchers have spent a lot of time figuring out the QALY values for many conditions. Used with a set of accurate QALY tables, it becomes a powerful way to ensure cost-effectiveness in healthcare. QALY utilitarianism is less useful when we lack these tables, and it therefore remains sparsely used for decisions outside of healthcare.

Hedonistic Utilitarianism

Hedonistic utilitarianism is much more general than QALY utilitarianism, in part because its value function is relatively easy to calculate.

It is almost a tautology to claim that people wish to seek out pleasure and avoid pain. If we see someone happy about an activity we think of as painful, it’s much more likely that we’re incorrectly assessing how pleasurable or painful they find it than it is that they find the activity painful and enjoy it anyway.

Given how common pleasure-seeking/pain-avoiding is, it’s unsurprising that pleasure has been associated with The [moral] Good and pain with The [moral] Bad at least since the time of Plato and Socrates.

It’s also unsurprising that pleasure and pain can form the basis of utilitarian value functions. This is Hedonistic Utilitarianism and it judges actions based on the amount of net pleasure they cause across all people.

Weighing net pleasure across all people gives us some wiggle room. Repeatedly taking heroin is apparently really, really pleasurable. But it may lead to less pleasure overall if you quickly die from a heroin overdose, leaving behind a bereaved family and preventing all the other pleasure you could have had in your life.

So the hedonistic utilitarian value function probably doesn’t assign the highest rating to getting everyone in the world blissed out on the most powerful drugs available.
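Here is a crude sketch of that kind of accounting, with every number invented just to show the shape of the argument (it is not an empirical claim about drug use):

```python
# Net pleasure summed over a whole life, plus the effect on everyone else affected.
# All figures are arbitrary illustrative units.

def net_pleasure(pleasure_per_year, years, effect_on_others=0.0):
    """Total pleasure over a lifetime, plus the effect on other people."""
    return pleasure_per_year * years + effect_on_others

# Scenario A: intense pleasure for a few years, an early death, a bereaved family.
blissed_out = net_pleasure(pleasure_per_year=10.0, years=3, effect_on_others=-50.0)

# Scenario B: ordinary pleasures spread over a long life.
ordinary_life = net_pleasure(pleasure_per_year=2.0, years=60)

print(blissed_out, ordinary_life)  # -20.0 vs 120.0
```

Whether the real numbers actually work out this way is exactly the kind of empirical question hedonistic utilitarians have to argue about.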

But even ignoring constant drug use, or other descents into purely hedonistic pleasures, hedonistic utilitarianism often frustrates people who place a higher value on actions that may produce less direct pleasure but leave them feeling more satisfied and contented overall. These people are left with two options: they can argue for ever more complicated definitions of pleasure and pain, taking into account the hedonic treadmill and the hedonistic paradox, or they can pick another value function.

Preference Utilitarianism

Preference utilitarianism is simple on the surface. Its value function is supposed to track how closely people’s preferences are fulfilled. But there are three big problems with this simple framing.

First, which preferences? I may have the avowed preference to study for a test tomorrow, but once I sit down to study my preference may be revealed to be procrastinating all night. Which preference is more important? Some preference utilitarians say that the true preference is the action you’d pick in hindsight if you were perfectly rational. Others drop the “perfectly rational” part, but still talk about preferences in terms of what you’d most want in hindsight. Another camp gives the highest-level preference credence over all the others: if I prefer in the moment to procrastinate but would prefer to prefer to study, then the meta-preference is the one that counts. And yet another group gives the most weight to revealed preferences, that is, what you’d actually do in the situation.

It’s basically a personal judgement call as to which of these groups you fall into, a decision which your own interactions with your preferences will heavily shape.

The second problem is even thornier. What do we do when preferences collide? Say my friend and I go out to a restaurant. She may prefer that we each pay for our own meals. I may prefer that she pays for both of our meals. There is no way to satisfy both of our preferences at the same time. Is the most moral outcome satisfying whoever holds their preferences most strongly? Won’t that just incentivize everyone to hold their preferences as strongly as humanly possible and never cooperate? If enough people hold a preference that a person or a group of people should die, does it provide more utility to kill them than to let them continue living?

One more problem: what do we do with beings that cannot hold preferences? Animals, small children, foetuses, and people in vegetative states are commonly cited as holding no preferences. Does this mean that others may do whatever they want with them? Does it always produce more utility for me to kill any animal I desire to kill, given it has no preferences to balance mine?

All of these questions remain inconclusively answered, leaving each preference utilitarian to decide for herself where she stands on them.

Rule Utilitarianism

The three previous forms of utilitarianism are broadly grouped together (along with many others) under act utilitarianism. But there is another way and a whole other class of value functions. Meet rule utilitarianism.

Rule utilitarians do not compare actions and outcomes directly when calculating utility. Instead they come up with a general set of rules which they believe promotes the most utility generally and judge actions according to how well they satisfy these rules.

Rule utilitarianism is similar to Kantian deontology, but it still has a distinctly consequentialist flavour. It is true that both of these systems result (if followed perfectly) in someone rigidly following a set of rules without making any exceptions. The difference, however, is in the attitude of the individual. Whereas Kant would call an action good only if done for the right reasons, rule utilitarians call actions that follow their rules good regardless of the motivation.

The rules that arise can also look different from Kantian deontology, depending on the beliefs of the person coming up with the rules. If she’s a neo-reactionary who believes that only autocratic states can lead to the common good, she’ll come up with a very different set of rules than Immanuel Kant did.

First-Order Utilitarianism?

All of the systems described here are what I’ve taken to calling first-order utilitarianism. They only explicitly consider the direct effects of actions, not any follow-on effects that may happen years down the road. Second-order utilitarianism is a topic for another day.

Other Value Functions?

This is just a survey of some of the possible value functions a utilitarian can have. If you’re interested in utilitarianism in principle but feel like all of these value functions are lacking, I encourage you to see what other ones exist out there.

I’m going to follow this post up with one on precedent utilitarianism, which solved this problem for me.


Epistemic Status: Ethics

Tags: ethics, overview, utilitarianism