Total assessment is the direct comparison of all the consequences of different actions. It is not so much a prediction an individual can make as it is the province of an omniscient god. If you cannot perfectly predict the entire future, you cannot perform a total assessment. It’s conceptually useful – whenever a utilitarian is backed into a corner, they can fall back on total assessment as their decision-making tool – but it’s practically useless.
Absent total assessment, utilitarians kind of have to make their best guess and go with it. Even my beloved precedent utilitarianism isn’t much help here; precedent utilitarianism focuses on a class of consequences that traditional utilitarianism can miss. It does little to help an individual figure out all of the consequences of their actions.
If it is hard to guess the consequences of our actions, or if such guessing is prohibitively time-consuming, what is the utilitarian to do? One appealing option is a distinctly utilitarian virtue ethics. This virtue ethics would define a good life as one lived with the virtues that cause you to make optimific decisions.
I think it is possible for such a system to maintain a distinctly utilitarian character and thereby avoid Williams’ prediction that utilitarianism must, if accepted, “usher itself from the scene.”
The first distinct characteristic of a utilitarian virtue ethics would be its heterogeneity. Classical virtue ethics holds that there is a single set of virtues that lead to a good life. The utilitarian would instead seek to cultivate whichever virtues would cause her to act in an optimific way. These would necessarily be individualized; it may well be optimific for an ambitious and clever utilitarian to cultivate greed and drive while acquiring a fortune, then cultivate charity while giving it away (see Bill Gates).
There is the obvious danger here that cultivating temporarily anti-utilitarian virtues could lead to permanent values drift. The best countermeasure against this would be a varied community of utilitarians, who would cultivate a variety of virtues and help bind each other to the shared utilitarian cause, helping whenever expediency threatens to pull one away from it.
Next, a utilitarian virtue ethics would treat no virtue as sacred. Honesty, charity, kindness, and bravery – all of these must be conditional on the best outcome. Because the best outcome is hard to determine, they might be good rules of thumb, but the utilitarian must always be prepared to break a moral rule if there is more utility to be had.
Third, the utilitarian would seek to avoid cognitive biases and learn to make decisions quickly. Avoiding cognitive biases increases the chance that rules of thumb will be broken out of genuine utilitarian concern rather than thinly veiled self-interest. Learning to make decisions quickly avoids wasting time pondering “what is the right thing to do?”
While the traditional virtue ethicist might read the works of the great classical philosophers to better understand virtue, a utilitarian virtue ethicist would focus on learning Fermi estimation, Bayesian statistics, and the works of Daniel Kahneman.
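To make the contrast concrete, here is a toy sketch of the kind of tool such a utilitarian would reach for: a single Bayesian update. The scenario and all numbers are hypothetical, invented purely to illustrate the arithmetic.

```python
# Toy Bayesian update: how much should one piece of evidence shift a belief?
# All probabilities below are made-up illustrative values, not real data.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Suppose a utilitarian gives a charity a 30% prior chance of being
# effective, and sees a glowing review that effective charities earn
# 90% of the time but ineffective ones still earn 40% of the time.
posterior = bayes_update(0.30, 0.90, 0.40)
print(round(posterior, 3))  # prints 0.491
```

The point of the exercise is the habit, not the numbers: one positive review moves the belief from 30% to roughly 49%, not to certainty, which is exactly the kind of calibrated shift a rule-of-thumb-breaking utilitarian needs.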
The easiest ways for a utilitarian to fail to treat the world as it really is are ignoring the things they cannot measure and ignoring truths they find personally uncomfortable. We did not evolve for clear thinking, and there is always the risk that we will get ourselves turned around, substituting what is best for us for what is best for the world.
One hang-up I have with this idea is that I just described a bunch of my friends in the rationality and effective altruism communities. How likely is it that this is merely self-serving, instead of the natural endpoint of all of the utilitarian philosophy I’ve been reading?
On one hand, this is a community of utilitarians who are similar to me, so convergence in outputs given the same inputs is more or less expected.
On the other, this could be a classic example of seeing the world as I wish it were, rather than as it is. “Go hang out with people you already like, doing the things you were already going to do” isn’t much of an ethical ask. Given the dire state of the world, utilitarians should be sceptical of any version of their ethics that doesn’t require much from them.
There could be other problems with this proposal, but I’m not sure that I’m the type of person who could see them. For now, this represents my best attempt to reconcile my utilitarian ethics with the realities of the modern world. But I will be careful. Ease is ever seductive.