Ethics, Philosophy

Against Moral Intuitions

[Content Warning: Effective Altruism, the Drowning Child Argument]

I’m a person who sometimes reads about ethics. I blame Catholicism. In Catholic school, you have to take a series of religion courses. The first two are boring. Jesus loves you, is your friend, etc. Thanks school. I got that from going to church all my life. But the later religion classes were some of the most useful courses I’ve taken. Ever. The first was world religions. Thanks to that course, “how do you know that about [my religion]?” is a thing I’ve heard many times.

The second course was about ethics, biblical analysis, and apologetics. The ethics part hit me the hardest. I’d always loved systematizing and here I was exposed to Very Important Philosophy People engaged in the millennia-long project of systematizing fundamental questions of right and wrong under awesome-sounding names, like “utilitarianism” and “deontology”.

In the class, we learned commonly understood pitfalls of ethical systems, like that Kantians have to tell the truth to axe murderers and that utilitarians like to push fat people in front of trains. This introduced me to the idea of philosophical thought experiments.

I’ve learned (and written) a lot more about ethics since those days and I’ve read through a lot of thought experiments. When it comes to ethics, there seem to be two ways a thought experiment can go: it can show that an ethical system conflicts with our moral intuitions, or it can show that an ethical system fails to universalize.

Take the common criticism of deontology, that the Kantian moral imperative to always tell the truth applies even when you could achieve a much better outcome with a white lie. The thought experiment that goes with this point asks us to imagine a person with an axe intent on murdering our best friend. The axe murderer asks us where our friend can be found and warns us that if we don’t answer, they’ll kill us. Most people would tell the murderer a quick lie, then call the police as soon as they leave. Deontologists say that we must not lie.

Most people have a clear moral intuition about what to do in a situation like that, a moral intuition that clashes with what deontologists suggest we should do. Confronted with this mismatch, many people will leave with a dimmer view of deontology, convinced that it “gets this one wrong”. That uncertainty opens a crack. If deontology requires us to tell the truth even to axe murderers, what else might it get wrong?

The other way to pick a hole in ethical systems is to show that the actions that they recommend don’t universalize (i.e. they’d be bad if everyone did them). This sort of logic is perhaps most familiar to parents of young children, who, when admonishing their sprogs not to steal, frequently point out that they have possessions they cherish, possessions they wouldn’t like stolen from them. This is so successful because most people have an innate sense of fairness; maybe we’d all like it if we could get away with stuff that no one else could, but most of us know we’ll never be able to, so we instead stand up for a world where no one else can get away with the stuff we can’t.

All of the major branches of ethics fall afoul of either universalizability or moral intuitions in some way.

Deontology (doing only things that universalize and doing them with pure motives) and utilitarianism (doing whatever leads to the best outcomes for everyone) both tend to universalize really well. This is helped by the fact that both of these systems treat people as virtually interchangeable; if you are in the same situation as I am, these ethical systems would recommend the same thing for both of us. Unfortunately, both deontology and utilitarianism have well known cases of clashing with moral intuitions.

Egoism (do whatever is in your self-interest) doesn’t really universalize. At some point, your self-interest will come into conflict with the self-interest of other people and you’re going to choose your own.

Virtue ethics (cultivating virtues that will allow you to live a moral life) is more difficult to pin down and I’ll have to use a few examples. At first glance, virtue ethics tends to fit in well with our moral intuitions and universalizes fairly well. But virtue ethics has as its endpoint virtuous people, not good outcomes, which strikes many people as the wrong thing to aim for.

For example, a utilitarian may consider their obligation to charity to exist as long as poverty does. A virtue ethicist has a duty to charity only insofar as it is necessary to cultivate the virtue of charity; their attempt to cultivate the virtue will run the same course in a mostly equal society and a fantastically unequal one. This trips up the commonly held moral intuition that the worse the problem, the greater our obligation to help.

Virtue ethics may also fail to satisfy our moral intuitions when you consider the societal nature of virtue. In a world where slavery is normalized, virtue ethicists often don’t critique slavery, because their society has no corresponding virtue for fighting against the practice. This isn’t just a hypothetical; Aristotle and Plato, two of the titans of virtue ethics, defended slavery in their writings. When you have the meta-moral intuition that your moral intuitions might change over time, virtue ethics can feel subtly off to you. “What virtues are we currently missing?” you may ask yourself, or “how will the future judge those considered virtuous today?” In many cases, the answers to these questions are “many” and “poorly”. See the opposition to ending slavery, opposition to interracial marriage, and opposition to same-sex marriage for salient examples.

It was so hard for me to attack virtue ethics with moral intuitions because virtue ethics is remarkably well suited for them. This shouldn’t be too surprising. Virtue ethics and moral intuitions arose in similar circumstances – small, closely knit, and homogenous groups of humans with very limited ability to affect their environment or effect change at a distance.

Virtue ethics works best when dealing with small groups of people where everyone is mutually known. When you cannot help someone half a world away, all that really matters is that you have the virtue of charity developed such that a neighbour can ask for your help and receive it. Most virtue ethicists would agree that there is virtue in being humane to animals – after all, cruelty to other animals often shows a penchant for cruelty to humans. But the virtue ethics case against factory farming is weak from the perspective of the end consumer. Factory farming is horrifically cruel. But it is not our cruelty, so it does not impinge on our virtue. We have outsourced this cruelty (and many others) and so can be easily virtuous in our sanitized lives.

Moral intuitions are the same way. I’d like to avoid making any claims about why moral intuitions evolved, but it seems trivially true to say that they exist, that they didn’t face strong negative selection pressure, and that the environment in which they came into being was very different from the modern world.

Because of this, moral intuitions tend to only be activated when we see or hear about something wrong. Eating factory farmed meat does not offend the moral intuitions of most people (including me), because we are well insulated from the horrible cruelty of factory farming. Moral intuitions are also terrible at spurring us to action beyond our own immediate network. From the excellent satirical essay Newtonian Ethics:

Imagine a village of a hundred people somewhere in the Congo. Ninety-nine of these people are malnourished, half-dead of poverty and starvation, oozing from a hundred infected sores easily attributable to the lack of soap and clean water. One of those people is well-off, living in a lovely two-story house with three cars, two laptops, and a wide-screen plasma TV. He refuses to give any money whatsoever to his ninety-nine neighbors, claiming that they’re not his problem. At a distance of ten meters – the distance of his house to the nearest of their hovels – this is monstrous and abominable.

Now imagine that same hundredth person living in New York City, some ten thousand kilometers away. It is no longer monstrous and abominable that he does not help the ninety-nine villagers left in the Congo. Indeed, it is entirely normal; any New Yorker who spared too much thought for the Congo would be thought a bit strange, a bit with-their-head-in-the-clouds, maybe told to stop worrying about nameless Congolese and to start caring more about their friends and family.

If I can get postmodern for a minute, it seems that all ethical systems draw heavily from the time they are conceived. Kant centred his deontological ethics in humanity instead of in God, a shift that makes sense within the context of his time, when God was slowly being removed from the centre of western philosophy. Utilitarianism arose specifically to answer questions around the right things to legislate. Given this, it is unsurprising that it emerged at a time when states were becoming strong enough and centralized enough that their legislation could affect the entire populace.

Both deontology and utilitarianism come into conflict with our moral intuitions, those remnants of a bygone era when we were powerless to help all but the few directly surrounding us. When most people are confronted with a choice between their moral intuitions and an ethical system, they conclude that the ethical system must be flawed. Why?

What causes us to treat ancient, largely unchanging intuitions as infallible and carefully considered ethical systems as full of holes? Why should it be this way and not the other way around?

Let me try and turn your moral intuitions on themselves with a variant of a famous thought experiment. You are on your way to a job interview. You already have a job, but this one pays $7,500 more each year. You take a shortcut to the interview through a disused park. As you cross a bridge over the river that bisects the park, you see a child drowning beneath you. Would you save the child, even if it means you won’t get the job and will have to make do with $7,500 less each year? Or would you let her drown and continue on your way to the interview? Our moral intuitions are clear on this point. It is wrong to let a child die because we wish to have more money in our pockets each year.

Can you imagine telling someone about the case in which you don’t save the child? “Yeah, there was a drowning child, but I’ve heard that Acme Corp is a real hard-ass about interviews starting on time, so I just waltzed by her.” People would call you a monster!

Yet your moral intuitions also tell you that you have no duty to prevent the malaria-linked deaths of children in Malawi, even though you would be saving a child’s life at exactly the same cost. The median Canadian family income is $76,000. If a family making this amount of money donated 10% of their income to the Against Malaria Foundation, they would be able to prevent one death from malaria every year or two. No one calls you monstrous for failing to prevent these deaths, even though the costs and benefits are exactly the same. Ignoring the moral worth of people halfway across the world is practically expected of us and is directly condoned by our distance-constrained moral intuitions.
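The back-of-envelope math here is simple enough to sketch out. The cost-per-life figure below is an assumption for illustration; published estimates for the Against Malaria Foundation vary by year and source, but a mid-range number is consistent with the “every year or two” claim above.

```python
# Sketch of the donation math above. The cost-per-life figure is an
# assumed, illustrative value, not an official AMF or GiveWell number.
median_family_income = 76_000          # median Canadian family income ($)
donation_rate = 0.10                   # the 10% pledge
annual_donation = median_family_income * donation_rate  # $7,600/year

assumed_cost_per_life = 10_000         # hypothetical mid-range estimate ($)
years_per_life_saved = assumed_cost_per_life / annual_donation
print(f"${annual_donation:,.0f}/year -> one life every {years_per_life_saved:.1f} years")
```

At $7,600 a year, any cost-per-life estimate between roughly $7,600 and $15,200 lands in the “one death prevented every year or two” range.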

Your moral intuitions don’t know how to cope with a world where you can save a life half the world away with nothing more than money and a well-considered donation. It’s not their fault. They didn’t develop for this. They have no way of dealing with a global community or an interconnected world. But given that, why should you trust intuitions that weren’t developed for the situation you find yourself in? Why should you trust an evolutionary vestige over elegant and well-argued systems that can gracefully cope with the realities of modern life?

I’ve chosen utilitarianism over my moral intuitions, even when the conclusions are inconvenient or truly terrifying. You can argue with me about what moral intuitions say all you want, but I’m probably not going to listen. I don’t trust moral intuitions anymore. I can’t trust anything that fails to spur people towards the good as often as moral intuitions do.

Utilitarianism says that all lives are equally valuable. It does not say that all lives are equally easy to save. If you want to maximize the good that you do, you should seek out the lives that are cheapest to save and thereby save as many people as possible.

To this end, I’ve taken the “Try Giving” pledge. Last September, I promised to donate 10% of my income to the most effective charities for a year. This September, I’m going to take the full Giving What We Can pledge, making my commitment to donate to the most effective charities permanent.

If utilitarianism appeals to you and you have the means to donate, I’d like to encourage you to do the same.

Epistemic Status: I managed to talk about both post-modernism and evolutionary psychology, so handle with care. Also, Ethics.

Data Science, Politics

Thoughts (and Data) on Charity & Taxes

The other day, I posed a question to my friends on Facebook:

Do you think countries with higher taxes see more charitable donations or fewer charitable donations? What sort of correlation would you expect between the two (weak positive? weak negative? strong positive? strong negative?).

I just crunched some numbers and I’ll post them later. First I want to give people a chance to guess and test their calibration.

I was doing research for a future blog post on libertarianism and wanted to check one of the fundamental assumptions that many libertarians make: in the absence of a government, private charity would provide many of the same social services that are currently provided by the government.

I honestly wasn’t sure what I’d find. But I was curious to see what people would suggest. Answers fell into four main camps:

  1. Charitable giving and support for a welfare state might be caused by the same thing, so there will be a weak positive correlation.
  2. Tax incentives for charitable donations shift the utility of donating, such that people in higher tax countries will donate more, as they get more utility per dollar spent (they get the same good feelings from charity, but also receive a bigger rebate come tax time). People who thought up this mechanism predicted a weak positive correlation.
  3. This whole thing will be hopelessly confounded by other variables and no conclusion will survive proper controls.
  4. Libertarians are right. Taxes drain money that would go to private charity, so we should see a strong(ish) negative correlation.
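The tax-incentive mechanism in camp 2 is easy to make concrete: if donations are tax-deductible, the out-of-pocket cost of a donation falls as your marginal rate rises. The rates below are hypothetical and real deduction rules vary by country; this is just a sketch of the mechanism.

```python
# Sketch of the camp-2 mechanism: a deductible donation costs less
# out of pocket at higher marginal tax rates. Rates are hypothetical.
def net_cost(donation: float, marginal_rate: float) -> float:
    """Out-of-pocket cost of a fully deductible donation after the rebate."""
    return donation * (1 - marginal_rate)

for rate in (0.20, 0.35, 0.50):
    print(f"$100 donation at {rate:.0%} marginal rate costs ${net_cost(100, rate):.0f}")
```

Under these assumptions, the same $100 donation costs $80 at a 20% marginal rate but only $50 at a 50% rate, so the “same good feelings, bigger rebate” story predicts more giving where taxes are higher.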

I was surprised (but probably shouldn’t have been) to find that these tracked people’s political views. The more libertarian I thought someone was, the more likely they were to believe in a negative correlation. Meanwhile, people who were really into the welfare state tended to assume that charitable donations and taxes would be positively correlated.

In order to figure out who was right, I grabbed the most recent World Giving Index and correlated it with data about personal income tax levels (and sales tax levels, just to see what happened).

There are a number of flaws with this analysis. I’m not looking for confounding variables. Like at all. When it comes to things as tied to national character as charity and taxes (and how they interact!), this is a serious error in the analysis. I’m also using pretty poor metrics. It would be best to compare something like average tax rate with charitable donation amount per capita. Unfortunately, I couldn’t find any good repositories of this data and didn’t want to spend the hours it would take to build a really solid database of my own.

I decided to restrict my analysis to OECD countries (minus Turkey, which I was missing data on). You’ll have to take my word that I made this decision before I saw any of the data (it turns out that there is essentially no correlation between income tax rate and percent of people who donate to charity when looking at all countries where I have data for both).

Caveats aside, what did I see?

There was a weak correlation (I’m using a simple Pearson correlation, as implemented by Google sheets here, nothing fancy) between the percentage of a population that engaged in charitable giving and the highest income tax bracket in a country. There was a weaker, negative correlation between sales tax and the percent of a population that engaged in charitable giving, but more than 60% of this came from the anchoring effect of the USA, with its relatively high charitable giving and lack of Federal sales tax. The correlation with income tax rates wasn’t similarly vulnerable to removing the United States (in fact, it jumped up by about 12% when they were removed).

Here are the graphs. I’ve deliberately omitted trend lines because I’m a strong believer in the constellation test.


All the data available is in a publicly viewable Google Sheet.

I don’t think these data give a particularly clear answer about the likelihood of private charity replacing government-sponsored welfare programs in a hypothetical libertarian state. But they do suggest to me that the burden of proof should probably rest on libertarians. These results should make you view any claims that charitable giving is held back by the government with skepticism, but they should by no means prevent you from being convinced by good evidence.

I am happy to see that my results largely line up with better academic studies (as reported by the WSJ). It seems that over the past few decades, decreases in the tax rates of the highest income brackets have been associated with decreases in charitable giving, at least in the United States. Whether this represents a correlated increase in selfishness, or fewer individuals donating as the utility of donating decreases, is difficult to know.

The WSJ article also mentions that government grants to a charity reduce private donations by about 75% of the grant amount. I don’t know if this represents donations that are lost entirely, or merely substituted for other (presumably needier) charities. If it’s the first, then this would be strong evidence for the libertarian perspective. If it’s the latter, then it means that many people intuitively understand and accept the key effective altruism concept of “room for more funding”, at least as far as the government is concerned.


Finding good answers to the question of whether private charity would replace government welfare turned out to be harder than I thought. The main problem was the quality of data that is easily available. While it was easy to find statistics good enough for a simple, limited analysis, I wasn’t able to find a convenient table with all of the data I needed. This is where actual researchers have a huge advantage over random people on the internet. They have access to cheap labour in the volumes necessary to find and tabulate high quality data.

I’m very glad I posed the question to my friends before figuring out the answer. It never occurred to me to consider the effect of tax incentives on charitable giving. I’m now of the weakly held opinion that the main way taxes affect charitable donations is by offsetting the costs with rebates. I’m also fascinated by the extent to which people’s guesses tracked their political leanings. This shows that (on my Facebook wall, at least) people hold opinions that are motivated by a genuine desire to see the most effective possible government. Differing axioms and exposure to different data lead to differing conceptions of what this would be, but everyone is ultimately on the same team.

I will try and remember this next time I think someone’s preferred government policy is a terrible idea. It’s probably much more productive to try and figure out why they believe their policy objectives will lead to the best outcomes and argue about that, rather than slipping into clichéd insults.

I was also reminded that it’s fun and rewarding to spend a few hours doing data analysis (especially when you get the same results as studies that get reported on in the WSJ).