Ethics, Literature, Philosophy

Book Review: Utilitarianism for and against (Part 1)

Utilitarianism for and against is an interesting little book. It comprises back-to-back ~70 page essays, one in favour of utilitarianism and one opposed. As an overview, it’s hard to beat something like this. You don’t have to rely on one scholar to give you her (ostensibly fair and balanced) opinion; you get two articulate philosophers arguing their side as best they can. Fair and balanced is by necessity left as an exercise for the reader (honestly, it always is; here at least it’s explicit).

I’m going to cover the “for” side first. The “against” side will be in a later blog post. Both reviews are going to assume that you have some understanding of utilitarianism. If you don’t, go read my primer. Or be prepared to Google. I should also mention that I have no aspirations of being balanced myself. I’m a utilitarian; I had much more to disagree with on the “against” side than on the “for” side.

Professor J.J.C. Smart makes the arguments in favour of utilitarianism. According to his Wikipedia entry, he was known for “outsmarting” his opponents, that is to say, accepting the conclusions of their reductio ad absurdum arguments with nary a shrug. He was, I’ve gathered, not one for moral intuitions. His criticism of rule utilitarianism played a role in its decline and he was influential in raising the next crop of Australian utilitarians, among whom Peter Singer is counted. As near as I can tell, he was one of the more notable defenders of utilitarianism when this volume was published in 1973 (although much of his essay dates from a decade earlier).

Smart is emphatically not a rationalist (in the philosophical sense); he writes no “proof of utilitarianism” and denies that such a proof is even possible. Instead, Smart restricts himself to explaining how utilitarianism is an attractive ethical system for anyone possessed of general benevolence. Well, I’ll say “anyone”. The authors of this volume seem to be labouring under the delusion that only men have ethical dilemmas or the need for ethical systems. Neither one of them manages the ethicist’s coup of realizing that women might be viewed as full people at the remove of half a century from their time of writing (such a coup would perhaps have been strong evidence of the superiority of one philosophy over another).

A lot of Smart’s essay consists of showing how various different types of utilitarianism are all the same under the hood. I’ve termed these “collapses”, although “isomorphisms” might be a better term. There are six collapses in all.

The very first collapse put me in mind of the famous adage about ducks. If it walks like a duck, swims like a duck, and quacks like a duck, it is a duck. By the same token, if someone acts exactly how a utilitarian in their position and with their information would act, then it doesn’t matter if they are a utilitarian or not. From the point of view of an ethical system that cares only about consequences, they may as well be.

The next collapse deals with rule utilitarianism and may have a lot to do with its philosophical collapse. Smart points out that if you are avoiding “rule worship”, then you will face a quandary when you could break a rule in such a way as to gain more utility. Rule utilitarians sometimes claim that you just need rules with lots of exceptions and special cases. Smart points out that if you carry this through to its logical conclusion, you really are only left with one rule, the meta-rule of “maximize expected utility”. In this way, rule utilitarianism collapses into act utilitarianism.

Next into the compactor is the difference between ideal and hedonic utilitarians. Briefly, ideal utilitarians hold that some states of mind are inherently valuable (in a utilitarian sense), even if they aren’t particularly pleasant from the inside. “Better Socrates dissatisfied than a fool satisfied” is the rallying cry of ideal utilitarians. Hedonic utilitarians have no terminal values beyond happiness; they would gladly let almost the entirety of the human race wirehead.

Smart claims that while these differences are philosophically large, they are practically much less meaningful. Here Smart introduces the idea of the fecundity of a pleasure. A doctor taking joy (or grim satisfaction) in saving a life is a much more fecund pleasure than a gambler’s excitement at a good throw, because it brings about greater joy once you take into account everyone around the actor. Many of the other pleasures (like writing or other intellectual pursuits) that ideal utilitarians value are similarly fecund. They either lead to abatement of suffering (the intellectual pursuits of scientists) or to many people’s pleasure (the labour of the poet). Taking into account fecundity, it was better for Smart to write this essay than to wirehead himself, because many other people – like me – get to enjoy his writing and have fun thinking over the thorny issues he raises.

Smart could have stood to examine at greater length just why ideal utilitarians value the things they do. I think there’s a decent case to be made that societies figure out ways to value certain (likely fecund) pleasures all on their own, no philosophers required. It is not, I think, that ideal utilitarians have stumbled onto certain higher pleasures that they should coax their societies into valuing. Instead, their societies have inculcated them with a set of valued activities, which, due to cultural evolution, happen to line up well with fecund pleasures. This is why it feels difficult to argue with the list of pleasures ideal utilitarians proffer; it’s not that they’ve stumbled onto deep philosophical truths via reason alone, it’s that we have the same inculcations they do.

Beyond simple fecundity though, there is the fact that the choice between Socrates dissatisfied and a fool satisfied rarely comes up. Smart has a great line about this:

But even the most avid television addict probably enjoys solving practical problems connected with his car, his furniture, or his garden. However unintellectual he might be, he would certainly resist the suggestion that he should, if it were possible, change places with a contented sheep, or even a happy and lively dog.

This boils down to: ‘ideal utilitarians assume they’re a lot better than everyone else, what with their “philosophical pursuits”, but most people don’t want purely mindless pleasures’. Combined, these ideas of fecundity and hidden depths point to a vanishingly small gap between ideal and hedonic utilitarians, especially compared to the gap between utilitarians and practitioners of other ethical systems.

After dealing with questions of how highly we should weigh some pleasures, Smart turns to address the idea of some pleasures not counting at all. Take, for example, the pleasure that a sadist takes in torturing a victim. Should we count this pleasure in our utilitarian moral calculus? Smart says yes, for reasons that again boil down to “the view that certain pleasures are bad is an artifact of culture; no pleasure is intrinsically bad”.

(Note however that this isn’t the same thing as Smart condoning the torture. He would say that the torture is wrong because the pleasure the sadist gains from it cannot make up for the distress of the victim. Given that no one has ever found a real live utility monster, this seems a safe position to take.)

In service of this, Smart presents a thought experiment. Imagine a barren universe inhabited by a single sentient being. This sentient being wrongly believes that there are many other inhabitants of the universe being gruesomely tortured and takes great pleasure in this thought. Would the universe be better if the being didn’t derive pleasure from her misapprehension?

The answer here for both Smart and me is no (although I suspect many might disagree with us). Smart reasons (almost tautologically) that since there is no one for this being to hurt, her predilection for torture can’t hurt anyone. We are rightfully wary of people who unselfconsciously enjoy the thought of innocents being tortured because of what it says about what their hobbies might be. But if they cannot hurt anyone, their obsession is literally harmless. This bleak world would not be better served by its single sentient inhabitant quailing at the thought of the imaginary torture.

Of course, there’s a wide gap between the inhabitant curled up in a ball mourning the torture she wrongly believes to be ongoing and her simple indifference to it. It seems plausible that many people could consider her indifference preferable, even if they did not wish her to be sad. But imagine instead the difference between her being lonely and bored and her being satisfied and happy (leaving aside for a moment the torture). It is clear here which is the better universe. Given a way to move from the universe with a single bored being to the one with a single fulfilled being, shouldn’t we take it, given that the shift most literally harms no one?

This brings us to the distinction between intrinsically bad pleasures and extrinsically bad pleasures – the flip side of the intrinsically more valuable states of mind of the ideal utilitarian. Intrinsically bad pleasures are pleasures that for some rationalist or metaphysical reason are just wrong. Their rightness or wrongness must of course be vulnerable to attacks on the underlying logic or theology, but I can hardly embark on a survey of common objections to all the common underpinnings; I haven’t the time. But many people have undertaken those critiques and many will in the future, making a belief in intrinsically bad pleasures a most unstable place to stand.

Extrinsically bad pleasures seem like a much safer proposition (and much more convenient to the utilitarian who wishes to keep their ethical system free of metaphysical or metaethical baggage). To say that a pleasure is extrinsically bad is simply to say that enjoying it causes so much misery that it will practically never be moral to experience it. Similar to how I described ideal utilitarian values as heavily culturally influenced, I can’t help but feel that seeing some pleasures as intrinsically bad has to be the result of some cultural conditioning.

If we can accept that certain pleasures are not intrinsically good or ill, but that many pleasures thought of as intrinsically good or ill are thought so because of long cultural experience – positive or negative – with the consequences of seeking them out, then the position of utilitarians who believe that some pleasures cannot be counted in the plus column collapses into approximately the position of those who hold that they can, even if neither accepts the view of the other. The utilitarian who refuses to believe in intrinsically bad pleasures should still condemn most of the same actions as one who does, because she knows that these pleasures will be outweighed by the pains they inflict on others (like the pain of the torture victim overwhelming the joy of the torturer).

There is a further advantage to holding that pleasures cannot be intrinsically wrong. If we accept the post-modernist adage that knowledge is created culturally, we will remember to be skeptical of the universality of our knowledge. That is to say, if you hold a list of intrinsically bad pleasures, it will probably not be an exhaustive list and there may be pleasures whose ill-effects you overlook because you are culturally conditioned to overlook them. A more thoughtful utilitarian who doesn’t take the short-cut of deeming some pleasures intrinsically bad can catch these consequences and correctly advocate against these ultimately wrong actions.

The penultimate collapse is perhaps the least well supported by arguments. In a scant page, Smart addresses the differences between total and average happiness in a most unsatisfactory fashion. He asks which of two universes you might prefer: one with one million happy, healthy people, or one with twice as many people, equally happy and healthy. Both Smart and I feel drawn to the larger universe, but he has no arguments for people who prefer the smaller. Smart skips over the difficulties here with an airy statement of “often the best way to increase the average happiness is to increase the total happiness and vice versa”.

I’m not entirely sure this statement is true. How would one go about proving it?

Certainly, average happiness seems to miss out on the (to me) obvious good that you’d get if you could have twice as many happy people (which is clearly one case where they give different answers), but like Smart, I have trouble coming up with a persuasive argument why that is obviously good.
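To put numbers on the disagreement (my toy framing, not Smart’s), consider his two universes in a few lines of Python. Doubling a uniformly happy population doubles the total but leaves the average untouched, which is exactly where the two standards part ways:

```python
# Smart's two universes: everyone is equally happy and healthy in both.
happiness_per_person = 10

for name, population in [("small universe", 1_000_000),
                         ("large universe", 2_000_000)]:
    total = happiness_per_person * population
    average = total / population
    print(f"{name}: total={total:,}, average={average}")

# Total utilitarianism prefers the large universe (twice the happiness);
# average utilitarianism is indifferent (the average never moves).
```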

I do have one important thing of my own to say about the difference between average and total happiness. When I imagine a world with more people who are on average less happy than the people who currently exist (but collectively experience a greater total happiness), I feel an internal flinch.

Unfortunately for my moral intuitions, I feel the exact same flinch when I imagine a world with many fewer people, who are on average transcendentally happy. We can fiddle with the math to make this scenario come out to have greater average and total happiness than the current world. Doesn’t matter. Exact same flinch.

This leads me to believe that my moral intuitions have a strong status quo bias. The presence of a status quo bias in itself isn’t an argument for either total or average utilitarianism, but it is a reminder to be intensely skeptical of our response to thought experiments that involve changing the status quo and even to be wary of the order that options are presented in.

The final collapse Smart introduces is that between regular utilitarians and negative utilitarians. Negative utilitarians believe that only suffering is morally relevant and that the most important moral actions are those that have the consequence of reducing suffering. Smart points out that you can raise both the total and average happiness of a population by reducing suffering and furthermore that there is widespread agreement on what reduces suffering. So Smart expects utilitarians of all kinds (including negative) to primarily focus on reducing suffering anyway. Basically, despite the profound philosophical differences between regular and negative utilitarians, we should expect them to behave equivalently. Which, by the very first collapse (if it walks like a duck…), shows that we can treat them as philosophical equivalents, at least in the present world.

In my experience, this is more or less true. Many of the negative utilitarians I am aware of mainly exercise their ethics by donating 10% of their income to GiveWell’s most effective charities. The regular utilitarians… do the exact same. Quack.

As far as I can tell, Smart goes to all this work to show how many forms of utilitarianism collapse together so that he can present a system that isn’t at war with itself. Being able to portray utilitarianism as a simple, unified system (despite the many ways of doing it) heads off many simple criticisms.

While I doubt many people avoided utilitarianism because there are lingering questions about total versus average happiness, per se, these little things add up. Saying “yes, there are a bunch of little implementation details that aren’t agreed upon” is a bad start to an ethical system, unless you can immediately follow it up with “but here’s fifty pages of why that doesn’t matter and you can just do what comes naturally to you (under the aegis of utilitarianism)”.

Let’s talk a bit about what comes naturally to people outside the context of different forms of utilitarianism. No one, not even Smart, sits down and does utilitarian calculus before making every little decision. For most tasks, we can ignore the ethical considerations (e.g. there is broad, although probably not universal, agreement that there aren’t hidden moral dimensions to opening a door). For some others, our instincts are good enough. Should you thank the woman at the grocery store checkout? You probably will automatically, without pausing to consider if it will increase the total (or average) happiness of the world.

Like in the case of thanking random service industry workers, there are a variety of cases where we actually have pretty good rules of thumb. These rules of thumb serve two purposes. First, they allow us to avoid spending all of our time contemplating if our actions are right or wrong, freeing us to actually act. Second, they protect us from doing bad things out of pettiness or venality. If you have a strong rule of thumb that violence is an inappropriate response to speech you disagree with, you’re less likely to talk yourself into punching an odious speaker in the face when confronted with them.

It’s obviously important to pick the right heuristics. You want to pick the ones that most often lead towards the right outcomes.

I say “heuristics” and “rules of thumb” because the thing about utilitarians and rules is that they always have to be prepared to break them. Rules exist for the common cases. Utilitarians have to be on guard for the uncommon cases, the ones where breaking a rule leads to greater good overall. Having a “don’t cause people to die” rule is all well and good. But you need to be prepared to break it if you can only stop mass death from a runaway trolley by pushing an appropriately sized person in front of it.

Smart seems to think that utilitarianism only comes up for deliberative actions, where you take the time to think about them, and that it shouldn’t necessarily cover your habits. This seems like an abdication to me. Shouldn’t a clever utilitarian, realizing that she only uses utilitarianism for big decisions, spend some time training her reflexes to more often give the correct utilitarian solution, while also training herself to be more careful of her rules of thumb and think ethically more often? Smart gave no indication that he thinks this is the case.

The discussion of rules gives Smart the opportunity to introduce a utilitarian vocabulary. An action is right if it is the one that maximizes expected happiness (crucially, this is a summation across many probabilities and isn’t necessarily the action that will maximize the chance of the happiest outcome) and wrong otherwise. An action is rational if a logical being in possession of all the information you possess would think you to be right if you did it. All other actions are irrational. A rule of thumb, disposition, or action is good if it tends to lead to the right outcomes and bad if it tends to lead to the wrong ones.
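To make that parenthetical concrete, here’s a minimal sketch with invented numbers. The right action maximizes the probability-weighted sum of happiness, which is not the same as maximizing your chance at the single happiest outcome:

```python
# Each hypothetical action is a list of (probability, happiness) outcomes.
gamble = [(0.5, 100), (0.5, 0)]  # best shot at the happiest possible outcome
steady = [(1.0, 60)]             # guaranteed moderate happiness

def expected_happiness(action):
    """A summation across many probabilities, per Smart's vocabulary."""
    return sum(p * h for p, h in action)

print(expected_happiness(gamble))  # 50.0
print(expected_happiness(steady))  # 60.0
# 'steady' is the right action: higher expected happiness, even though
# 'gamble' offers the better chance of the happiest outcome (100).
```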

This vocabulary becomes important when Smart talks about praise, which he believes is an important utilitarian concern in its own right. Praise increases people’s propensity towards certain actions or dispositions, so Smart believes a utilitarian ought to consider whether the world would be better served by more of the same before she praises anything. This leads to Smart suggesting that utilitarians should praise actions that are good or rational even if they aren’t right.

It also implies that utilitarians doing the right thing must be open to criticism if it requires bad actions. One example Smart gives is a utilitarian Frenchman cheating on wartime rationing in 1940s England. The Frenchman knows that the Brits are too patriotic to cheat, so his action (and the actions of the few others that cheat) will probably fall below the threshold for causing any real harm, while making him (and the other cheaters) happier. The calculus comes out positive and the Frenchman believes it to be the right action. Smart acknowledges that this logic is correct, but he points out that by similar logic, the Frenchman should agree that he must be severely punished if caught, so as to discourage others from doing the same thing.

This actually reminds me of something Hannah Arendt brushed up against in Eichmann in Jerusalem while talking about how the moral constraints on people are different than the ones on states. She gives the example of Soghomon Tehlirian, the Armenian exile who assassinated one of the triumvirate of Turkish generals responsible for the Armenian genocide. Arendt believes that it would have been wrong for the Armenian government to assassinate the general (had one even existed at the time), but that it was right for a private citizen to do the deed, especially given that Tehlirian did not seek to hide his crimes or resist arrest.

From a utilitarian point of view, the argument would go something like this: political assassinations are bad, in that they tend to cause upheaval, chaos, and ultimately suffering. On the other hand, there are some leaders who the world would clearly be better off without, if not to stop their ill deeds in their tracks, then to strike fear and moderation into the hearts of similar leaders.

Were the government of any country to carry out these assassinations, it would undermine the government’s ability to police murder. But when a private individual does the deed and then immediately gives herself up into the waiting arms of justice, the utility of the world is increased. If she has erred in picking her target and no one finds the assassination justified, then she will be promptly punished, disincentivizing copy-cats. If instead, like Tehlirian, she is found not guilty, it will only be because the crimes committed by the leader she assassinated were so brutal and clear that no reasonable person could countenance them. This too sends a signal.

That said, I think Smart takes his distinctions between right and good a bit too far. He cautions against trying to change the non-utilitarian morality of anyone who already tends towards good actions, because this might fail half-way, weakening their morality without instilling a new one. Likewise, he is skeptical of any attempt to change the traditions of a society.

This feels too much like trying to have your cake and eat it too. Utilitarianism can be criticized because it is an evangelical ethical system that gives results far from moral intuitions in some cases. From a utilitarian point of view, it is fairly clearly good to have more utilitarians willing to hoover up these counter-intuitive sources of utility. If all you care about are the ends, you want more people to care about the best ends!

If the best way to achieve utilitarian ends wasn’t through utilitarianism, then we’re left with a self-defeating moral system. In trying to defend utilitarianism from the weak critique that it is pushy and evangelical, both in ways that are repugnant to all who engage in cultural or individual ethical relativism and in ways that are repugnant to some moral intuitions, Smart opens it up to the much stronger critique that it is incoherent!

Smart by turns seems to seek to rescue some commonly held moral truths when they conflict with utilitarianism while rejecting others that seem no less contradictory. I can hardly say that he seems keen to show utilitarianism is in fact in harmony with how people normally act – he clearly isn’t. But he also doesn’t always go all (or even part of) the way in choosing utilitarianism over moral intuitions.

Near the end of the book, when talking about a thought experiment introduced by one McCloskey, Smart admits that the only utilitarian action is to frame and execute an innocent man, thereby preventing a riot. McCloskey anticipated him, saying: “But as far as I know, only J.J.C. Smart among the contemporary utilitarians is happy to adopt this ‘solution’”.

Smart responds:

Here I must lodge a mild protest. McCloskey’s use of the word ‘happy’ surely makes me look a most reprehensible person. Even in my most utilitarian moods, I am not happy about this consequence of utilitarianism… since any injustice causes misery and so can be justified only as the lesser of two evils, the fewer the situations in which the utilitarian is forced to choose the lesser of two evils, the better he will be pleased.

This is also the man who said (much as I have) that “admittedly utilitarianism does have consequences which are incompatible with the common moral consciousness, but I tended to take the view ‘so much the worse for the common moral consciousness’.”

All this leaves me baffled. Why the strange mixture? Sometimes Smart goes far further than it seems any of his contemporaries would have. Other times, he stops short of what seems to me the truly utilitarian solution.

On the criticism that utilitarianism compels us always in moral action, leaving us no time to relax, he offers two responses. The first is that perhaps people are too unwilling to act and would be better served by being more spurred on. The second is that it may be that relaxing today allows us to do ten times the good tomorrow.

(Personally, I expect the answer is both. Many people could do more than they currently do, while many others risk burnout unless they relax more. There is a reason the law of equal and opposite advice exists. Different people need to hear different things.)

But set this and his support for rules of thumb on one side against his support for executing the innocent man, or his long spiel on how a bunch of people wireheading wouldn’t be that bad (a spiel that convinced me, I might add), and I’m left with an unclear overall picture. As an all-is-fine defence of utilitarianism, it doesn’t go far enough. As a bracing lecture about our degenerate non-utilitarian ways, it also doesn’t go far enough.

Leaving, I suppose, the sincere views of a man who pondered utilitarianism for much longer than I have; sincerity is the only explanation that makes sense. This would imply that sometimes Smart gives a nod to traditional morality because he’s decided it aligns with his utilitarian ethics. Other times, he disagrees. At length. Maybe Smart is a man seeking to rescue what precious moral truths he can from the house fire that is utilitarianism.

Perhaps some of my confusion comes from another confusion, one that seems to have subtly infected many utilitarians. Smart is careful to point out that the atomic belief underlying utilitarianism is general benevolence. Benevolence, note, is not altruism. The individual utilitarian matters just as much – or as little – as everyone else. Utilitarians in Smart’s framework have no obligation to run themselves ragged for another. Trading your happiness for another’s will only ever be an ethically neutral act to the utilitarian.

Or, I suspect, the wrong one. You are best placed to know yourself and best placed to create happiness for yourself. It makes sense to include some sort of bias towards your own happiness to take this into account. Or, if this feels icky to you, you could handle it at the level of probabilities. You are more likely to make yourself happy than someone else (assuming you’ve put some effort towards understanding what makes you happy). If you are 80% likely to make yourself happy for an evening and 60% likely to make someone else happy, your clear utilitarian duty is to yourself.
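With made-up numbers (and assuming, for simplicity, that an evening of happiness is worth the same whoever gets it), the comparison is just probability-weighted value:

```python
value_of_happy_evening = 1.0  # assume equal utility whoever enjoys it

p_self = 0.8   # you know what makes you happy
p_other = 0.6  # you're guessing at what makes someone else happy

print("self: ", p_self * value_of_happy_evening)   # 0.8
print("other:", p_other * value_of_happy_evening)  # 0.6
# Expected happiness favours the evening spent on yourself, purely
# because of the probabilities, not because you matter more.
```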

This is not a suggestion to go become a hermit. Social interactions are very rarely as zero sum as all that. It might be that the best way to make yourself happy is to go help a friend. Or to go to a party with several people you know. But I have seen people risk burnout (and have risked it myself) by assuming it is wrong to take any time for themselves when they have friends in need.

These are all my own thoughts, not Smart’s. For all of his talk of utilitarianism, he offers little advice on how to make it a practically useful system. All too often, Smart retreats to the idea of measuring the total utility of a society or world. This presents a host of problems and raises two important questions.

First, can utility be accurately quantified? Smart tries to show that different ways of measuring utility should be roughly equivalent in qualitative terms, but it is unclear if this follows at a quantitative level. Stability analysis (where you see how sensitive your result is to different starting assumptions) is an important tool for checking the robustness of conclusions in engineering projects. I have a hunch that quantitatively, utilitarian answers to many problems will be highly unstable when a variety of forms of utilitarianism are tried.
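Here’s a toy version of that hunch (my own construction; Smart suggests nothing like it): score the same actions under several utilitarian aggregation rules and check whether the ranking survives. If it flips, the conclusion is quantitatively unstable:

```python
# Happiness levels each action produces across a small population (invented).
actions = {
    "fund_arts":     [5, 5, 5, 5],
    "fund_medicine": [9, 9, 1, 2],
}

# Different utilitarian starting assumptions, as aggregation rules.
aggregators = {
    "total":    lambda hs: sum(hs),
    "average":  lambda hs: sum(hs) / len(hs),
    # negative utilitarianism: only suffering (shortfall below 5) counts
    "negative": lambda hs: -sum(max(0, 5 - h) for h in hs),
}

for name, agg in aggregators.items():
    best = max(actions, key=lambda a: agg(actions[a]))
    print(f"{name:8s} prefers {best}")
# total and average both prefer fund_medicine (21 > 20, 5.25 > 5),
# but the negative rule prefers fund_arts (0 > -7): an unstable result.
```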

Second, how should we deal with utility in the future? Smart claims that beyond a certain point we can ignore side effects (as unintended good side effects should cancel out unintended ill side effects; this is especially important when it comes to things like saving lives) but that doesn’t give us any advice on how we can estimate effects.

We are perhaps saved here by the same collapse that aligned normal utilitarians with negative utilitarians. If we cannot quantify joy, we can sure quantify misery. Doctors can tell you just how much quality of life a disease can sap (there are tables for this), not to mention the chances that a disease might end a life outright. We know the rates of absolute poverty, maternal deaths, and malaria prevalence. There is more than enough misery in the world to go around and certainly utilitarians who focus on ending misery do not seem to be at risk of running out of ethical duties any time in the near future.

(If ending misery is important to you, might I suggest donating a fraction of your monthly income to one of GiveWell’s top recommended charities? These are the charities that most effectively use money to reduce suffering. If you care about maximizing your impact, GiveWell is a good way to do it.)

Although speaking of the future, I find it striking how little utilitarianism has changed in the fifty-six years since Smart first wrote his essay. He pauses to comment on the risk of a recursively self-improving AI and to talk about the potential future moral battles over factory farming. I’m part of a utilitarian meme group and these are the same topics people joke about every day. It is unclear if these are topics that utilitarianism predisposes people to care about, or if there was some indirect cultural transmission of these concerns over the intervening years.

There are many more gems – and frustrations – in Smart’s essay. I can’t cover them all without writing a pale imitation of his words, so I shan’t try any more. As an introduction to the different types of utilitarianism, this essay was better than any other introduction I’ve read, especially because it shows all of the ways that various utilitarian systems fit together.

As a defense of utilitarianism, it is comprehensive and pragmatic. It doesn’t seek to please everyone and doesn’t seek to prove utilitarianism. It lays out the advantages of utilitarianism clearly, in plain language, and shows how the disadvantages are not as great as might be imagined. I can see it being persuasive to anyone considering utilitarianism, although in this it is hampered by its position as the first essay in the collection. Anyone convinced by it must then read through another seventy pages of arguments against utilitarianism, which will perhaps leave them rather less convinced.

As a work of academic philosophy, it’s interesting. There’s almost no metaethics or metaphysics here. This is a defense written entirely on its own, without recourse to underlying frameworks that might be separately undermined. Smart’s insistence on laying out his arguments plainly leaves him little room to retreat (except around average vs. total happiness). I’ve always found this a useful type of writing; even when I don’t agree, the ways that I disagree with clearly articulated theses can be illuminating.

It’s a pleasant read. I’ve had mostly good luck reading academic philosophy. This book wasn’t a struggle to wade through and it contained the occasional amusing turn of phrase. Smart is neither dry lecturer nor frothing polemicizer. One is put almost in mind of a kindly uncle, patiently explaining his way through a complex, but not needlessly complicated subject. I highly recommend reading it and its companion.

Advice, Literature

Six Steps to a Daily Writing Habit

I identify so strongly as a person who writes daily that I sometimes find myself bowled over by the fact that I haven’t always done it.

Since my first attempt to write a novel (at age 13), I’ve known that I really enjoy writing. The problem was that I could never really get myself to write. I managed the occasional short story for a contest and I pulled off NaNoWriMo when I was 20, but even after that, writing remained something that happened almost at random. Even when I had something I really wanted to write it was a toss-up as to whether I would be able to sit down and get it on a page.

This continued for a while. Up until January 1st, 2015, I had written maybe 100,000 words. Since then, I’ve written something like 650,000. If your first million words suck – as is commonly claimed – then I’m ¾ of the way to writing non-sucking words.

What changed in 2015? I made a New Year’s Resolution to write more. And then, when that began to fall apart a few months later (as almost all New Year’s Resolutions do), I sought out better commitment devices.

Did you read my first paragraph and feel like it describes you? Do you want to stop trying to write and start actually writing? If your brain works like mine, you can use what I’ve learned to skip over (some of) the failing part and go right to the writing every single day part [1].

Step 1: Cultivate Love

I like having completed writing projects to show off as much as the next person, but I also enjoy the act of writing. If you don’t actually enjoy writing, you may have a problem. My techniques are designed to help people (like me) who genuinely enjoy writing once they get going but have trouble forcing themselves to even start.

If you find writing to be a grim chore, but want to enjoy writing so that you can have the social or financial benefits (heh) of writing, then it will be much harder for you to write regularly. If you aren’t sure if this describes you or not, pause and ask yourself: would writing every day still be worth it if no one ever read what I wrote and I never made a single cent off of it? There’s nothing wrong with preferring that people read what you write and preferring to make money off of writing if possible, but it is very helpful if you’re willing to write even without external validation.

Writing (at least partially) for the sake of writing means that you won’t become discouraged if your writing never “takes off”. Almost no one sees success (measured in book deals, blog traffic, or Amazon downloads) right away. So being able to keep going in the face of the world’s utter indifference is a key determinant of how robust your writing habit will be.

If you don’t like writing for its own sake, don’t despair completely. It’s possible you might come to love it if you spend more time on it. As you start to write regularly, try out lots of things and figure out what you like and dislike. It can be hard to tell the difference between not liking writing and not liking the types of writing you’ve done.

For example, I’m a really exploratory writer. I’ve found that I don’t enjoy writing if there’s a strict outline I’m trying to follow or if I’m constrained by something someone else has written. Fanfiction is one of the common ways that new writers develop their skills, but I really dislike writing fanfiction. Realizing this has allowed me to avoid a bunch of writing that I’d find tedious. Tedious writing is a big risk to your ability to write daily, so if you can reasonably avoid it, you should.

Step 2: Start Small

When learning a new skill or acquiring a new habit, it’s really tempting to try and dive right in and do everything at once. I’d like to strongly discourage this sort of thing. If you get overwhelmed right at the start, you’re unlikely to stick with it. Sometimes jumping right into the deep end teaches you to swim, sure. But sometimes you drown. Or develop a fear of water.

It isn’t enough to set things up so that you’ll be fine if everything goes as planned. A good starting level is something that won’t be hard even if life gets in the way. Is your starting goal achievable even if you had to work overtime for the next two weeks? If not, consider toning it down a bit.

You should set a measurable, achievable, and atomic goal. In practice, measurable means numeric, so I’d recommend committing to a specific number of words each day or a specific amount of daily time writing. Here Beeminder will be your best friend [2].

Beeminder is a service that helps you bind your future self to your current goals. You set up a goal (like writing 100,000 words) and a desired daily progress (say, 200 words each day) towards that goal. Each day, Beeminder will make sure you’ve made enough progress towards your desired end-state. If you haven’t, Beeminder charges your credit card (you can choose to pay anywhere from $5 to $2430). Fail again and it charges you more (up to a point; you can set your own maximum). In this way, Beeminder can “sting” you into completing your goals.
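Mechanically, a goal is just a line on a graph: your cumulative output has to stay at or above your committed daily rate. Here’s a rough sketch of the mechanics as I understand them; the pledge ladder is illustrative (check Beeminder itself for the real schedule), and real goals have details, like post-derailment respites, that I’m ignoring:

```python
# Rough model of a Beeminder goal: cumulative words vs. a daily-rate line.
daily_rate = 200                                 # committed words per day
pledge_ladder = [5, 10, 30, 90, 270, 810, 2430]  # illustrative amounts

words_per_day = [250, 200, 0, 350, 400, 0, 0]    # a hypothetical week

cumulative, level, paid = 0, 0, 0
for day, words in enumerate(words_per_day, start=1):
    cumulative += words
    if cumulative < daily_rate * day:  # below the line: you derail
        paid += pledge_ladder[level]
        level = min(level + 1, len(pledge_ladder) - 1)  # the sting escalates
        cumulative = daily_rate * day  # simplification: reset to the line

print(f"derailments cost ${paid} this week")  # $15 in this toy run
```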

For the first few months of my writing habit, I tracked my daily words in a notebook. This fell apart during my final exams. I brought in Beeminder at the start of the next month to salvage the habit and it worked like a charm. Beeminder provided me a daily kick in the pants to get writing; it made me unlikely to skip writing out of laziness, tiredness, or lack of a good idea.

Beeminder only works for numeric goals, so there’s the first of the triad I mentioned covered.

Next, your goal should be achievable; something you have no doubt you can do. Not something some idealized, better, or perfect version of you could do. Something you, with all your constraints and flaws are sure you can achieve. Don’t worry about making this too small. Fifty or one hundred words per day is a perfectly adequate starter goal.

Lastly, atomic. Atomic goals can’t be broken down any further. Don’t start by Beeminding blog posts or, gods forfend, novels! Pick the smallest unit of writing you can, probably either time or word count, and make your goal about this. When you’re Beeminding words or time, you can’t fail and get discouraged for lack of ideas or “writer’s block” [3]. It’s much better to spend a week writing detailed journal entries every day (or even a detailed description of your bedroom) than it is to spend a week not writing because you can’t think of what to write.

My recommended starter goals are either: write 150 words each day or write for 15 minutes each day. Both of these are easy to Beemind and should be easy for most people to achieve.

Step 3: Acquire Confidence

Even with goals that easy, your first few days or weeks may very well be spent just barely meeting them, perhaps as Beeminder breathes down your neck. Writing is like exercise. It’s surprising how hard it can be to do it every day if you’re starting from nothing.

Here’s the start of my very first Beeminder writing goal. You’ll notice that I started slowly, panicked and wrote a lot, then ran into trouble and realized that I needed to tone things down a bit. It wasn’t until almost four months in that I finally hit my stride and started to regularly exceed my goal.

Dip below the yellow and orange line and you pay up. Green data points mean I had at least three safe days before paying Beeminder. Blue data points are two days. Orange is one.

You can see a similar pattern when I started Beeminding fiction:

The trouble at the beginning is growing pains. The trouble around the end of October came from dropping out of graduate school, moving back home, and beginning a job search.

And when I started Beeminding time spent writing:

Those little spurs three data points into the time graph and seven into the fiction one? That’s where I failed to keep up and ended up giving Beeminder money. They call this “derailing”.

It may take a few derailments, but you should eventually find yourself routinely exceeding your starting goal (if you don’t, either this advice doesn’t work well for you, or you set your original goal too high). Be careful of allowing success to ruin your habit; try and write at least X words each day, not X words each day on average over the course of a week.

The number of days before you derail on a goal in Beeminder is called “safety buffer”. For outputs you intend to Beemind daily, I recommend setting yourself up so that you can have no more than two days of safety buffer. This lets you save up some writing for a busy day or two, but doesn’t let you skip a whole week. If you have a premium plan, Beeminder allows you to automatically cap your safety buffer, but you can also do it manually if you’re disciplined (I did this for many months until I could afford a premium plan).

The set-up on my daily writing time goal.

When you get to the point of regularly trimming your safety buffer you’re almost ready to move on up. Once you’re really, really sure you can handle more (i.e. exceeded your minimum every day for two weeks), slowly increase your commitment. You don’t want to get too cocky here. If you’re currently aiming for 150 words/day and 9 days out of 10 you write 250, set your new goal to 200, not 250. You want to feel like you’re successfully and competently meeting your goal, not like you’re hanging on by the skin of your teeth.

Step 4: Make Molecules

Once you become comfortable with your atomic goals and find stable long term resting spots for them, you can start to Beemind more complex outputs. This is using Beeminder to directly push you towards your goals. Want to work on your blog? Beemind blog posts. Want to work on a book? Beemind pages or chapters or scenes. Want to keep a record of your life? Beemind weekly journals.

These are all complicated outputs made up of many words or minutes of writing. You won’t finish them as regularly. It’s easy to sit down and crank out enough words in an hour to hit most word count goals. But these larger outputs might not be achievable in a single day, especially if you have work or family commitments. That’s why you want your writing habit well established and predictable by the time you take them on.

Remember, you don’t want to set yourself up for failure if it’s at all avoidable. Don’t take on a more complex output as a Beeminder goal until you have a sense of how long it will take you to produce each unit of it and always start at a rate where you’re sure you can deliver. Had a few weeks of finishing one chapter a week? Start your Beeminder goal at one chapter every ten days.

It’s easy to up your Beeminder goal when you find it’s too lenient. It’s really hard to get back into writing after a string of discouragements caused by setting your goals too aggressively.

Even when you manage to meet overambitious goals, you might suffer for it in other ways. I’m not even talking about your social life or general happiness taking a hit (even though those are both very possible). Stretching yourself too thin can make your writing worse!

I had a period where I was Beeminding regularly publishing writing at a rate faster than I was really capable of. I managed to make my goal anyway, but I did it by writing simple, low-risk posts. I shoved aside some of the more complex and rewarding things I was looking forward to writing because I was too stubborn to ease back on my goal. It took me months to realize that I’d messed up and get rid of the over-ambitious goal.

It was only after I dialed everything back and gave myself more space to work that I started producing things I was really proud of again. That period with the overambitious goal stands out as one of the few times since I started writing again where I produced nothing I’m particularly proud of.

Tuning down the publishing goal didn’t even cause me to write less. I didn’t dial back my atomic goals, just my more complicated one, so I was still writing the same amount. When I was ready to begin publishing things I’d written again, I started the goal at a much lower rate. After a few months of consistently exceeding it, I raised the rate.

Here’s what my original goal looked like:

It’s hard to see, but I derailed five times between the end of December 2015 and the start of May 2016. I wasn’t derailing much in March and April, but I also wasn’t writing anything I was proud of. It was a terrible dilemma. Do I write thoughtful posts and lose money? Or do I churn out work I’m not proud of to save me the costs of a derailment? I think now I’d rather the derailments. At least I liked what I was writing when I was derailing.

Here’s my new blogging goal:

No derailments here! I started this at the rate of one post per month and only increased the slope at the end of March after I’d proved to myself that I wouldn’t have any problems keeping up the pace. It was a near thing at the start of May after two weeks of vacation where I’d had less chance to write than I hoped, but it turned out okay. In retrospect, it probably would have been smarter to increase the rate after my vacation, not before.

As you can see, I learned my lesson about over-ambition.

Step 5: Vanquish Guilt

At the same time as you work on Beeminding more complex outputs, you will want to be examining and replacing the guilt-based motivation structure you may have built to get there.

Guilt can be a useful motivator to do the bare minimum on a project; guilt (and terror) is largely what got me through university. But guilt is a terrible way to build a long-term habit. If writing is something you do to avoid a creeping guilt, you may start to associate negative feelings with writing; if you started a writing habit because you love writing, then you’re risking that very love if you motivate yourself solely with guilt.

I recommend looking at Beeminder not as a tool to effectively guilt yourself into writing, but as a reminder of what writing is worth to you. You value consistently writing at $X. You know that every time you skip writing for a day or a week, there is a Y% chance that you might lose the habit. Multiply those two together and you get your ideal maximum Beeminder pledge.
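As a worked example with invented numbers: if a durable writing habit is worth $3,000 to you and each skipped night carries a 10% chance of unravelling it, then a skipped night costs you $300 in expectation, and pledges above that overstate what’s actually at stake:

```python
habit_value = 3000.0  # $X: what keeping the habit is worth to you (invented)
p_habit_loss = 0.10   # Y: chance one skipped night kills the habit (invented)

max_pledge = habit_value * p_habit_loss
print(f"ideal maximum pledge: ${max_pledge:.0f}")  # $300
```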

$0 pledges are another fancy premium Beeminder feature. It’s only $0 once though. Every time I derail, it goes up to the next highest pledge level. It takes a full week to lower your pledge by one level, so bee careful.

It’s entirely rational to choose to derail on Beeminder if you value something else more than you value writing just then. Here Beeminder is helping you make this trade-off explicit. You may know that not writing tonight costs you $Z of estimated future utility (this doesn’t necessarily mean future earnings; it could also represent how much writing is worth to you as entertainment), but without Beeminder you wouldn’t be facing it head on. When you can directly compare the utility of two ways to spend your time, you can make better decisions and trade-offs.

That said, it rarely comes to mutual exclusion. Often Beeminder prompts me to find a way to write, even if there’s something else I really want to do that partially conflicts. Things that I might lazily view as mutually exclusive often turn out not to be, once there’s money on the line.

It may seem hard to make this leap, especially when you start out with Beeminder. But after two years of regular Beeminder use, I can honestly say that it doesn’t guilt me into anything. Even when it forces me to write, the emotional tone isn’t quite guilt. Beeminder is an effective goad because it helps me see the causal chain between writing tonight and having a robust writing habit. I write because I’m proud of the amount I write and I want to keep being proud of it. I’m not spurring myself with guilt and using that negativity to move forward. I’m latching onto the pride I want to be able to feel and navigating towards that.

Mere reminders to write are the least of what I get out of Beeminder though. Beeminder became so much more effective for me once I started to regularly surpass my goals. Slowly, I began to be motivated mostly by exceeding them and that motivation led me to exceed them by ever greater margins and enjoy every minute of it.

For more about the perils of guilt as a motivational tool (and some suggestions on how to replace it), check out the replacing guilt sequence on Nate Soares’ blog, Minding Our Way. For a TL;DR, try “Don’t steer with guilt”.

Step 6: Success Spiral

This is the part where everything starts to come together. When you get here, guilt-based motivation is but a dim memory. You write because you want to. Beeminder helps keep you on track, but you’re more likely to spend a bit of extra time writing to see the spike in your graphs than you are because you’ll derail otherwise.

When you get to this point (or earlier, depending on how you like to work), something like Complice can really help you make the most of all your motivation. Complice helps you tie your daily actions into the long- and medium-term goals you’ve set. It has a kickass Beeminder integration that makes Beeminding as easy as checking off a box. It has integrated Pomodoro timers for tracking how much time you work (and can send the results to Beeminder). It allows you and a friend to sign up as accountability buddies and see what the other gets done [4]. And it shows you how much work you’ve done in the past, allowing you to use the “don’t break the chain” productivity hack if it works for you (it works for me).

The last few weeks of my writing goal in Complice. For two of the days in the pictured period, I only wrote because I couldn’t bear to lose my streak of days where I’d hit my writing goal.

As I finish off this piece, I find myself tired and lethargic. It’s not that I particularly want to be writing (although some of the tiredness fell away as soon as I started to type). It’s that writing every night feels like the default option of my life. As weird as it sounds, it feels like it would take me more effort to skip writing than to do it.

This is really good, because any grumpiness about writing I might start with is often gone in under five minutes. The end result of me writing – even on a day when starting was hard – is improved mood for the whole day. I love the sense of accomplishment that creating something brings.

The road here wasn’t exactly easy. It’s taken more than two and a half years, hundreds of thousands of words, incipient carpal tunnel, and many false starts. It’s the false starts that inspired me to write this. I doubt, dear reader, that you are exactly like me. Likely some of this advice won’t work for you. It is, however, my hope that it can point you in the right direction. Perhaps my false starts can save you some of your own.

I would feel deeply uncomfortable giving anyone advice on how to be a better writer; I don’t feel confident enough in my craft for that [5]. But I do feel like I know how to develop a kickass writing habit, the sort of habit that gives you the practice you need to get better. If you too want to write regularly, how about you give this a try?

Postscript

I think the steps outlined here could be used to help build a variety of durable habits across disciplines. Want to program, cook, draw, or learn a new language? Think that in any of those cases a daily habit would be helpful? This advice is probably transferable to some degree. That said, I haven’t tried to repeat this process for any of those things, so I don’t know what the caveats are or where it will break down. If you adapt this post for anything else, let me know and I’ll link to it here.

Acknowledgements

Thanks to the kind folks at Beeminder for helping me create some of the graphs used in this post. In addition, thanks are due for fielding my semi-panicked support requests when the graph generation caused some problems with my account.

Thanks to Malcolm Ocean of Complice for pointing me towards Beeminder in the first place and for the year in review post that spurred me to make writing my New Year’s Resolution in 2015.

Disclaimer

I genuinely like the people whose products I recommend in this blog post. I genuinely like their creations. They aren’t giving me anything to recommend their stuff.

True story: Beeminder sent out a survey about referral links and I told them they could set up a referral system, but I’d never use it. I think Beeminder and Complice are incredibly valuable tools that are tragically under-used and I don’t want to risk even the appearance of a conflict of interest that might make people less likely to follow my recommendations to use them. For me, they’ve been literally life-changing.

I’ve linked to my specific Beeminder writing goals (there are four of them) at various points throughout this post, but if you want the proof that I’m not talking out of my ass all nicely collected in one place, you can check out my progress towards all of my Beeminder goals at: https://www.beeminder.com/zacharyjacobi.

Footnotes:

[1] If this advice doesn’t work for you, don’t sweat it. I’m just a dude on the internet. This isn’t the bible. What works for me may not work for you and there’s nothing wrong with you if it doesn’t. You’ll just have to find your own way, is all. ^

[2] If Beeminder doesn’t work for you, I recommend a human accountability buddy (who will check up on your writing progress each day and maybe take your money if you aren’t hitting your goals). ^

[3] The best advice about writer’s block I’ve ever seen came from Cory Doctorow. He said that some days he feels like he’s inspired and a heavenly chorus is writing for him and other days he feels like he can’t write worth shit and has no clue what he’s supposed to be doing. He goes on to say that no matter how strong these feelings are, a month later he can’t tell which words were written in which state. ^

[4] I cannot recommend this feature highly enough for people in long-distance relationships. ^

[5] For non-fiction writing advice, try the Slate Star Codex post of the same name. For more general advice, here’s tips from 23 different authors. ^

Science

Science Isn’t Your Cudgel

Do you want to understand how the material world works at the most fundamental level? Great! There’s a tool for that. Or a method. Or a collection of knowledge. “Science” is an amorphous concept, hard to pin down or put into a box. Is science the method of hypothesis generation and testing? Is it as Popper claimed, asking falsifiable questions and trying to refute your own theories? Is it inextricably entangled with the ream of statistical methods that have grown up in service of it? Or is it the body of knowledge that has emerged from the use of all of these intellectual tools?

I’m not sure what exactly science is. Whatever its definition, I feel like it helps me understand the world. Even still I have to remind myself that caring about science is like caring about a partner in a marriage. You need to be with it in good health and in bad, when it confirms things you’ve always wanted to believe, or when your favourite study fails to replicate or is retracted. It’s rank hypocrisy to shout the virtues of science when it confirms your beliefs and denigrate or ignore it when it doesn’t.

Unfortunately, it’s easy to collect examples of people who are selective about their support for science. Here are three:

  1. Elizabeth May – like many other environmentalists – is really fond of the phrase “the science is clear” when talking about global warming or the dangers of pollution. In this she is entirely correct – the scientific consensus on global warming is incredibly clear. But when Elizabeth May says things like “Nuclear energy power generation has been proven to be harmful to the environment and hazardous to human health”, she isn’t speaking scientifically. Nuclear energy is one of the safest forms of power for both humans and the climate. Elizabeth May (and most of the environmental movement) are only fans of science when it fits with their fantasies of deindustrialization, not when it conflicts with them. See also the conflict between scientists on GMOs and environmentalists on GMOs.
  2. Hillary Clinton (who earned the support of most progressive Americans in the past election) is quite happy to applaud the March For Science and talk about how important science is, but she’s equally happy to peddle junk science (like the implicit association test) on the campaign trail.
  3. Unfortunately, this is a bipartisan phenomenon [1]. So-called “race realists” belong on this list as well [2]. Race realists take research about racial variations in IQ (often done in America, with all of its gory history of repression along racial lines) and then claim that it maps directly onto observable racial characteristics. Race realists ignore the fact that scientific attempts at racial clustering show strong continuity between populations and find that almost all genetic variance is individual, not between groups [3]. Race realists are fond of saying that people must accept the “unfortunate truth”, but are terrible at accepting that science is at least as unfortunate for their position as it is for blank slatism. The true scientific consensus lies somewhere in-between [4].

In all these cases, we see people who are enthusiastic defenders of “science” as long as the evidence suits the beliefs that they already hold. They are especially excited to use capital-S Science as a cudgel to bludgeon people who disagree with them and shallowly defend the validity of science out of concern for their cudgel. But actually caring about science requires an almost Kierkegaardian act of resignation. You have to give up on your biases, give up on what you want to be true, and accept the consensus of experts.

Caring about science enough to be unwilling to hold beliefs that aren’t supported by evidence is probably not for everyone. I’m not even sure I want it to be for everyone. Mike Alder says of a perfect empiricist:

It must also be said that, although one might much admire a genuine [empiricist] philosopher if such could be found, it would be unwise to invite one to a dinner party. Unwilling to discuss anything unless he understood it to a depth that most people never attain on anything, he would be a notably poor conversationalist. We can safely say that he would have no opinions on religion or politics, and his views on sex would tend either to the very theoretical or to the decidedly empirical, thus more or less ruling out discussion on anything of general interest.

Science isn’t all there is. It would be a much poorer world if it was. I love literature and video games, silly puns and recursive political jokes. I don’t try and make every statement I utter empirically correct. There’s a lot of value in people haring off in weird directions or trying speculative modes of thought. And many questions cannot be answered through science.

But dammit, I have standards. This blog has codified epistemic statuses and I try and use them. I make public predictions and keep a record of how I do on them so that people can assess my accuracy as a predictor. I admit it when I’m wrong.

I don’t want to make it seem like you have to go that far to have a non-hypocritical respect for science. Honestly, looking for a meta-analysis before posting something both factual and potentially controversial will get you 80% of the way there.

Science is more than a march and some funny Facebook memes. I’m glad to see so many people identifying so strongly with science. But for it to mean anything they have to be prepared to do the painful legwork of researching their views and admitting when they’re wrong. I have in the past hoped that loudly trumpeting support for science might be a gateway drug towards a deeper respect for science, but I don’t think I’ve seen any evidence for this. It’s my hope that over the next few years we’ll see more and more of the public facing science community take people to task for shallow support. If we make it low status to be a fair-weather friend of science, will we see more people actually putting in the work to properly support their views with empirical evidence?

This is an experiment I would like to try.

Footnotes

[1] The right, especially the religious right, is less likely to use “science” as a justification for anything, which is the main reason I don’t have complaints about them in this blog post. It is obviously terrible science to pretend that evolution didn’t happen or that global warming isn’t occurring, but it isn’t hypocritical if you don’t otherwise claim to be a fan of science. Crucially, this blog post is more about hypocrisy than bad science per se. ^

[2] My problems with race realists go beyond their questionable scientific claims. I also find them to be followers of a weird and twisted philosophy that equates intelligence with moral worth in a way I find repulsive. ^

[3] Taken together, these are damning for the idea that race can be easily inferred from skin colour. ^

[4] Yes, I know we aren’t supposed to trust Vox when it comes to scientific consensus. But Freddie de Boer backs it up and people I trust who have spent way more time than I have reading about IQ think that Freddie knows his stuff. ^

Ethics, Philosophy

Against Moral Intuitions

[Content Warning: Effective Altruism, the Drowning Child Argument]

I’m a person who sometimes reads about ethics. I blame Catholicism. In Catholic school, you have to take a series of religion courses. The first two are boring. Jesus loves you, is your friend, etc. Thanks, school. I got that from going to church all my life. But the later religion classes were some of the most useful courses I’ve taken. Ever. The first was world religions. Thanks to that course, “how do you know that about [my religion]?” is a thing I’ve heard many times.

The second course was about ethics, biblical analysis, and apologetics. The ethics part hit me the hardest. I’d always loved systematizing, and here I was exposed to Very Important Philosophy People engaged in the millennia-long project of systematizing fundamental questions of right and wrong under awesome-sounding names, like “utilitarianism” and “deontology”.

In the class, we learned the commonly understood pitfalls of ethical systems, like that Kantians have to tell the truth to axe murderers and that utilitarians like to push fat people in front of trains. This introduced me to the idea of philosophical thought experiments.

I’ve learned (and written) a lot more about ethics since those days and I’ve read through a lot of thought experiments. When it comes to ethics, there seem to be two ways a thought experiment can go: it can show that an ethical system conflicts with our moral intuitions, or it can show that an ethical system fails to universalize.

Take the common criticism of deontology, that the Kantian moral imperative to always tell the truth applies even when you could achieve a much better outcome with a white lie. The thought experiment that goes with this point asks us to imagine a person with an axe intent on murdering our best friend. The axe murderer asks us where our friend can be found and warns us that if we don’t answer, they’ll kill us. Most people would tell the murderer a quick lie, then call the police as soon as they leave. Deontologists say that we must not lie.

Most people have a clear moral intuition about what to do in a situation like that, a moral intuition that clashes with what deontologists suggest we should do. Confronted with this mismatch, many people will leave with a dimmer view of deontology, convinced that it “gets this one wrong”. That uncertainty opens a crack. If deontology requires us to tell the truth even to axe murderers, what else might it get wrong?

The other way to pick a hole in an ethical system is to show that the actions it recommends don’t universalize (i.e. they’d be bad if everyone did them). This sort of logic is perhaps most familiar to parents of young children, who, when admonishing their sprogs not to steal, frequently point out that the children themselves have possessions they cherish, possessions they wouldn’t like stolen from them. This is so successful because most people have an innate sense of fairness; maybe we’d all like it if we could get away with stuff that no one else could, but most of us know we’ll never be able to, so we instead stand up for a world where no one else can get away with the stuff we can’t.

All of the major branches of ethics fall afoul of either universalizability or moral intuitions in some way.

Deontology (doing only things that universalize and doing them with pure motives) and utilitarianism (doing whatever leads to the best outcomes for everyone) both tend to universalize really well. This is helped by the fact that both of these systems treat people as virtually interchangeable; if you are in the same situation as I am, these ethical systems would recommend the same thing for both of us. Unfortunately, both deontology and utilitarianism have well known cases of clashing with moral intuitions.

Egoism (do whatever is in your self-interest) doesn’t really universalize. At some point, your self-interest will come into conflict with the self-interest of other people and you’re going to choose your own.

Virtue ethics (cultivating virtues that will allow you to live a moral life) is more difficult to pin down, and I’ll have to use a few examples. At first glance, virtue ethics tends to fit in well with our moral intuitions and universalizes fairly well. But virtue ethics has as its endpoint virtuous people, not good outcomes, which strikes many people as the wrong thing to aim for.

For example, a utilitarian may consider their obligation to charity to exist as long as poverty does. A virtue ethicist has a duty to charity only insofar as it is necessary to cultivate the virtue of charity; their attempt to cultivate the virtue will run the same course in a mostly equal society and a fantastically unequal one. This trips up the commonly held moral intuition that the worse the problem, the greater our obligation to help.

Virtue ethics may also fail to satisfy our moral intuitions when you consider the societal nature of virtue. In a world where slavery is normalized, virtue ethicists often don’t critique slavery, because their society has no corresponding virtue for fighting against the practice. This isn’t just a hypothetical; Aristotle and Plato, two of the titans of virtue ethics, defended slavery in their writings. When you have the meta moral intuition that your moral intuitions might change over time, virtue ethics can feel subtly off. “What virtues are we currently missing?” you may ask yourself, or “how will the future judge those considered virtuous today?”. In many cases, the answers to these questions are “many” and “poorly”. See the opposition to ending slavery, to interracial marriage, and to same-sex marriage for salient examples.

It was so hard for me to attack virtue ethics with moral intuitions because virtue ethics is remarkably well suited to them. This shouldn’t be too surprising. Virtue ethics and moral intuitions arose in similar circumstances – small, closely knit, and homogenous groups of humans with very limited ability to affect their environment or effect change at a distance.

Virtue ethics works best when dealing with small groups of people where everyone is mutually known. When you cannot help someone half a world away, it really only matters that you have the virtue of charity developed such that a neighbour can ask for your help and receive it. Most virtue ethicists would agree that there is virtue in being humane to animals – after all, cruelty to other animals often shows a penchant for cruelty to humans. But the virtue ethics case against factory farming is weak from the perspective of the end consumer. Factory farming is horrifically cruel. But it is not our cruelty, so it does not impinge on our virtue. We have outsourced this cruelty (and many others) and so can be easily virtuous in our sanitized lives.

Moral intuitions are the same way. I’d like to avoid making any claims about why moral intuitions evolved, but it seems trivially true to say that they exist, that they didn’t face strong negative selection pressure, and that the environment in which they came into being was very different from the modern world.

Because of this, moral intuitions tend to only be activated when we see or hear about something wrong. Eating factory farmed meat does not offend the moral intuitions of most people (including me), because we are well insulated from the horrible cruelty of factory farming. Moral intuitions are also terrible at spurring us to action beyond our own immediate network. From the excellent satirical essay Newtonian Ethics:

Imagine a village of a hundred people somewhere in the Congo. Ninety-nine of these people are malnourished, half-dead of poverty and starvation, oozing from a hundred infected sores easily attributable to the lack of soap and clean water. One of those people is well-off, living in a lovely two-story house with three cars, two laptops, and a wide-screen plasma TV. He refuses to give any money whatsoever to his ninety-nine neighbors, claiming that they’re not his problem. At a distance of ten meters – the distance of his house to the nearest of their hovels – this is monstrous and abominable.

Now imagine that same hundredth person living in New York City, some ten thousand kilometers away. It is no longer monstrous and abominable that he does not help the ninety-nine villagers left in the Congo. Indeed, it is entirely normal; any New Yorker who spared too much thought for the Congo would be thought a bit strange, a bit with-their-head-in-the-clouds, maybe told to stop worrying about nameless Congolese and to start caring more about their friends and family.

If I can get postmodern for a minute, it seems that all ethical systems draw heavily from the time they are conceived. Kant centred his deontological ethics in humanity instead of in God, a shift that makes sense within the context of his time, when God was slowly being removed from the centre of western philosophy. Utilitarianism arose specifically to answer questions around the right things to legislate. Given this, it is unsurprising that it emerged at a time when states were becoming strong enough and centralized enough that their legislation could affect the entire populace.

Both deontology and utilitarianism come into conflict with our moral intuitions, those remnants of a bygone era when we were powerless to help all but the few directly surrounding us. When most people are confronted with a choice between their moral intuitions and an ethical system, they conclude that the ethical system must be flawed. Why?

What causes us to treat ancient, largely unchanging intuitions as infallible and carefully considered ethical systems as full of holes? Why should it be this way and not the other way around?

Let me try and turn your moral intuitions on themselves with a variant of a famous thought experiment. You are on your way to a job interview. You already have a job, but this one pays $7,500 more each year. You take a short-cut to the interview through a disused park. As you cross a bridge over the river that bisects the park, you see a child drowning beneath you. Would you save the child, even if it means you won’t get the job and will have to make do with $7,500 less each year? Or would you let her drown and continue on your way to the interview? Our moral intuitions are clear on this point. It is wrong to let a child die because we wish for more money in our pockets each year.

Can you imagine telling someone about the case in which you don’t save the child? “Yeah, there was a drowning child, but I’ve heard that Acme Corp is a real hard-ass about interviews starting on time, so I just waltzed by her.” People would call you a monster!

Yet your moral intuitions also tell you that you have no duty to prevent the malaria-linked deaths of children in Malawi, even though you would be saving a child’s life at exactly the same cost. The median Canadian family income is $76,000. If a family making this amount of money donated 10% of their income – $7,600, almost exactly the $7,500 from the interview – to the Against Malaria Foundation, they would be able to prevent one death from malaria every year or two. No one calls you monstrous for failing to prevent these deaths, even though the costs and benefits are exactly the same. Ignoring the moral worth of people halfway across the world is practically expected of us and is directly condoned by our distance-constrained moral intuitions.

Your moral intuitions don’t know how to cope with a world where you can save a life half the world away with nothing more than money and a well-considered donation. It’s not their fault. They didn’t develop for this. They have no way of dealing with a global community or an interconnected world. But given that, why should you trust the intuitions that aren’t developed for the situation you find yourself in? Why should you trust an evolutionary vestige over elegant and well-argued systems that can gracefully cope with the realities of modern life?

I’ve chosen utilitarianism over my moral intuitions, even when the conclusions are inconvenient or truly terrifying. You can argue with me about what moral intuitions say all you want, but I’m probably not going to listen. I don’t trust moral intuitions anymore. I can’t trust anything that fails to spur people towards the good as often as moral intuitions do.

Utilitarianism says that all lives are equally valuable. It does not say that all lives are equally easy to save. If you want to maximize the good that you do, you should seek out the lives that are cheapest to save and thereby save as many people as possible.

To this end, I’ve taken the “Try Giving” pledge. Last September, I promised to donate 10% of my income to the most effective charities for a year. This September, I’m going to take the full Giving What We Can pledge, making my commitment to donate to the most effective charities permanent.

If utilitarianism appeals to you and you have the means to donate, I’d like to encourage you to do the same.

Epistemic Status: I managed to talk about both post-modernism and evolutionary psychology, so handle with care. Also, Ethics.

Economics, Politics

Whose Minimum Wage?

[Epistemic Status: I am not an economist, but…]

ETA (October 2018): Preliminary studies from Seattle make me much more pessimistic about the effects of the Ontario minimum wage hike. I’d also like to highlight the potential for problems when linking a minimum wage to inflation.

There’s something missing from the discussion about the $15/hour minimum wage in Ontario, something basically every news organization has failed to pick up on. I’d have missed it too, except that a chance connection to a recent blog post I’d read sent me down the right rabbit hole. I’ve climbed out on the back of a mound of government statistics and I really want to share what I’ve found.

I

Reading through the coverage of the proposed $15/hour minimum wage, I was reminded that the Ontario minimum wage is currently indexed to inflation. Before #FightFor15 really took off, indexing the minimum wage to inflation was the standard progressive minimum wage platform (as evidenced by Obama calling for it in 2013). Ontario is actually aiming for the best of both worlds; the new $15/hour minimum wage will be indexed to inflation. The hope is that it will continue to have the same purchasing power long into the future.

In Canada, inflation is also called the “consumer price index” or CPI. The CPI is based on a standard basket of goods (i.e. a list that includes such things as “children’s sneakers” and “French fries, curly”), which Statistics Canada assesses the price of every few months. These prices are averaged, weighted, and compared to the previous year’s prices to get a single number. This number is periodically reset to 100 (most recently in 2002). The CPI for 2016 is 128.4; in 2016, it cost $128.40 to buy a basket of goods that cost $100.00 in 2002.
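To make the mechanics concrete, here’s a minimal sketch of how an index like this gets computed. The basket, prices, and weights below are all invented for illustration; the real CPI uses hundreds of weighted items and more sophisticated averaging:

```python
# A toy CPI: each item's price ratio (current year vs. base year) is
# weighted by its share of a typical household's spending.
base_prices = {"children's sneakers": 40.00, "curly fries": 3.50}     # 2002
current_prices = {"children's sneakers": 50.00, "curly fries": 4.20}  # 2016
weights = {"children's sneakers": 0.7, "curly fries": 0.3}            # sum to 1

index = 100 * sum(
    weights[item] * current_prices[item] / base_prices[item]
    for item in weights
)
print(f"{index:.1f}")  # 123.5 - the $100 base-year basket now costs $123.50
```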

The problem with the CPI is that it’s just an average; when you look at what goes into it category by category, it becomes clear that “inflation” isn’t really a single number.

Here are the last few years of the CPI, with some of the categories broken out:

[Table: CPI by category, 2002 = 100. Source: The Canadian Consumer Price Index Reference Paper > Summary Tables; the underlying data is available in Google Sheets.]

Every row in this table that is shaded green has decreased in price since 2002. Rows that are shaded blue have increased in price, but have increased slower than the rate of inflation. Economists would say that they’ve increased in price in nominal (unadjusted for inflation) terms, but they’ve decreased in price in real (adjusted for inflation) terms. Real prices are important, because they show how prices are changing relative to other goods on the market. As the real value of goods and services change, so too does the fraction of each paycheque that people spend on them.

The red, yellow, and orange rows represent categories that have increased in price faster than the general rate of inflation. These categories of goods and services are becoming more expensive in both real and nominal terms.

There’s no other way to look at the CPI that shows variation as large as that between categories. When you break it down by major city, the CPI for 2016 varies from 120.7 (Victoria, BC) to 135.6 (Calgary, AB). When you break it down by province, you see basically the same thing, with the CPI varying from 122.4 in BC to 135.2 in Alberta.

Looking at this table, you can see that electronics (“Home Entertainment”) have become 45% cheaper in nominal (unadjusted for inflation) terms and a whopping 58% cheaper in real (adjusted for inflation) terms. Basically, electronics have never been less expensive.

On the other hand, you have education, which has become 60.8% more expensive in nominal terms and 25% more expensive in real terms. It’s costing more and more to get an education, in a way that can’t just be explained by “inflation”.
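If you want to check these figures yourself, converting nominal changes to real ones is just a division by the all-items CPI. A quick sketch: the 160.8 education index follows directly from the 60.8% nominal increase, while 54.5 for home entertainment is my estimate from the table (a flat 55.0 wouldn’t quite reproduce the quoted 58% real decrease):

```python
CPI_2016 = 128.4  # all-items index, 2002 = 100

def price_changes(category_index):
    nominal = category_index / 100 - 1    # change in 2002 dollars
    real = category_index / CPI_2016 - 1  # change relative to everything else
    return nominal, real

print(price_changes(54.5))   # (-0.455, -0.576): ~45% cheaper nominal, ~58% real
print(price_changes(160.8))  # (0.608, 0.252): ~61% pricier nominal, ~25% real
```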

Three of the four categories with the biggest price increases rely on the labour of responsible people. The fourth is tobacco; price increases there are probably driven by increased taxation, and its position is a bit of a red herring. It’s potentially worrying that the categories where things are getting cheaper (e.g. electronics, clothes) are in the industries that are most amenable to automation. This might imply that tasks that cannot be automated are doomed to become increasingly expensive [1].

II

I’m certainly not the first person to make the observation that “inflation” isn’t a single number. Economists have presumably known this forever, related as it is to the important economics concept of “cost disease”. More recently, you can see this point made from two different directions in Scott Alexander’s “Considerations on Cost Disease” (which tries to get to the bottom of the price increases in healthcare and education) and Andrew Potter’s “The age of anti-consumerism has passed” (which looks at the societal changes wrought by many consumer goods becoming much cheaper). As far as I know, no one has yet tied this observation to the discussion surrounding the new Ontario minimum wage.

Like I said above, the new minimum wage will still be indexed to inflation; the “$15/hour” minimum wage won’t stay at $15/hour. If inflation follows current trends (this is a terrible assumption but it’s all I’ve got), it will rise by about 1.5% per year. In 2020 it will be (again, bad extrapolation alert) $15.25 and in 2021 it will be $15.50.

Extrapolating backwards, the current Ontario minimum wage ($11.40/hour) was equivalent to $8.88/hour in 2002 (when the CPI was last reset). If instead of tracking inflation generally, the minimum wage had tracked electronics, it would be $4.84 today. If it had tracked education, it would be $14.28. Next year, the minimum wage will be $14/hour (it will take until 2019 for the $15/hour wage to be fully phased in), which will make 2018 the first time that students working minimum wage get paycheques that have increased as much as the cost of education.
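Here’s the arithmetic behind those numbers, as a sketch. The category index values are the same estimates as above, and I’m assuming each year’s new wage gets rounded to the nearest nickel (which is what reproduces the $15.25 and $15.50 figures):

```python
import math

CPI_2016 = 128.4
wage_now = 11.40

# Backwards: deflate the current wage into 2002 dollars.
wage_2002 = wage_now * 100 / CPI_2016
print(f"${wage_2002:.2f}")  # $8.88

# What the wage would be today had it tracked one category instead.
for name, index in [("home entertainment", 54.5), ("education", 160.8)]:
    print(name, f"${wage_2002 * index / 100:.2f}")  # $4.84 and $14.28

# Forwards: $15.00 in 2019, indexed to ~1.5% annual inflation thereafter.
wage = 15.00
for year in (2020, 2021):
    wage = math.floor(wage * 1.015 * 20 + 0.5) / 20  # round half up to a nickel
    print(year, f"${wage:.2f}")  # 2020 $15.25, 2021 $15.50
```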

This won’t last, of course. The divergence in prices shows no signs of decreasing. The CPI will continue to climb upwards at a steady rate (the target is 2%; last year it rose only 1.4%), buoyed up by large increases in education costs (2.8% last year) and held down by steady decreases in the price of electronics (-1.6% last year). Imagine that the $15/hour minimum wage allows a student to pay a year’s tuition with a summer’s worth of work. If current trends continue, in 15 years it would only cover 75% of tuition. Fifteen years after that, it would cover about 60%.
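The compounding behind that claim is easy to play with. A sketch, with both growth rates as explicit assumptions; note that last year’s rates (1.4% CPI, 2.8% education) give a somewhat gentler decline than the 75%-and-60% trajectory above, so the exact figures are quite sensitive to the gap you assume:

```python
def tuition_coverage(years, wage_growth=0.014, tuition_growth=0.028):
    """Fraction of tuition covered by a CPI-indexed wage that starts at
    full coverage. Both growth rates are assumptions, not forecasts."""
    return ((1 + wage_growth) / (1 + tuition_growth)) ** years

print(f"{tuition_coverage(15):.0%}")  # 81% after 15 years at a 1.4pp gap
print(f"{tuition_coverage(30):.0%}")  # 66% after 30 years
```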

III

There’s a funny thing about these numbers. The stuff that’s getting more expensive more quickly is largely stuff that younger people have to pay for. If you’re 50, have more or less raised your kids, and own a house, then you’re golden even if you’re working a minimum wage job (although by this point, you probably aren’t). Assuming your wage has increased with inflation over your working lifetime, a lot of the things you’re looking to buy (travel, electronics, medical devices) will be getting cheaper relative to what you make. Healthcare service costs (e.g. the cost of seeing a doctor) might be increasing for you in theory, but in practice OHIP has you covered for all your doctor’s visits [2].

It’s younger people who are really shafted. First, they’re more likely to be earning minimum wage, with nearly 60% of minimum wage earners in Canada in the 15 to 24 age bracket. Second, the sorts of things that younger people need or aspire to (education, childcare, home ownership) are big ticket items that are increasing in cost above the rate of inflation. Like with the tuition example above, childcare and home ownership are going to slip out of the grasp of young workers even if you index their wage to inflation.

I happen to like the idea of a $15/hour minimum wage. There’s a lot of disagreement among economists as to whether there will be ill effects, but this meta-analysis (complete with funnel plot!) has me more or less convinced that the economy will do just fine [3]. Given that Ontario will still have an economy post wage-hike, I think increasing the minimum wage will be good for workers.

But a minimum wage increase leaves the larger problem of differing rates of inflation unsolved. Even with a minimum wage indexed to inflation, we’re going to have people waking up twenty-five years from now, realizing that their minimum wage job doesn’t pay for university/food/utilities/childcare/transit the same way their parents’ minimum wage job did. This will be a problem.

I’m game to kick the can down the road for a bit if it means we can make the lives of minimum wage workers better right now. But until we’ve solved this problem for good, it will keep coming back [4].

Footnotes:

[1] I’m not sure this is exactly a bad thing, per se. Money is a means of signalling that you’d like your preferences satisfied. If it becomes more expensive to pay actual humans to do things, that could mean that actual humans have so many good options that they’re only going to spend their time satisfying your preferences if you really make it worth their while. Looked at this way, it means we’re steadily freeing ourselves from work.

On the other hand, this seems to apply mainly to responsible/competent/intelligent people, and not everyone is responsible, competent, or intelligent, so it could also imply a looming crisis, with a huge number of people simply becoming economically unnecessary. This is really bad: a high-quality life should be possible for everyone, not just those who’ve lucked into economically valuable traits, and under capitalism it is really hard to have a high-quality life if you aren’t economically valuable. ^

[2] For readers outside of Ontario, OHIP is the Ontario Health Insurance Plan. It covers all hospital and clinic care for all legal residents of Ontario, as well as dental and ophthalmological care for minors. OHIP is a non-actuarial insurance program; premiums come from provincial income tax and payroll tax revenues, as well as transfer payments of federal tax revenues. All Ontarians enrolled in OHIP (i.e. basically all of us) have a health card which allows us to access all covered services free of charge (beyond the taxes we’ve already paid) any time we want to. ^

[3] No effect on the unemployment rate does not mean no effect on the employment of individual people. A $15/hour minimum wage will probably tempt some people back into the labour force (I’m thinking here that this will mostly be women), while excluding others whose labour would not be valued that highly (unfortunately this will probably hit people with certain mental illnesses or disabilities the hardest). ^

[4] I think it’s especially pernicious how the difference in inflation rates between types of goods is almost by default a source of inter-generational strife. First, it makes it more difficult for each succeeding generation to hit the same landmarks that defined adulthood and independence for the previous generation (e.g. home ownership, education, having children), with all the terrible think-pieces, conflict-ridden Thanksgiving dinners, and crushed dreams this implies. Second, it can pit the economic interests of generations against each other; healthcare for older people is subsidized by premiums from younger ones, while the increase in the cost of homes benefits existing players (who skew older) to the detriment of new market entrants (who skew younger). ^