Aspiring author, sometimes blogger. By day, I’m a Software Developer at Alert Labs. By night I write things. Both of these look the exact same to an outside observer, because it’s just me sitting in front of a computer screen, hitting buttons.
I’ve managed to break my arm. Injuries – by necessitating a convalescence – can quickly become an opportunity to reflect. I have a lot to reflect on.
I don’t want to say that (temporarily) losing the use of my arm has given me empathy for those who go about life one handed. That supposed empathy can become a type of mockery. Disability isn’t a costume to try on for a few weeks.
My left hand was never as functional as my right. My left thumb is not truly opposable. Over the years I’ve come up with so many workarounds that I almost forget. It comes up only when I must try new things; when I tie strange knots, or eat with fancy utensils. My thumb has taught me that you cannot compare a cast to a disability. A cast is its own excuse. I won’t be scared with this cast. “Are they watching? Did they notice?” – those have always been my fears as I fumble with a fork at a fancy restaurant or in front of a partner’s parent. Politeness is in many ways predicated on abilities and (for me at least) alienation comes from having different abilities. A broken arm isn’t different. It is so very normal.
My adaptations to my thumb were so gradual that I made them before I even noticed, most of them a very long time ago. A broken arm is sudden. It’s shocking to find that all my old tricks of dealing with my less functional left hand no longer work with it immobilized in a cast.
On a bike ride before the fateful one that brought about my present contretemps, I thought about how often I biked and the illusion of fitness it gave many people. I’m small and I gain muscle slowly; I was not as in shape as people assumed I was from the amount I biked. This was fine – the world is not fair and your rewards do not equal your effort, something I benefitted from whenever I aced an exam I’d barely studied for. In a way, mountain biking – that I started it and that I stuck with it – was more impressive than any A I ever received in school.
Biking giveth and biking taketh away. Mountain biking, with its mad scramble over rocks and roots, was giving me something I hadn’t felt in years: the heady glow of rapid accomplishment. I was getting better every week (even if it was more slowly than I liked; more slowly than another might). I was getting stronger and noticing it. It was rarer for my breath to come in great strangled gulps. It was rarer for me to pound on the brakes out of panic.
It is funny how panic works. Panic is what broke my arm. On my first trip on my own, I found myself caught in a thunderstorm, then in a hailstorm. The sky shook, caught in one continuous paroxysm of thunder. Eventually I could stand it no more and possessed of the desperate desire to be out and safe, I set off on my bike. Despite my fogged glasses and the poor visibility. Despite the treacherous roots and the path half turned to a river. I should have walked.
I’ve hated wet roots since I started mountain biking. Hit them at any angle at all and you will slide around. It was a sea of roots at the top of a hill that brought me down. I had no momentum left – no chance to cruise over them. I went down ponderously, right on my left elbow. I was two kilometres from the exit and it was still storming. But at least shock cleared away my panic.
The walk out of there was brutal, but it went oddly quickly. My mind was too occupied to mark the passage of time. I suspected my arm was broken and immobilized it as best I could (by forcing it through the strap of my backpack), then I set out. I got lost twice. I only made it out because a hiker pointed me in the right direction. I only made it home because a kind stranger drove me (it was a 7km ride from my house to the trail). My phone was waterlogged and useless.
I’m prone to an overwhelmed obstinacy and that’s what overcame me at the hospital. No, I did not want a change of clothes. No, I did not need to be dry. I didn’t care about my own comfort. I was simply overwhelmed by the thought of recovery – and the threat it might pose to my deliberately cultivated independence, hard won after a life of medical procedures.
A coworker once described my typing as terrifyingly fast. Now it is one-handed and slow. My thoughts never could keep up with my fingers before. Now they’re constrained to one hand’s plodding pace. The effect is meditative. I hope that this stage will only last a couple of weeks. It will be possible (even required!) for me to take off my next cast. Perhaps then I will be able to type as quickly as I’m used to.
My present inability to type has sapped me of some ambition for this blog. When I started it, about a year ago, I intended to have something to say every month. I surprised even myself with how much I ended up writing. I’d intended mainly to write fiction this year, but the excitement of seeing my thoughts clarified and written down overwhelmed me and I fell in love with blogging. Now I feel like I’ve almost run out of things to say and I’m not quite sure I want to bother to find more.
Maybe I will turn back to fiction. There are concepts and ideas that can be best explored via metaphor. And fiction is its own kind of joy to write. I know enough to know the type of stories I want to put down. I don’t know if anyone else will want to read them, but that’s never stopped me before. An audience is nice, but I do this for me.
The present seems momentous from inside the waves of history. It’s hindsight that allows us to tell the truly significant breakers from those that held only a false fury. I may look back on this in a year and laugh. Or this may be a turning point in my writing, however accidental.
One thing is certain. Expect me to write less until I get this damn cast off.
Content Warning: Extensive discussion of the morality of abortion
Previously, I talked about akrasia as one motive for socially conservative legislation. I think the akrasia model is useful when explaining certain classes of seemingly hypocritical behaviour, but it’s far from the only reason for social conservatives to push for legislation that liberals oppose. At least some legislation comes from a desire to force socially conservative values on everyone.
Here it’s easy to make a mistake: assuming that everyone who pushes such legislation is motivated by opposition to your values. It’s true that limiting abortion also limits women’s financial and sexual freedom. But in the vast majority of cases, it’s false to claim that this is a plus for the most vociferous opponents of abortion. Neither is it a minus. For many of the staunchest opponents of abortion, the financial or sexual freedom of women plays no role at all in their position. Held against the life of a fetus, these freedoms are (morally) worthless.
People opposed to abortion who also value these things tend to take more moderate positions. For them, their stance on abortion is a trade-off between two valuable things (the life of a fetus and the freedoms of the mother). I know some younger Catholics who fall into this category. They tend to be of the opinion that things that reduce abortions (like sexual education, free prenatal care, free daycare, and contraceptive use) are all very good, but they rarely advocate for the complete abolition of abortion (except by restructuring society such that no woman feels the need for one).
Total opposition to abortion is only possible when you hold the benefits of abortion to be far less morally relevant than the costs. Total support likewise. If I viewed a fetus as being as morally relevant as a born person, I could not support abortion rights to the extent I do.
The equation “views my values as morally meaningless” + “argues strongly for things that would hurt those values” can very easily appear to come out to “holds the opposite of my values”. But this doesn’t have to be the case! Most anti-abortion advocates aren’t trying to stamp out women’s sexual freedom (with abortion laws). Most abortion supporters aren’t reveling in the termination of pregnancies.
This mistake is especially easy to make because you have every incentive to caricature your political enemies. It’s especially pernicious though, because it makes it so hard to productively talk about any area where you disagree. You and your opponents both think that you are utterly opposed and for either to triumph, the other must lose. It’s only when you see that your values are orthogonal, not opposed that you have any hope for compromise.
I think the benefits of this model lie primarily in sympathy and empathy. Understanding that anti-abortion advocates aren’t literally trying to reduce the financial security and sexual freedom of women doesn’t change the fact that their policies have the practical effects of accomplishing these things. I’m still going to oppose them on the grounds of the consequences of their actions, even if I no longer believe that they’re at all motivated by those specific consequences.
But empathy isn’t useless! There’s something to be said for the productivity of a dialogue when you don’t believe that the other side hates everything about your values! You can try and find common values and make compromises based on those. You can convince people more effectively when you accurately understand their beliefs and values. These can be instrumentally useful when trying to convince people of your point or when advocating for your preferred laws.
Abortion gave me the clearest example of orthogonal values, but it might actually be the hardest place to find any compromise. Strongly held orthogonal values can still lead to gridlock. If not abortion, where is mutually beneficial compromise possible? Where else do liberals argue with only a caricature of their opponents’ values?
 Socially liberal legislation is just objectively right and is based on the values everyone would have if they could choose freely. Only my political enemies try and force anything on anyone. /sarcasm ^
 People who aren’t women can also have abortions and their ability to express their sexualities is also controlled by laws limiting access to abortion. If there exists a less awkward construction than “anyone with a uterus” that I can use instead of “women”, I’d be delighted to find it. ^
If you hang out with people obsessed with self-improvement, one term that you’ll hear a lot is akrasia. A dictionary will tell you that akrasia means “The state of mind in which someone acts against their better judgement through weakness of will.”
Someone who struggles with it will have more visceral stories. “It’s like someone else is controlling me, leaving me powerless to stop watching Netflix” is one I’ve often heard. Or “I know that scrolling through Facebook for five hours is against my goals, but I just can’t help myself”.
I use commitment contracts (I agree to pay a friend a certain amount of money if I don’t do a certain thing) or Beeminder (a service that charges me money if I fail to meet my goals) to manage my akrasia. Many of my friends do the same thing. Having to face consequences helps us overcome our akrasia.
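The mechanics are simple enough to sketch in code. The following is a toy model of a commitment contract, not Beeminder’s actual system or API; the goal, rate, and penalty figures are invented for illustration.

```python
# Toy model of a commitment contract: pledge a penalty, log progress,
# and settle up at the end of the period. (Illustrative only.)
from dataclasses import dataclass


@dataclass
class CommitmentContract:
    goal: str
    required_per_week: int   # e.g. blog posts, workouts, pages written
    penalty_dollars: int     # paid to a friend (or a service) on failure
    completed: int = 0

    def log(self, units: int = 1) -> None:
        """Record progress toward the weekly goal."""
        self.completed += units

    def settle(self) -> int:
        """Return the penalty owed this week (0 if the goal was met)."""
        return 0 if self.completed >= self.required_per_week else self.penalty_dollars


contract = CommitmentContract(goal="write", required_per_week=3, penalty_dollars=20)
contract.log(2)
print(contract.settle())  # 20 — only two of three sessions done, so the penalty is owed
```

Beeminder layers escalating pledges and daily data entry on top of this basic idea, but the core incentive is the same: missing the goal costs real money.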
If you’ve ever procrastinated, you’ve experienced akrasia. You probably know the listlessness and powerlessness that comes with it, and the frantic burst of energy that you get as the due date for your task nears. Commitment contracts impose an artificial deadline, allowing akratics to access that burst of energy needed to break free from an endless cycle of Netflix or Facebook.
Obviously it would be better if akrasia could be wished away. Unfortunately, I haven’t really met anyone who has entirely succeeded in vanquishing it. All we can do is treat the symptoms. For those of us stuck with akrasia, managing it with sticks (and perhaps the occasional carrot) allows us to accomplish our goals. Time your sticks right and you rarely get the listlessness or the shame that can go with it.
Recently, I’ve started viewing social conservatives who push for tough morality laws and then personally fall short of them as more than risibly hypocritical. I’ve begun to think that they’re deeply akratic individuals who think that strong public morality is their only hope for living up to their own standards.
This model has fundamentally changed the way I look at the world. When I read John Scalzi’s rant about covenant marriage while researching this post –
As a concept, it’s pretty damn insulting. “Covenant Marriage” implicitly suggests that people won’t stay married unless they subject themselves to onerous governmental restrictions on their personal freedoms; basically, it’s the state telling you that it expects you to get a divorce at some point, unless it makes it too annoying for you to get a divorce to make it worth your while. The State of Arkansas is banking on sloth, apathy and state bureaucracy to keep a bunch of bad marriages together, as if bad marriages are really better than divorce.
– All I could think was yes! Yes, that is exactly what some of the people pushing covenant marriage believe and want. They believe that a bad (or at least difficult) marriage is better than a divorce. But they don’t trust themselves to stay in a difficult marriage, so they want to strongly bind their future self to the decisions and values of their current self.
Most of the akratics I know have been unable to overcome their akrasia via willpower. Only the consequences they’ve set up are effective. By the same token, social conservatives who frequently let themselves down (like serial adulterer Newt Gingrich, or any of these nineteen others) might be trying to overcome their failings by increasing the consequences. For everyone.
On one hand, this leads to onerous restrictions on anyone who doesn’t share their views. On the other hand, there aren’t many levers left to accomplish this sort of commitment device except through legislation that affects everyone. Look at marriage; in America, no-fault divorces are allowed in every state. You can enter a so-called “covenant marriage”, with more onerous exit requirements, but these are only offered in a few states and can be avoided by divorcing in a state with different marriage laws. Adultery laws are all but dead.
Even enforcing fidelity through prenups is difficult. Such clauses have been ruled unenforceable in California. While some other states might decide to enforce them, you’re still left with the problem of actually proving that infidelity occurred.
This isn’t to say that I have a problem with no-fault divorces. If I didn’t support them for helping people leave terrible marriages, I’d support them for ending an embarrassing daily spectacle of perjury. But it is amazing that many governments won’t let consenting adults make more stringent marriage contracts if that’s what they choose. America let people get underwater mortgages that could never be repaid. But it won’t let a pair of adults set harsh penalties for cheating?
Faced with such a dearth of commitment options, what’s an akratic to do? Fail? If you’re religious, failure opens you up to the possibility of eternal damnation. That’s clearly not an option. Fighting for regressive “family values” laws becomes a survival mechanism for anyone caught between their conservative morality and their own predilections.
For some people, societal punishments are the only thing that will work. In a push to liberalize everything about society, liberals really have backed some people into a corner. It wouldn’t be that hard to let them out. Make cheating clauses in prenups enforceable and allow them to include punitive damages. Allow couples to set arduous conditions on their own divorce. Listen to what people want and see if there are ways that we can give it to them and only them.
I can see the obvious objections to this plan. It might sweep up young romantics, still indoctrinated by their parents, and ruin their lives. Some (from the liberal point of view) bad contracts could become so common that everyone faces strong social pressure to make them. But these are both issues that can be addressed through legislation, perhaps by requiring a judge to certify adequate maturity and understanding of the contract (to address the first concern) and forbidding preferential treatment from institutions or businesses based on marriage (or other salient contract) type (to address the second).
There are some things that liberals and social conservatives will probably never be able to compromise on. Gay marriage, trans rights, and abortion… these should be our non-negotiable demands. What makes these our red lines is the realization that we must allow individuals to make their own choices (and not deny them any benefits that people who make other choices receive). The cornerstones of social liberalism are a celebration of authenticity and an openness to the freely made personal choices of others, even when we disagree with them. Even when we think they’re nonsensical. Even when we think they’ll bring only grief.
It’s the authoritarian who seeks to make their choices the choices of everyone. Too often that authoritarian is the social conservative. Allowing them to sanction themselves won’t end that. Too many are motivated by self-righteousness or a belief that what works for them must necessarily work for everyone else. We can’t fix that. But we sure as hell can be better than it.
Part of this is materials and labour. The city will probably go for something a bit more permanent than wood – probably concrete or metal – and will probably have higher labour costs (the mechanic hired a random guy off the street to help out, which is probably against city procurement policy). But a decent part (perhaps even the majority) of the increased costs will be driven by regulation.
First there’s the obvious compliance activities: site assessment, community consultation, engineering approval, insurance approval. Each of these will take the highly expensive time of highly skilled professionals. There’s also the less obvious (but still expensive and onerous) hoops to jump through. If the city doesn’t have a public works crew who can install the stairs, they’ll have to find a contractor. The search for a contractor would probably be governed by a host of conflict of interest and due diligence regulations; these are the sorts of things a well-paid city worker would need to sink a significant amount of time into managing. Based on the salary information I could find, half a week of a city bureaucrat’s time already puts us over the $550 price tag.
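The arithmetic behind that last sentence is easy to sketch. In the snippet below, the salary and hours figures are my own round-number assumptions (not the actual salary data the estimate was based on), so treat this as a back-of-envelope check:

```python
# Rough sanity check: does half a week of a bureaucrat's time exceed $550?
# The salary and hours figures are assumptions for illustration.
annual_salary = 80_000    # assumed city bureaucrat salary, CAD
hours_per_year = 2_000    # roughly 50 weeks x 40 hours
hourly_cost = annual_salary / hours_per_year    # $40/hour
half_week_cost = hourly_cost * 20               # 20 hours = half a working week
print(half_week_cost, half_week_cost > 550)     # 800.0 True
```

Even before adding benefits, overhead, or the cost of the professionals’ sign-offs, the compliance time alone clears the $550 the citizen-built stairs cost.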
And when the person in charge of compliance is highly skilled, the loss is worse than simple monetary terms might imply. Not only are we paying someone to waste their time, we are also paying the opportunity cost of that wasted time. Whenever some bright young lawyer or planner is stuck reading regulatory tomes instead of creating something, we are deprived of the benefits of what they could have created.
When it comes to the stairs, regulations don’t stop with our hypothetical city worker. The construction firm they hire is also governed by regulations. They have to track how much everyone works, make sure the appropriate taxes go to the appropriate parties, ensure compliance with workplace health and safety standards and probably take care of a dozen minor annoyances that I don’t know about. When you aren’t the person doing these things, they just blend into the background and you forget that someone has to spend a decent part of their time filling out incredibly boring government forms – forms that demand accuracy under pain of perjury.
Hell, the very act of soliciting bids can inflate the cost, because each bid will require a bunch of supporting paperwork (you can’t submit these things on a sticky note). As is becoming the common refrain, this takes time, which costs money. You’d better bet that whichever firm eventually gets hired will roll the cost of all its failed past bids (either directly or indirectly) into the price the city ends up paying.
It’s not just government regulations that drive up the price of stairs either. If the city has liability insurance, it will have to comply with a bunch of rules given to it by its insurer (or face higher premiums). If it chooses to self-insure, the city actuaries will come up with all sorts of internal policies designed to lessen the city’s chance of liability – or at least lessen the necessary payout when the city is inevitably sued by some drunk asshole who forgets how to do stairs and breaks a bone.
With all of this regulation (none of which seems unreasonable when taken in isolation!) you can see how the city was expecting to shell out $65,000 (at a minimum) for a simple set of stairs. That they managed to get the cost down to $10,000 in this case (to avoid the negative media attention of over-estimating the cost of stairs more than one hundred times over?) is probably indicative of city workers doing unpaid overtime, or other clever cost-hiding measures.
The point here is that regulation is expensive. It’s expensive everywhere it exists. The United States has over 1,000,000 pages of federal regulation. Canada makes its federal regulation available as a compressed XML dump(!) with a current uncompressed size of 559MB. Considering XML overhead, the sum total of Canadian federal regulation is probably approximately equivalent to that of the United States.
This isn’t it for either country; after federal regulation, there’s provincial/state and local regulations. Then there are the interactions between all three, with things becoming even worse when you want to do anything between different jurisdictions within a country or (and it’s a miracle this can even happen at all) between countries.
People who can hold a significant subset of these regulations in their head and successfully navigate them (without going mad from boredom) are a limited resource. Worse, they’re a limited resource who can be useful in a variety of fields (i.e. there has to be some overlap between the people who’d make good programmers, doctors, or administrators and the people who can parse and memorize reams of regulation). Limited supply and consistent (or increasing) demand drives the excessive cost of buying their time that I mentioned earlier.
This is the part where I’m supposed to talk about how regulation destroys jobs and how we should repeal it all if we care about the economic health of our society. But I’m not going to do that. The idea that regulation kills jobs is based on economic fallacies and not borne out by evidence (although it is surprisingly poorly studied and new evidence could change my mind here).
As best we can currently tell, regulation doesn’t destroy jobs; it shifts them. In a minimally regulated environment, there will be fewer jobs requiring highly educated compliance wizards and more jobs for everyone else. As the amount of regulation increases, we should see more and more labour shift from productive tasks to compliance tasks. Really, regulation is one of the best ways that elites can guarantee jobs for other elites.
Viewed through this lens, regulation is similar to a very regressive tax. It might be buying us social goods that we really want, but it does so in a way that transfers wealth from already disadvantaged workers to already advantaged workers. I think (absent offloading regulatory compliance onto specialized AI expert systems) that this might be an inherent feature of regulation.
When I see progressives talking about regulation, the tone is often that companies should whine about it less. I think it’s totally true that many companies push back against regulation that is (on the face of it at least) in the public good – and that companies aren’t pushing back primarily out of concern for their workers. However, rejecting the libertarian position doesn’t mean we should automatically support all regulation. After reading this, I hope you look at regulation as a problematically regressive tax that can have certain other benefits.
Corporations have no social duty beyond giving returns to their shareholders. It’s only through regulation that we can channel them away from anti-social behaviour. Individuals are a bit better, motivated as they are by several things beyond money, but regulation is still sometimes needed to help us avoid the tragedy of the commons.
Regulation isn’t just the purview of the government. If all government regulation disappeared overnight, private regulation – overseen primarily by insurance companies – would take its place. The ubiquity of liability insurance in this litigious age has already turned many insurers into surrogate regulators.
Insurance companies really hate paying out money. They can only make money if they make more in premiums than they pay out for losses. The loss prevention divisions of major insurers work with their clients, making sure they toe the line of the insurer’s policies and raising their premiums when they don’t.
This task has become especially important for the insurers who provide liability insurance to police departments. Many local governments lack the political will to rein in their police force when they engage in misconduct, but insurance companies have no such compunctions. Insurers have written use of force policies, provided expensive training, furnished use of force simulators, and ordered the firing of chiefs and ordinary officers alike.
When insurers make these demands, they expect to be obeyed. Cross an insurer and they’ll withdraw insurance or make the premiums prohibitively high. It isn’t unheard of for police departments to be disbanded if insurers refuse to cover them. Absent liability insurance, a single lawsuit can literally bankrupt a small municipality, a risk most councillors won’t take.
As the Columbia Law School article linked above suggested, it may be possible to significantly affect the behaviour of insurance purchasers with regulation that is targeted at insurers. I also suspect that you can abstract things even further and affect the behaviour of insurers (and therefore their clients) by making arcane changes to how liability works. This has the dubious advantage of making it possible to achieve political goals without obviously working towards them. It seems likely that it’s harder to get together a big protest when the aim you’re protesting against is hidden behind several layers of abstraction.
Regulation isn’t inherently good or bad. It should be able to stand on its own merits and survive a cost-benefit analysis. This will inevitably become a tricky political question, because different people weight costs and benefits differently, but it isn’t an intractable political problem.
(I know that’s what I always say. But it’s a testament to the current political climate that saying “policy should be based on cost-benefit analyses, not ideology” can feel radical.)
I would suggest that if you’re the type of person whose knee-jerk response to regulation is to support it, you should look at how it will displace labour from blue-collar to white-collar industries or raise prices, and ponder if this is worth its benefits. If instead you oppose regulation by default, I’d suggest looking at its goals and remembering that the cost of reaching them isn’t infinite. You might be surprised at what a true cost-benefit analysis returns.
Also, it probably seems true that some things are a touch over-regulated if $65,000 (or even $10,000) is an unsurprising estimate for a set of stairs.
 Of course, even unpaid overtime has a cost. After a lot of it, you might feel justified in taking a rather longer paid vacation than you otherwise would. Not to mention that long hours with inadequate breaks can harm productivity in the long run. ^
 It seems to rest on the belief that regulation makes things more expensive, therefore fewer people buy them, therefore fewer people are needed to produce them. What this simple analysis misses (and what’s pointed out in the Pro Publica article I linked) is that regulatory compliance is a job. Jobs lost directly producing things are more or less offset by jobs dealing with regulations, such that increased regulation has an imperceptible effect on employment. This seems related to the lump of labour fallacy, although I’ve yet to figure out how to clearly articulate the connection. ^
 In Filthy Lucre, Professor Joseph Heath talks about the failures of state-run companies to create “socially inclusive growth”. Basically, managers in companies care far more about their power within the company than the company being successful (the iron law of institutions). If you give them a single goal, you can align their incentives with yours and get good results. Give them two goals and they’ll focus on building up their own little fief within the company and explaining away any failures (from your perspective) as the necessary results of balancing their dual tasks (“yes, I posted no profits, I was trying to be very socially inclusive this quarter”).
Regulation, if set up so that it seriously affects profits (or so that it has high personal consequences for managers), forces managers to avoid acting in a ruinously anti-social way without leaving them with the sort of divided loyalties that can cause companies to become semi-feudal. ^
 The end game would quite possibly involve supermarkets setting up legally separate (with significant board overlap) charitable organizations that would handle the distribution, and compelling these shells (who would carry almost no cash so as to be judgement proof) to sign contracts indemnifying the source supermarket against all lawsuits. This would require lots and lots of lawyer time and money, which means consumers would see higher food prices. ^
 Actually, higher food prices are pretty much inevitable, because there’s still a bunch of new logistics that have to be worked out as a result of this law. If the logistics turn out to be more expensive than the fines, supermarkets will continue to throw out food (while passing the costs of the fines on to the consumer). If the fines are more expensive, then food will be donated (but the price of donating it will still inevitably be passed on to consumers). Any government program that makes food more expensive is incredibly regressive – it’s this realization that underlies the tax-free status of unprepared food in Canada.
Supplemental nutrition programs (AKA “food stamps”) have the benefit of subsidizing food for those who need it from the general tax pool, which can be based on progressive taxation and mainly paid for by the wealthy.
It’s really easy to see a bunch of food sitting around and realize it could be better used. It’s really hard (and expensive) to actually handle the transport and preparation of that food. ^
 Meaning that a government that really wanted to reduce regulation would have to make it rather hard to sue anyone. This seems like an unlikely use of political capital and also probably in conflict with many notions of fundamental justice.
Anyway, you should look at changes to liability the same way you look at regulation. Ultimately, they may amount to the same thing. ^
 This is dubious because it’s inherently anti-democratic (the government is taking actions designed to be opaque to the governed) and also incredibly baroque. I’m not talking about simple changes to liability that will be intuitively understood. I’m talking about provisos written in solid legalese that tweak liability in ways that I wouldn’t expect anyone without a law degree and expertise in liability law to understand. If a government was currently doing this, I would expect that I wouldn’t know it and wouldn’t understand it even if it was pointed out to me. ^
 Note, crucially, that it feels radical, but isn’t. Most people who read my blog already agree with me here, so I’m not actually risking any consequences by being all liberal/centrist/neo-liberal/whatever we’re calling people who don’t toe the party line this week. ^
Foreword: November 8th was one of the worst nights of my life, in a way that might have bled through – just a bit, mind you – into this review. My position will probably mellow as the memories of my fear and disappointment fade.
My latest non-fiction read was Shattered: Inside Hillary Clinton’s Doomed Campaign. In addition to making me consider a career in political consultancy, it gave me a welcome insight into some of the fascinating choices the Clinton campaign made during the election.
I really do believe this book was going to rip on the campaign no matter the outcome. Had Clinton won, the thesis would have been “the race was closer than it needed to be”, not “Clinton’s campaign was brilliant”.
Despite that, I should give the classic disclaimer: I could be wrong about the authors; it’s entirely possible that they’d have extolled the brilliance of Clinton had she won. It’s also true that Clinton almost won and if she had, she would have captured the presidency in an extremely cost-effective way.
But almost only counts in horseshoes and hand grenades and an election is neither. Clinton lost. The 11th-hour letter from Comey to Congress and Russian hacking may have tipped her over, but ultimately it was the decisions of her campaign that allowed Donald Trump to be within spitting distance of her at all.
Shattered lays a lot of blame for those bad decisions in the lap of Robby Mook, Clinton’s campaign manager. Throughout the book, he’s portrayed as dogmatically obsessed with data, refusing to do anything that doesn’t come up as optimal in his models. It was Mook who refused to do polling (because he thought his analytics provided almost the same information at a fraction of the cost), Mook who refused to condone any attempts at persuading undecided or weak Trump voters to back Clinton, Mook who consistently denied resources to swing state team leads, and Mook who responded to Bill Clinton’s worries about anti-establishment sentiment and white anger with “the data run counter to your anecdotes”.
We now have a bit more context in which to view Mook’s “data” and Bill’s “anecdotes”.
I’m a committed empiricist, but Mook’s “data driven” approach made me repeatedly wince. Anything that couldn’t be measured was discounted as unimportant. Anything that wasn’t optimal was forbidden. And any external validation of models – say via polls – was vetoed because Mook didn’t want to “waste” money validating models he was so confident in.
Mook treated the election as a simple optimization problem – he thought he knew how many votes or how much turnout was associated with every decision he could make, and he assumed that if he fed all this into computers, he’d get the definitive solution to the election.
The problem here is that elections remain unsolved. There doesn’t exist an equation that lets you win an election. There are too many factors and too many unknowns, and you aren’t acting in a vacuum. You have an opponent who is actively countering you. And it should go almost without saying that an optimal solution to an election is only possible if the solution can be kept secret. If your opponent knows your solution, they will find a way to counter it.
Given that elections are intractable as simple optimization problems, a smart campaign will rely on experienced humans to make major decisions. Certainly, these humans should be armed with the best algorithms, projections, data, and cost-benefit analyses that a campaign can supply. But to my (outsider) eyes, it seems absolutely unconscionable to cut out the human element and ignore all of the accumulated experience a campaign brain trust can bring to bear on an election. Clinton didn’t lack for a brain trust, but her brain trust certainly lacked for opportunities to make decisions.
Not all the blame can rest on Mook though. The campaign ultimately comes down to a candidate and quite frankly, there were myriad ways in which Clinton wasn’t that great of a candidate.
First: vision. She didn’t have one. Clinton felt at home in policy, so her campaign had a lot of it. She treated the election like a contest to create policy that would appeal to the rational self-interest of a winning coalition of voters. Trump tried to create a story that would appeal to the self-conception of a winning coalition of voters.
I don’t think one is necessarily superior to the other, but I’ve noticed that charismatic and generally liked leaders (Trudeau, Macron, Obama if we count his relatively high approval ratings at the end of his presidency) manage to combine both. Clinton was the “establishment” candidate, the candidate that was supposed to be good at elections. She had every opportunity to learn to use both tools. But she only ever used one, depriving her of a critical weapon against her opponent. In this way, she was a lot like Romney.
(Can you imagine Clinton vs. Romney? That would have been high comedy right there.)
After vision comes baggage. Clinton had a whole mule train of it. Her emails, her speeches, her work for the Clinton foundation – there were plenty of time bombs there. I know the standard progressive talking point is that Clinton had baggage because a woman had to be in politics as long as she did before she would be allowed to run for the presidency. And if her baggage was back room deals with foreign despots or senate subcommittees (the two generally differ only in the lavishness of the receptions they throw, not their moral character) that explanation would be all well and good.
But Clinton used a private email server because she didn’t want the laws on communication disclosures to apply to her. She gave paid speeches and hid the transcripts because she felt entitled to hundreds of thousands of dollars and (apparently) thought she could take the money and then remain impartial.
Both of these unforced errors showed poor judgement and entitlement. They weren’t banal expressions of the compromises people need to make to govern. They showed real contempt for the electorate, in that they sought to deny voters a chance to hold Clinton accountable for what she said, both as the nation’s top diplomat and as (perhaps only briefly) its most exorbitantly compensated public speaker.
While she was hiding things, I doubt Clinton explicitly thought “fuck the voters, I don’t care what they think”; it was probably closer to “damned if I’m giving everyone more ammunition to get really angry about”. Unfortunately, the second isn’t benign in a democracy, where responsible government first and foremost requires politicians to be responsible to voters for all of their beliefs and actions, even the ones they’d rather keep out of the public eye. To allow any excuse at all to be used to escape from responsible government undermines the very idea of it.
As a personal note, I think it was stupid of Clinton to be so contemptuous because it made her long-term goals more difficult, but I also think her contempt was understandable in light of the fact that she’s waded through more bullshit in the service of her country than any five other politicians combined. Politicians are humans and make mistakes and it’s possible to understand and sympathize with the ways those mistakes come from human frailty while also condemning the near-term effects (lost elections) and long-term effects (decreased trust in democratic institutions) of bad decisions.
The final factor that Clinton deserves blame for is her terrible management style. When talking about management, Peter Thiel opined that only a sociopath would give two people the same job. If this is true – I’m inclined to trust him under the principle that it takes one to know one – Clinton is a sociopath. There was no clear chain of command for the campaign. At every turn, people could see their work undone by well-connected “Clinton World” insiders. The biggest miracle is that the members of the campaign managed to largely keep this on the down-low.
Clinton made much of Obama’s 2008 “drama free” campaign. She wanted her 2016 campaign to run the same way. But instead of adopting the management habits that Obama used to engender loyalty, she decided that the differences lay everywhere but in the candidates; if only she had better, more loyal people working for her, she’d have the drama free campaign she desired. And so, she cleaned house, started fresh, and demanded that there would be no drama. As far as the media was concerned, there wasn’t. But under the surface, things were brutal.
Mook hid information from pretty much everyone because his position felt precarious. No one told Abedin anything because they knew she’d tell it right to Clinton, especially if it wasn’t complimentary. Everyone was scared that their colleagues would stab them in the back to prove their loyalty to Clinton. Employees who failed were stripped of almost all responsibilities, but never fired. In 2008, fired employees ‘took the axes they had to grind, sharpened them, and jammed them in Clinton’s back during media interviews’. Clinton learned lessons from that, but I’m not sure if they were the right ones.
I’m not sure how much of this was text and how much was subtext, but I emerged from Shattered feeling that the blame for losing the election can’t stop with the Clinton camp. There’s also Bernie Sanders. I don’t think anyone can blame him for talking about emails and speeches, but I’ve come to believe that the chip on his shoulder about the unfairness of the primary was way out of line; if anyone in the Democratic Party beat Clinton on a sense of entitlement, it was Sanders.
Politics is a team sport. You can’t accomplish anything alone, so you have to rely on other people. Clinton (whatever her flaws) was reliable. She fought and she bled and she suffered for the Democratic Party. Insofar as anyone has ever been owed a nomination, Clinton was owed this one.
Sanders hadn’t even fundraised for the party. And he expected them not to do whatever they could for Clinton? Why? He was an outsider trying to hijack their institution. His complaints would have been fair from a Democrat, but from an independent socialist?
On the Republican side, Trump had the same thing going on (and presumably would have been equally damaging to another nominee had he lost). In both cases, the party owed them nothing. It was childish of Bernie to go on like the party was supposed to be impartial.
(Also, in what meaningful ways vis a vis ability to hire staff and coordinate policy would you expect a Sanders White House to be different from the Trump White House? If you didn’t answer “none”, then you have some serious thinking to do.)
You’d think the effect of all of this would be for me to feel contempt for the Democratic Party in general and Clinton in particular. But aside from Sanders, I came out of it feeling really sorry for everyone involved.
I felt sorry for Debbie Wasserman Schultz. Sanders’ inflammatory rhetoric necessitated throwing her under the bus right before the convention. She didn’t take it gracefully, but then, how could she? She’d flown her whole family from Florida to Philadelphia to see her moment of triumph as Chairwoman of the DNC speaking at the Democratic National Convention and had it all taken away from her so that Sanders’ supporters wouldn’t riot (and apparently it was still a near thing). She spent the better part of the day negotiating her exit with the Clinton campaign’s COO, instead of appearing on the stage like she’d hoped to. The DNC ended up footing the bill for flying her family home.
I felt sorry for Mook. He had a hard job and less power and budget than were necessary to do it well. He trusted his models too much, but this is partially because he was really good with them. Mook’s math made it almost impossible for Sanders to win. Clinton had been terrible at delegate math in 2008. Mook redeemed that. To give just one example of his brilliance, he prioritized media spending in districts with an odd number of delegates, which meant that Clinton won an outsize number of delegates from her wins and losses.
I felt sorry for the whole Clinton campaign. Things went so wrong, so often that they had a saying: “we don’t get to have nice things”. Media ignores four Clinton victories to focus on one of Sanders’? “We don’t get to have nice things”. Trump goes off the rails, but it gets overshadowed by the ancient story about emails? “We don’t get to have nice things.”
Several members of the campaign had their emails hacked (probably by the Russians). Instead of reporting on the Russian interference and Russian ties to the Trump campaign, the media talked about those emails over and over again in the last month of the election. That must have been maddening for the candidate and her team.
Even despite that, I felt sorry for the press, who by and large didn’t want Trump to win, but were forced by a string of terrible incentives to consistently cover Clinton in an exceedingly damning way. If you want to see Moloch’s hand at work, look no further than reporting on the 2016 election.
But most of all, I felt sorry for Clinton. Here was a woman who had spent her whole adult life in politics, largely motivated by a desire to help women and children (causes she’d been largely successful at). As Secretary of State, she flew 956,733 miles (equivalent to two round trips to the moon) and visited 112 countries. She lost two races for the presidency. And it must have been so crushing to have bled and fought and given so much, to think she’d finally succeeded, then to have it all taken away from her by Donald Trump.
Yet, she conceded anyway. She was crushed, but she ensured that America’s legacy of peaceful transfers of power would continue.
November 8th may have been one of the worst nights of my life. But I’m not self-absorbed enough to think my night was even remotely as bad as Clinton’s. Clinton survived the worst the world could do to her and is still breathing and still trying to figure out what to do next. If her campaign gave me little to admire, that resilience makes up a good bit of the gap.
I really recommend Shattered for anyone who wants to see just how off the rails a political campaign can go when it’s buffeted by a combination of candidate ineptitude, unclear chains of command, and persistent attacks from a foreign adversary. It’s a bit repetitious at times, which was sometimes annoying and sometimes helpful (especially when I’d forgotten who was who), but otherwise grippingly and accessibly written. The fascinating subject matter more than makes up for any small burrs in the delivery.
 In a district that has an odd number of delegates, winning by a single vote meant an extra delegate. In a district with 6 delegates, you’d get 3 delegates if you won between 50% and 67% of the votes. In a district with 7, you’d get 4 if you won by even a single vote, and five once you surpassed 71%. If a state has ten counties, four with seven delegates and six with six delegates, you can win the state by four delegates if you squeak to a win in the four districts with seven delegates and win at least 34% of the vote in each of the others. In practice, statewide delegates prevent such wonky scenarios except when the vote is really close, but this sort of math remains vital to winning a close race. ^
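The thresholds in this footnote are consistent with a simple toy rule (my reconstruction for illustration, not the DNC’s actual allocation formula): the majority winner gets at least a bare majority of the district’s delegates, rounded up, or their proportional share rounded down, whichever is larger. A minimal sketch, using integer vote counts to avoid floating-point surprises:

```python
from math import ceil

def winner_delegates(winner_votes: int, total_votes: int, n_delegates: int) -> int:
    """Delegates won by a district's majority winner under the toy rule:
    max(bare majority of delegates, proportional share rounded down)."""
    assert 2 * winner_votes > total_votes, "must be the majority winner"
    proportional = (winner_votes * n_delegates) // total_votes
    return max(ceil(n_delegates / 2), proportional)

# 7-delegate district: a single-vote win already yields 4 of 7,
# and the 5th delegate arrives only past 5/7 ≈ 71% of the vote.
# 6-delegate district: a 60% win still splits the delegates 3–3;
# the 4th delegate arrives only past 4/6 ≈ 67% of the vote.
```

Under this rule, squeaking out wins in the four 7-delegate districts (4–3 each) while holding 3–3 ties in the six 6-delegate districts nets the four-delegate statewide margin described above.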
 WikiLeaks released the hacked emails a few hundred a day for the last month of the election, starting right after the release of Trump’s “grab her by the pussy” video. This steady drip-drip-drip of bad press was very damaging for the Clinton campaign, especially because many people didn’t differentiate this from the other Clinton-email story. ^
Where you come down on either of these – or any similar cases where there’s a clear trade-off between maximum access and minimum standards – is probably heavily dependent on your situation. If you’re an American millennial without an employer-provided or parental health care plan, you’re probably quite incensed about the lack of catastrophic health care insurance. For healthy young adults, those plans were an excellent deal.
I like to point out that regulation is a trade-off. Unfortunately (or perhaps fortunately), it’s a trade-off made at the middle. People in the long probability tails – those who are far from the median when it comes to income or risk tolerance – often feel left out by any of the trade-offs made by the majority. This is an almost inevitable side effect of trade-offs that I rarely see mentioned.
If you have health problems for which Obamacare didn’t mandate coverage, then you might find yourself wishing that the coverage requirements were even more expansive. If you find yourself really hating the illegal AirBnB you’re living in with twelve other programmers, you might wish that the city’s rental enforcement unit was a bit more on their game.
Most articles about people on the extremes leave out the context and leave out the satisfied middle. They don’t say “this is the best trade-off we could get, but it’s still imperfect and it still hurts people”. They say breathlessly “look at this one person hurt by a policy, the policy hurts people and is bad; the people who advocate for it are evil.”
It’s understandable to leave out the middle in search of a better story. The problem arises when you leave out the middle and then claim all advocates are evil for failing to care about the fringes. Because most of the time, no one is being evil.
The young people skipping out on coverage because it’s not worth it for them aren’t shirking a duty. They’re making the best of their limited finances, ravaged by a tough entry-level job market and expensive university education. The NIMBYs who fight against any change to local building codes that might make housing more affordable are over-leveraged on their houses and might end up underwater if prices fall at all.
Even appeals to principles don’t do much good in situations like this. You can say “no one should live in squalor”, but that might run right up against “everyone should be able to afford a place to live”. It can be that there simply isn’t enough housing supply in desirable cities to comfortably accommodate everyone who wants to live there – and the only way to change that involves higher direct or indirect taxes (here an indirect tax might be something like requiring 15% of new rental stock to be “affordable”, which raises the price of other rental stock to compensate), taxes that will exclude yet another group of people.
When it comes to healthcare in America, you can say “young people shouldn’t be priced out of the market”, but this really does compete with “old people shouldn’t be priced out of the market” or “pre-existing conditions shouldn’t be grounds for coverage to be denied”.
The non-American way of doing healthcare comes with its own country-specific trade-offs. In Germany, if you switch from the public plan to a private plan it is very hard to get back on the public plan. This prevents people from gaming the system – holding cheaper private insurance while they’re young, healthy, and earning money, then trying to switch back during their retirement – but it can also leave people out in the cold with no insurance.
In Canada, each province has a single, government-run insurance provider that charges non-actuarial premiums (premiums based on how much you make, not how likely you are to use healthcare services). This guarantees universal coverage, but also results in some services (especially those without empirical backing, or where the cost-benefit is too low) remaining uncovered. Canada also prohibits mixing of public and private funds, making private healthcare much more expensive.
Canadians aren’t spared hard choices; we just have to make different trade-offs than Americans. Here we must pick (and did pick) between “the government shouldn’t decide who lives and who dies” and “care should be universal”. This choice was no less wrenching than any of those faced by Obamacare’s drafters.
Municipalities face similar challenges around housing policy. San Francisco is trying to retain the character of the city and protect existing residents with rent control and strict zoning regulations. The Region of Waterloo, where I live, has gone the other way. Despite a much lower population and much less density, it has almost as much construction as San Francisco (16 cranes for Waterloo vs. 22 for SF).
This comes at a cost. Waterloo mandated that houses converted into rental properties cannot hold more than three unrelated tenants per unit, thereby producing guaranteed renters for all the new construction (and alleviating concerns about students living in squalid conditions). The region hopes that affordability will come through densification, but this cuts down on the options student renters have (and can make it more expensive for them to rent).
Toronto is going all out building (it has about as many cranes on its skyline as Seattle, Los Angeles, New York, San Francisco, Boston, Chicago, and Phoenix combined), at the cost of displacing residents in rooming houses. There’s the hope that eventually supply will bring down Toronto’s soaring house costs, but it might be that more formal monthly arrangements are out of the reach of current rooming house residents (especially given that rent control rules have resulted in a 35-year drought on new purpose-built rental units).
In all of these cases, it’s possible to carve out a sacred principle and defend it. But you’re going to run into two problems with your advocacy. First, there’s going to be resistance from the middle of society, who have probably settled on the current trade-off because it’s the least offensive to them. Second, you’re going to find people on the other underserved extreme, convinced all the problems they have with the trade-off can be alleviated by the exact opposite of what you’re advocating.
Obamacare looked like it would be impossible to defend without Democrats controlling at least one lever of government. Republicans voted more than 50 times to repeal Obamacare. Now that they control everything, there is serious doubt that they’ll be able to change it at all. Republicans got drunk on the complaints of people on the long tails, the people worst served by Obamacare. They didn’t realize it really was the best compromise that could be obtained under the circumstances, or just how unpopular any attempt to change that compromise would be.
This is going to be another one of those posts where I don’t have a clear prescription for fixing anything (except perhaps axing rent control aka “the best way to destroy a city’s rental stock short of bombing it”). I don’t actually want to convince people – especially people left out of major compromises – not to advocate for something different. It’s only through broad input that we get workable compromises at all. Pluralistic society is built on many legitimate competing interests. People are motivated by different terminal values and different moral foundations.
Somehow, despite it all, we manage to mostly not kill each other. Maybe my prescription is simply that we should keep trying to find workable compromises and keep trying not to kill each other. Perhaps we could stand to put more effort into understanding why people ask for what they do. And we could try and be kind to each other. I feel comfortable recommending that.
The author is one Sir Bernard Williams. According to his Wikipedia page, he was a particularly humanistic philosopher in the old Greek mode. He was skeptical of attempts to build an analytical foundation for moral philosophy and of his own prowess in arguments. It seems that he had something pithy or cutting to say about everything, which made him notably cautious of pithy or clever answers. He’s also described as a proto-feminist, although you wouldn’t know it from his writing.
Williams didn’t write his essay out of a rationalist desire to disprove utilitarianism with pure reason (a concept he seemed every bit as skeptical of as Smart was). Instead, Williams wrote this essay because he agrees with Smart that utilitarianism is a “distinctive way of looking at human action and morality”. It’s just that unlike Smart, Williams finds the specific distinctive perspective of utilitarianism often horrible.
Smart anticipated this sort of reaction to his essay. He himself despaired of finding a single ethical system that could please everyone, or even please a single person in all their varied moods.
One of the very first things I noticed in Williams’ essay was the challenge of attacking utilitarianism on its own terms. To convince a principled utilitarian that utilitarianism is a poor choice of ethical system, it is almost always necessary to appeal to the consequences of utilitarianism. This forces any critic to frame their arguments a certain way, a way which might feel unnatural. Or repugnant.
Williams begins his essay proper with (appropriately) a discussion of consequences. He points out that it is difficult to hold actions as valuable purely by their consequences because this forces us to draw arbitrary lines in time and declare the state of the world at that time the “consequences”. After all, consequences continue to unfold forever (or at least, until the heat death of the universe). To have anything to talk about at all, Williams decides that it is not quite consequences that consequentialism cares about, but states of affairs.
Utilitarianism is the form of consequentialism that has happiness as its sole important value and seeks to bring about the state of affairs with the most happiness. I like how Williams refused to let utilitarianism beg this question, as it so commonly does. He essentially asks: why should happiness be the only thing we treat as intrinsically valuable? Williams mercifully didn’t drive this home, but I was still left with uncomfortable questions for myself.
Instead he moves on to his first deep observation. You see, if consequentialism were just about valuing certain states of affairs more than others, you could call deontology a form of consequentialism that held that duty was the only intrinsically valuable thing. But that can’t be right, because deontology is clearly different from consequentialism. The distinction, Williams suggests, is that consequentialists discount the possibility of actions holding any inherent moral weight. For a consequentialist, an action is right because it brings about a better state of affairs. For non-consequentialists, a state of affairs can be better – even if it contains less total happiness or integrity or whatever they care about than a counterfactual state of affairs given a different action – because the right action was taken.
A deontologist would say that it is right for someone to do their duty in a way that ends up publicly and spectacularly tragic, such that it turns a thousand people off of doing their own duty. A consequentialist who viewed duty as important for the general moral health of society – who, in Smart’s terminology, viewed acting from duty as good – would disagree.
Williams points out that this very emphasis on comparing states of affairs (so natural to me) is particularly consequentialist and utilitarian. That is to say, it is not particularly meaningful for a deontologist or a virtue ethicist to compare states of affairs. Deontologists have no duty to maximize the doing of duty; if you ask a deontologist to choose between a state of affairs that has one hundred people doing their duty and another that has a thousand, it’s not clear that either state is preferable from their point of view. Sure, deontologists think people should do their duty. But duty embodied in actions is the point, not some cosmic tally of duty.
Put as a moral statement, non-consequentialists lack any obligation to bring about more of what they see as morally desirable. A consequentialist may feel both fondness for and a moral imperative to bring about a universe where more people are happy. Non-consequentialists only have the fondness.
One deontologist of my acquaintance said that trying to maximize utility felt pointless – they viewed it as about as morally important as racking up a high score in a Tetris game. We ended up staring at each other in blank incomprehension.
In Williams’ view, rejection of consequentialism doesn’t necessarily lead to deontology, though. He sums it up simply as: “all that is involved… in the denial of consequentialism, is that with respect to some type of action, there are some situations in which that would be the right thing to do, even though the state of affairs produced by one’s doing that would be worse than some other state of affairs accessible to one.”
A deontologist will claim right actions must be taken no matter the consequences, but to be non-consequentialist, an ethical system merely has to claim that some actions are right despite a variety of more or less bad consequences that might arise from them.
Or, as I wrote angrily in the margins: “ok, so not necessarily deontology, just accepting sub-maximal global utility”. It is hard to explain to a non-utilitarian just how much this bugs me, but I’m not going to go all rationalist and claim that I have a good reason for this belief.
Williams then turns his attention to the ways in which he thinks utilitarianism’s insistence on quantifying and comparing everything is terrible. Williams believes that by refusing to categorically rule any action out (or worse, specifically trying to come up with situations in which we might do horrific things), utilitarianism encourages people – even non-utilitarians who bump into utilitarian thought experiments – to think of things in utilitarian (that is to say, explicitly comparative) terms. It seems like Williams would prefer there to be actions that are clearly ruled out, not just less likely to be justified.
I get the impression of a man almost tearing out his hair because for him, there exist actions that are wrong under all circumstances and here we are, talking about circumstances in which we’d do them. There’s a kernel of truth here too. I think there can be a sort of bravado in accepting utilitarian conclusions. Yeah, I’m tough enough that I’d kill one to save a thousand. You wouldn’t? I guess you’re just soft and old-fashioned. For someone who cares as much about virtue as I think Williams does, this must be abhorrent.
I loved how Williams summed this up.
The demand… to think the unthinkable is not an unquestionable demand of rationality, set against a cowardly or inert refusal to follow out one’s moral thoughts. Rationality he sees as a demand not merely on him, but on the situations in and about which he has to think; unless the environment reveals minimum sanity, it is insanity to carry the decorum of sanity into it.
For all that I enjoyed the phrasing, I don’t see how this changes anything; there is nothing at all sane about the current world. A life is worth something like $7 million to $9 million and yet can be saved for less than $5000. This planet contains some of the most wrenching poverty and lavish luxury imaginable, often in the very same cities. Where is the sanity? If Williams thinks sane situations are a reasonable precondition to sane action, then he should see no one on earth with a duty to act sanely.
The next topic Williams covers is responsibility. He starts with a discussion of agent interchangeability in utilitarianism. Williams believes that utilitarianism merely requires that someone do the right thing. This implies that to the utilitarian, there is no meaningful difference between me doing the utilitarian right action and you doing it, unless something about me doing it instead of you leads to a different outcome.
This utter lack of concern for who does what, as long as the right thing gets done, doesn’t actually absolve utilitarians of responsibility. Instead, it tends to increase it. Williams says that unlike adherents of many ethical systems, utilitarians have negative responsibilities; they are just as responsible for the things they don’t do as for the things they do. If something must be done and no one else will do it, then you have to.
This doesn’t strike me as that unique to utilitarianism. I was raised Catholic and can attest that Catholics (who are supposed to follow a form of virtue ethics) have a notion of negative responsibility too. At every mass, before receiving the Eucharist, Catholics ask God for forgiveness for their sins: in thoughts and in words, in what they have done and in what they have failed to do.
Leaving aside whether the concept of negative responsibility is uniquely utilitarian or not, Williams does see problems with it. Negative responsibility makes so much of what we do dependent on the people around us. You may wish to spend your time quietly growing vegetables, but be unable to do so because you have a particular skill – perhaps one you don’t even enjoy using – that the world desperately needs. Or you may wish never to take a life, yet be confronted with a runaway trolley that can only be diverted from hitting five people by pulling the lever that makes it hit one.
This didn’t really make sense to me as a criticism until I learned that Williams deeply cares about people living authentic lives. In both the cases above, authenticity played no role in the utilitarian calculus. You must do things, perhaps things you find abhorrent, because other people have set up the world such that terrible outcomes would happen if you didn’t.
It seems that Williams might consider it a tragedy for someone to feel compelled by their ethical system to do something inauthentic. I imagine he views this as about as much of a crying waste of human potential as I view the yearly deaths of 429,000 people due to malaria. For all my personal sympathy for him, I am less than sympathetic to a view that gives these the same weight (or treats inauthenticity as the greater tragedy).
Radical authenticity requires us to ignore society. Yes, utilitarianism plops us in the middle of a web of dependencies and a buffeting sea of choices that were not ours, while demanding we make the best out of it all. But our moral philosophies surely are among the things that push us towards an authentic life. Would Williams view it as any worse that someone was pulled from her authentic way of living because she would starve otherwise?
To me, there is a certain authenticity in following your ethical system wherever it leads. I find this authenticity beautiful, but not worthy of moral consideration, except insofar as it affects happiness. Williams finds this authenticity deeply important. But by rejecting consequentialism, he has no real way to argue for more of the qualities he desires, except perhaps as a matter of aesthetics.
It seems incredibly counter-productive to me to say to people – people in the midst of a society that relentlessly pulls them away from authenticity with impersonal market forces – that they should turn away from the one ethical system that takes a happier society as its desired outcome. A Kantian has her duty to duty, but as long as she does that, she cares not for the system. A virtue ethicist wishes to be virtuous and authentic, but outside of her little bubble of virtue, the terrors go on unabated. It’s only the utilitarian who holds a better society as an end in itself.
Maybe this is just me failing to grasp non-utilitarian epistemologies. It baffles me to hear “this thing is good and morally important, but it’s not like we think it’s morally important for there to be more of it; that goes too far!”. Is this a strawman? If someone could explain what Williams is getting at here in terms I can understand, I’d be most grateful.
I do think Williams misses one key thing when discussing the utilitarian response to negative responsibility. Actions should be assessed on the margin, not in isolation. That is to say, the marginal effect of someone becoming a doctor, or undertaking some other career generally considered benevolent is quite low if there are others also willing to do the job. A doctor might personally save hundreds, or even thousands of lives over her career, but her marginal impact will be saving something like 25 lives.
The reasons for this are manifold. First, when there are few doctors, they tend to concentrate on the most immediately life-threatening problems. As you add more and more doctors, they can help, but after a certain point, the supply of doctors will outstrip the demand for urgent life-saving attention. They can certainly help with other tasks, but they will each save fewer lives than the first few doctors.
Second, there is a somewhat fixed supply of doctors. Despite many, many people wishing they could be doctors, only so many can get spots in medical school. Even assuming that medical school admissions departments are perfectly competent at assessing future skill at being a doctor (and no one really believes they are), your decision to attend medical school (and your successful admission) doesn’t result in one extra doctor. It simply means that you were slightly better than the next best person (who would have been admitted if you weren’t).
Finally, when you become a doctor you don’t replace one of the worst already practising doctors. Instead, you replace a retiring doctor who is (for statistical purposes) about average for her cohort.
All of this is to say that utilitarians should judge actions on the margin, not in absolute terms. It isn’t that bad (from a utilitarian perspective) not to devote all your attention to the most effective direct work, because unless a certain project is very constrained by the number of people working on it, you shouldn’t expect to make much marginal difference. On the other hand, earning a lot of money and giving it to highly effective charities (or even a more modest commitment, like donating 10% of your income) is likely to do a huge amount of good, because most people don’t do this, so you’re replacing a person at a high paying job who was doing (from a utilitarian perspective) very little good.
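The average-versus-marginal distinction can be made concrete with a toy model. Everything here is invented for illustration (the logarithmic curve, the constants, the doctor counts are all assumptions, not data); the only point is that once many doctors already practise, the marginal impact of one more falls well below the average impact per doctor.

```python
import math

# Toy model: total lives saved per year as a concave function of the
# number of practising doctors. Early doctors take the most urgent,
# life-threatening cases, so returns diminish. All constants invented.

def total_lives_saved(n_doctors: int) -> float:
    """Concave (diminishing-returns) curve of lives saved per year."""
    return 10_000 * math.log1p(n_doctors / 100)

def average_impact(n: int) -> float:
    """Average lives saved per doctor when there are n doctors."""
    return total_lives_saved(n) / n

def marginal_impact(n: int) -> float:
    """Extra lives saved by the n-th doctor specifically."""
    return total_lives_saved(n) - total_lives_saved(n - 1)

# With many doctors already practising, the average impact per doctor
# far exceeds the marginal impact of adding one more.
print(average_impact(5000))
print(marginal_impact(5000))
```

Under these assumptions, the first doctor saves roughly a hundred lives a year at the margin, while the five-thousandth saves about two; the average per doctor sits well above that marginal figure, which is exactly the gap the naive "a doctor saves thousands of lives" framing misses.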
Williams either isn’t familiar with this concept, or omitted it in the interest of time or space.
Williams’ next topic is remoter effects. A remoter effect is any effect that your actions have on the decision making of other people. For example, if you’re a politician and you lie horribly, are caught, and get re-elected by a large margin, a possible remoter effect is other politicians lying more often. With the concept of remoter effects, Williams is pointing at what I call second order utilitarianism.
Williams makes a valid point that many of the justifications from remoter effects that utilitarians make are very weak. For example, despite what some utilitarians claim, telling a white lie (or even telling any lie that is unpublicized) doesn’t meaningfully reduce the propensity of everyone in the world to tell the truth.
Williams thinks that many utilitarians get away with claiming remoter effects as justification because they tend to be used as a way to make utilitarianism give the common, respectable answers to ethical dilemmas. He thinks people would be much more skeptical of remoter effects if they were ever used to argue for positions that are uncommonly held.
This point about remoter effects was, I think, a necessary precursor to Williams’ next thought experiment. He asks us to imagine a society with two groups, A and B. There are many more members of A than B. Furthermore, members of A are disgusted by the presence (or even the thought of the presence) of members of group B. In this scenario, there has to exist some level of disgust and some ratio between A and B that makes the clear utilitarian best option relocating all members of group B to a different country.
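The arithmetic of the trap can be spelled out with invented numbers. This is a toy sketch, not an endorsement; every constant below is an assumption chosen only to show that some population ratio and disgust level tips the naive sum.

```python
# Toy version of Williams' A/B thought experiment. All numbers are
# invented; the point is only that a naive summing of utilities can be
# made to favour relocation by cranking up |A| or the disgust level.

def status_quo_utility(n_a: int, disgust: float) -> float:
    # Each member of A suffers `disgust` units of disutility per year.
    return -disgust * n_a

def relocation_utility(n_b: int, harm: float) -> float:
    # Each member of B suffers `harm` units of disutility from relocation.
    return -harm * n_b

# A million members of A with even mild disgust (0.1) versus a thousand
# members of B suffering severe relocation harm (50.0):
naive_calculus_prefers_relocation = (
    status_quo_utility(1_000_000, 0.1) < relocation_utility(1_000, 50.0)
)
print(naive_calculus_prefers_relocation)  # True
```

This is precisely why the thought experiment bites: no matter how severe the per-person relocation harm, a large enough majority with mild enough disgust flips the naive sum.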
With Williams’ recent reminder that most remoter effects are weaker than we like to think still ringing in my ears, I felt fairly trapped by this dilemma. There are clear remoter effects here: you may lose the ability to advocate against this sort of ethnic cleansing in other countries. Successful, minimally condemned ethnic cleansing could even encourage copy-cats. In the real world, these might both be valid rejoinders, but for the purposes of this thought experiment, it’s clear these could be nullified (e.g. if we assume few other societies like this one and a large direct utility gain).
The only way out that Williams sees fit to offer us is an obvious trap. What if we claimed that the feelings of group A were entirely irrational and that they should just learn to live with them? Then we wouldn’t be stuck advocating for what is essentially ethnic cleansing. But humans are not rational actors. If we were to ignore all such irrational feelings, then utilitarianism would no longer be a pragmatic ethical system that interacts with the world as it is. Instead, it would involve us interacting with the world as we wish it to be.
Furthermore, it is always a dangerous game to discount other people’s feelings as irrational. The problem with the word irrational (in the vernacular, not utilitarian sense) is that no one really agrees on what is irrational. I have an intuitive sense of what is obviously irrational. But so, alas, do you. These senses may align in some regions (e.g. we both may view it as irrational to be angry because of a belief that the government is controlled by alien lizard-people), but not necessarily in others. For example, you may view my atheism as deeply irrational. I obviously do not.
Williams continues this critique to point out that much of the discomfort that comes from considering – or actually doing – things the utilitarian way comes from our moral intuitions. While Smart and I are content to discount these feelings, Williams is horrified at the thought. To view discomfort from moral intuitions as something outside yourself, as an unpleasant and irrational emotion to be avoided, is – to Williams – akin to losing all sense of moral identity.
This strikes me as more of a problem for rationalist philosophers. If you believe that morality can be rationally determined via the correct application of pure reason, then moral intuitions must be key to that task. From a materialist point of view though, moral intuitions are evolutionary baggage, not signifiers of something deeper.
Still, Williams made me realize that this left me vulnerable to the question “what is the purpose of having morality at all if you discount the feelings that engender morality in most people?”, a question I’m at a loss to answer well. All I can say (tautologically) is “it would be bad if there was no morality”; I like morality and want it to keep existing, but I can’t ground it in pure reason or empiricism; no stone tablets have come from the world. Religions are replete with stone tablets and justifications for morality, but they come with metaphysical baggage that I don’t particularly want to carry. Besides, if there were a hell, utilitarians would have to destroy it.
I honestly feel like a lot of my disagreement with Williams comes from our differing positions on the intuitive/systematizing axis. Williams has an intuitive, fluid, and difficult to articulate sense of ethics that isn’t necessarily transferable or even explainable. I have a system that seems workable and like it will lead to better outcomes. But it’s a system and it does have weird, unintuitive corner cases.
Williams talks about how integrity is a key moral stance (I think motivated by his insistence on authenticity). I agree with him as to the instrumental utility of integrity (people won’t want to work with you or help you if you’re an ass or unreliable). But I can’t ascribe integrity some sort of quasi-metaphysical importance or treat it as a terminal value in itself.
In the section on integrity, Williams comes back to negative responsibility. I do really respect Williams’ ability to pepper his work with interesting philosophical observations. When talking about negative responsibility, he mentions that most moral systems acknowledge some difference between allowing an action to happen and causing it yourself.
Williams believes the moral difference between action and inaction is conceptually important, “but it is unclear, both in itself and in its moral applications, and the unclarities are of a kind which precisely cause it to give way when, in very difficult cases, weight has to be put on it”. I am jealous three times over at this line, first at the crystal-clear metaphor, second at the broadly applicable thought underlying the metaphor, and third at the precision of language with which Williams pulls it off.
(I found Williams a less consistent writer than Smart. Smart wrote his entire essay in a tone of affable explanation and managed to inject a shocking amount of simplicity into a complicated subject. Williams frequently confused me – which I feel comfortable blaming at least in part on our vastly different axioms – but he was capable of shockingly resonant turns of phrase.)
I doubt Williams would be comfortable coming down either way on inaction’s equivalence to action. To the great humanist, it must ultimately (I assume) come down to the individual humans and what they authentically believed. Williams here is scoffing at the very idea of trying to systematize this most slippery of distinctions.
For utilitarians, the absence or presence of a distinction is key to figuring out what they must do. Utilitarianism can imply “a boundless obligation… to improve the world”. How a utilitarian undertakes this general project (of utility maximization) will be a function of how she can affect the world, but it cannot, to Williams, ever be the only project anyone undertakes. If it were the only project, underlain by no other projects, then it would, in Williams’ words, be “vacuous”.
The utilitarian can argue that her general project will not be the only project, because most people aren’t utilitarian and therefore have their own projects going on. Of course, this only gets us so far. Does this imply that the utilitarian should not seek to convince too many others of her philosophy?
What does it even mean for the general utilitarian project to be vacuous? As best I can tell, what Williams means is that if everyone were utilitarian, we’d all care about maximally increasing the utility of the world, but either be clueless where to start or else constantly tripping over each other (imagine, if you can, millions of people going to sub-Saharan Africa to distribute bed nets, all at the same time). The first order projects that Williams believes must underlie a more general project are things like spending time with friends, or making your family happy. Williams also believes that it might be very difficult for anyone to be happy without some of these more personal projects.
I would suggest that what each utilitarian should do is what they are best suited for. But I’m not sure if this is coherent without some coordinating body (i.e. a god) ensuring that people are well distributed for all of the projects that need doing. I can also suppose that most people can’t go that far on willpower. That is to say, there are few people who are actually psychologically capable of working to improve the world in a way they don’t enjoy. I’m not sure I have the best answer here, but my current internal justification leans much more on the second answer than the first.
Which is another way of saying that I agree with Williams; I think utilitarianism would be self-defeating if it suggested that the only project anyone should undertake is improving the world generally. I think a salient difference between us is that he seems to think utilitarianism might imply that people should only work on improving the world generally, whereas I do not.
This discussion of projects leads to Williams talking about the hedonic paradox (the observation that you cannot become happy by seeking out pleasures), although Williams doesn’t reference it by name. Here Williams comes dangerously close to a very toxic interpretation of the hedonic paradox.
Williams believes that happiness comes from a variety of projects, not all of which are undertaken for the good of others or even because they’re particularly fun. He points out that few of these projects, if any, are the direct pursuit of happiness and that happiness seems to involve something beyond seeking it. This is all conceptually well and good, but I think it makes happiness seem too mysterious.
I wasted years of my life believing that the hedonic paradox meant that I couldn’t find happiness directly. I thought if I did the things I was supposed to do, even if they made me miserable, I’d find happiness eventually. Whenever I thought of rearranging my life to put my happiness first, I was reminded of the hedonic paradox and desisted. That was all bullshit. You can figure out what activities make you happy and do more of those and be happier.
There is a wide gulf between the hedonic paradox as originally framed (which is purely an observation about pleasures of the flesh) and the hedonic paradox as sometimes used by philosophers (which treats happiness as inherently fleeting and mysterious). I’ve seen plenty of evidence for the first, but absolutely none for the second. With his critique here, I think Williams is arguably shading into the second definition.
This has important implications for the utilitarian. We can agree that for many people, the way to most increase their happiness isn’t to get them blissed out on food, sex, and drugs, without this implying that we will have no opportunities to improve the general happiness. First, we can increase happiness by attacking the sources of misery. Second, we can set up robust institutions that are conducive to happiness. A utilitarian urban planner would perhaps give just as much thought to ensuring there are places where communities can meet and form as she would to ensuring that no one would be forced to live in squalor.
Here’s where Williams gets twisty though. He wanted us to come to the conclusion that a variety of personal projects are necessary for happiness so that he could remind us that utilitarianism’s concept of negative responsibility puts great pressure on an agent not to have her own personal projects beyond the maximization of global happiness. The argument here seems to be (not for the first time) that utilitarianism is self-defeating because it will make everyone miserable if everyone is a utilitarian.
Smart tried to short-circuit arguments like this by pointing out that he wasn’t attempting to “prove” anything about the superiority of utilitarianism, simply presenting it as an ethical system that might be more attractive if it was better understood. Faced with Williams’ point here, I believe that Smart would say that he doesn’t expect everyone to become utilitarian and that those who do become utilitarian (and stay utilitarian) are those most likely to have important personal projects that are generally beneficent.
I have the pleasure of reading the blogs and Facebook posts of many prominent (for certain unusual values of prominent) utilitarians. They all seem to be enjoying what they do. These are people who enjoy research, or organizing, or presenting, or thought experiments and have found ways to put these vocations to use in the general utilitarian project. Or people who find that they get along well with utilitarians and therefore steer their career to be surrounded by them. This is basically finding ikigai within the context of utilitarian responsibilities.
Saying that utilitarianism will never be popular outside of those suited for it means accepting we don’t have a universal ethical solution. This is, I think, very pragmatic. It also doesn’t rule out utilitarians looking for ways we can encourage people to be more utilitarian. To slightly modify a phrase that utilitarian animal rights activists use: the best utilitarianism is the type you can stick with; it’s better to be utilitarian 95% of the time than it is to be utilitarian 100% of the time – until you get burnt out and give it up forever.
I would also like to add a criticism of Williams’ complaint that utilitarian actions are overly determined by the actions of others. Namely, the status quo certainly isn’t perfect. If we are to reject action because it is not among the projects we would most like to be doing, then we are tacitly endorsing the status quo. Moral decisions cannot be made in a vacuum and the terrain in which we must make moral decisions today is one marked by horrendous suffering, inequality, and unfairness.
The next two sections of Williams’ essay were the most difficult to parse, but also the most rewarding. They deal with the interplay between calculating utilities and utilitarianism and question the extent to which utilitarianism is practical outside of appealing to the idea of total utility. That is to say, they ask if the unique utilitarian ethical frame can, under practical conditions, have practical effects.
To get to the meat of Williams’ points, I had to wade through what at times felt like word games. All of the things he builds up to throughout these lengthy sections begin with a premise made up of two points that Williams thinks are implied by Smart’s essay.
1. All utilities should be assessed in terms of acts. If we’re talking about rules, governments, or dispositions, their utility stems from the acts they either engender or prevent.
2. To say that a rule (as an example) has any effect at all, we must say that it results in some change in acts. In Williams’ words: “the total utility effect of a rule’s obtaining must be cashable in terms of the effects of acts.”
Together, (1) and (2) make up what Williams calls the “act-adequacy” premise. If the premise is true, there must be no surplus source of utility outside of acts and, as Smart said, rule utilitarianism should (if it is truly concerned with optimific outcomes) collapse to act utilitarianism. This is all well and good when comparing systems as tools of total assessment (e.g. when we take the universe wide view that I criticized Smart for hiding in), but Williams is first interested in how this causes rule and act utilitarianism to relate to actions.
If you asked an act utilitarian and a rule utilitarian “what makes that action right?”, they would give different answers. The act utilitarian would say that it is right if it maximizes utility, but the rule utilitarian would say it is right if it is in accordance with rules that tend to maximize utility. Interestingly, if the act-adequacy premise is true, then both act and rule utilitarians would agree as to why certain rules or dispositions are desirable, namely, that the actions resulting from those rules or dispositions tend to maximize utility.
(Williams also points out that rules, especially formal rules, may derive utility from sources other than just actions following the rule. Other sources of utility include: explaining the rule, thinking about the rule, avoiding the rule, or even breaking the rule.)
But what do we do when actually faced with the actions that follow from a rule or disposition? Smart has already pointed out that we should praise or blame based on the utility of the praise/blame, not on the rightness or wrongness of the action we might be praising.
In Williams’ view, there are two problems with this. First, it is not a very open system. If you knew someone was praising or blaming you out of a desire to manipulate your future actions and not in direct relation to their actual opinion of your past actions, you might be less likely to accept that praise or blame. Therefore, it could very well be necessary for the utilitarian to hide why acts are being called good or bad (and therefore the reasons why they praise or blame).
The second problem is how this suggests utilitarians should stand with themselves. Williams acknowledges that utilitarians in general try not to cry over spilt milk (“[this] carries the characteristically utilitarian thought that anything you might want to cry over is, like milk, replaceable”), but argues that utilitarianism replaces the question of “did I do the right thing?” with “what is the right thing to do?” in a way that may not be conducive to virtuous thought.
(Would a utilitarian Judas have lived to old age contentedly, happy that he had played a role in humankind’s eternal salvation?)
The answer to “what is the right thing to do?” is of course (to the utilitarian) “that which has the best consequences”. Except “what is the right thing to do?” isn’t actually the right question to ask if you’re truly concerned with the best consequences. In that case, the question is “if asking this question is the right thing to do, what actions have the best consequences?”
Remember, Smart tried to claim that utilitarianism was to only be used for deliberative actions. But it is unclear which actions are the right ones to take as deliberative, especially a priori. Sometimes you will waste time deliberating, time that in the optimal case you would have spent on good works. Other times, you will jump into acting and do the wrong thing.
The difference between act (direct) and rule (indirect) utilitarianism therefore comes down to a question of motivation vs. justification. Can a direct utilitarian use “the greatest total good” as a motivation if they do not know if even asking the question “what will lead to the greatest total good?” will lead to it? Can it only ever be a justification? The indirect utilitarian can be motivated by following a rule and justify her actions by claiming that generally followed, the rule leads to the greatest good, but it is unclear what recourse (to any direct motivation for a specific action) the direct utilitarian has.
Essentially, adopting act utilitarianism requires you to accept that because you have accepted act utilitarianism you will sometimes do the wrong thing. It might be that you think that you have a fairly good rule of thumb for deliberating, such that this is still the best of your options to take (and that would be my defense), but there is something deeply unsettling and somewhat paradoxical about this consequence.
Williams makes it clear that the bad outcomes here aren’t just loss of an agent’s time. This is similar in principle to how we calculate the total utility of promulgating a rule. We accept that the total effects of the promulgation must include the utility or disutility that stems from avoiding it or breaking it, in addition to the utility or disutility of following it. When looking at the costs of deliberation, we should also include the disutility that will sometimes arise when deliberate action turns out less optimific than acting spontaneously, in accordance with our dispositions or moral intuitions, would have been.
This is all in the case where the act-adequacy premise is true. If it isn’t, the situation is more complex. What if some important utility of actions comes from the mood they’re done in, or in them being done spontaneously? Moods may be engineered, but it is exceedingly hard to engineer spontaneity. If the act-adequacy premise is false, then it may not hold that the (utilitarian) best world is one in which right acts are maximized. In the absence of the act-adequacy premise it is possible (although not necessarily likely) that the maximally happy world is one in which few people are motivated by utilitarian concerns.
Even if the act-adequacy premise holds, we may be unable to know if our actions are at all right or wrong (again complicating the question of motivation).
Williams presents a thought experiment to demonstrate this point. Imagine a utilitarian society that noticed its younger members were liable to stray from the path of utilitarianism. This society might set up a Truman Show-esque “reservation” of non-utilitarians, with the worst consequences of their non-utilitarian morality broadcast for all to see. The youth wouldn’t stray and the utility of the society would be increased (for now, let’s beg the question of utilitarianism as a lived philosophy being optimific).
Here, the actions of the non-utilitarian holdouts would be right; on this both utilitarians (looking from a far enough remove) and the subjects themselves would agree. But this whole thing only works if the viewers think (incorrectly) that the actions they are seeing are wrong.
From the global utilitarian perspective, it might even be wrong for any of the holdouts to become utilitarian (even if utilitarianism was generally the best ethical system). If the number of viewers is large enough and the effect of one fewer irrational holdout is strong enough (this is a thought experiment, so we can fiddle around with the numbers such that this is indeed true), the conversion of a hold-out to utilitarianism would be really bad.
Basically, it seems possible for there to be a large difference between the correct action as chosen by the individual utilitarian with all the knowledge she has and the correct action as chosen from the perspective of an omniscient observer. From the “total assessment” perspective, it is even possible that it would be best that there be no utilitarians.
Williams points out that many of the qualities we value and derive happiness from (stubborn grit, loyalty, bravery, honour) are not well aligned with utilitarianism. When we talked about ethnic cleansing earlier, we acknowledged that utilitarianism cannot distinguish between preferences people have and the preferences people should have; both are equally valid. With all that said, there’s a risk of resolving the tension between non-utilitarian preferences and the joy these preferences can bring people by trying to shape the world not towards maximum happiness, but towards the happiness easiest to measure and most comfortable to utilitarians.
Utilitarianism could also lead to disutility because of the game theoretic consequences. On international projects or projects between large groups of people, sanctioning other actors must always be an option. Without sanctioning, the risk of defection is simply too high in many practical cases. But utilitarians are uniquely compelled to sanction (or else surrender).
If there is another group acting in an uncooperative or anti-utilitarian manner, the utilitarians must apply the least terrible sanction that will still be effective (as the utility of those they’re sanctioning still matters). The other group will of course know this and have every incentive to commit to making any conflict arising from the sanction so terrible as to make any sanctioning wrong from a utilitarian point of view. Utilitarians now must call the bluff (and risk horrible escalating conflict), or else abandon the endeavour.
This is in essence a prisoner’s dilemma. If the non-utilitarians carry on without being sanctioned, or if they change their behaviour in response to sanctions without escalation, everyone will be better off (than in the alternative). But if utilitarians call the bluff and find it was not a bluff, then the results could be catastrophic.
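The incentive to commit to terrible escalation can be shown with a toy expected-utility calculation. All payoffs and probabilities below are invented assumptions; only their ordering matters.

```python
# Toy model of the sanctioning stand-off. The other group commits to an
# escalation so costly that even a high probability it is bluffing
# leaves sanctioning worse than abandoning the endeavour.
U_COMPLY = 5        # sanctioned group backs down
U_CONFLICT = -200   # bluff called: escalating conflict
U_SURRENDER = -10   # utilitarians abandon the endeavour

def sanction_value(p_bluff: float) -> float:
    """Expected utility of sanctioning, given the chance the threat is a bluff."""
    return p_bluff * U_COMPLY + (1 - p_bluff) * U_CONFLICT

# Even at a 90% chance the threat is a bluff, sanctioning is worse than
# surrender under these payoffs: 0.9 * 5 + 0.1 * (-200) = -15.5 < -10.
print(sanction_value(0.9))
```

By making `U_CONFLICT` sufficiently terrible, the other group forces even a near-certain bluff below the surrender line, which is exactly the commitment strategy Williams describes.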
Williams seems to believe that utilitarians will never include an adequate fudge factor for the dangers of mutual defection. He doesn’t suggest pacifism as an alternative, but he does believe that the threshold for violent sanctioning should lie far beyond where he assesses the simple utilitarian threshold to lie.
This position might be more of a historical one, in reaction to the efficiency-, order-, and domination-obsessed Soviet Communism (and its Western fellow travelers), which tended towards utilitarian justifications. All of the utilitarians I know are committed classical liberals (indeed, it sometimes seems to me that only utilitarians are classical liberals these days). It’s unclear if Williams’ criticism can be meaningfully applied to utilitarians who have internalized the severe detriments of escalating violence.
While it seems possible to produce a thought experiment where even such committed second order utilitarians would use the wrong amount of violence or sanction too early, this seems unlikely to come up in a practical context – especially considering that many of the groups most keen on using violence early and often these days aren’t in fact utilitarian. Instead it’s members of both the extreme left and right, who have independently – in an amusing case of horseshoe theory – adopted a morality based around defending their tribe at all costs. This sort of highly local morality is anathema to utilitarians.
Williams didn’t anticipate this shift, though I can’t see why he shouldn’t have. Utilitarians are ever pragmatic and (should) understand that utilitarianism isn’t served by starting horrendous wars willy-nilly.
Then again, perhaps this is another harbinger of what Williams calls “utilitarianism ushering itself from the scene”. He believes that the practical problems of utilitarian ethics (from the perspective of an agent) will move utilitarianism more and more towards a system of total assessment. Here utilitarianism may demand certain things in the way of dispositions or virtues and certainly it will ask that the utility of the world be ever increased, but it will lose its distinctive character as a system that suggests actions be chosen in such a way as to maximize utility.
Williams calls this the transcendental viewpoint and pithily asks “if… utilitarianism has to vanish from making any distinctive mark in the world, being left only with the total assessment from the transcendental standpoint – then I leave it for discussion whether that shows that utilitarianism is unacceptable or merely that no one ought to accept it.”
This, I think, ignores the possibility that it might become easier in the future to calculate the utility of certain actions. The results of actions are inherently chaotic and difficult to judge, but then, so is the weather. Weather prediction has been made tractable by the application of vast computational power. Why not morality? Certainly, this can’t be impossible to envision. Iain M. Banks wrote a whole series of books about it!
Of course, if we wish to be utilitarian on a societal level, we must currently do so without the support of godlike AI. Societal decision making is what utilitarianism was invented for in the first place. Here it was attractive because it is minimally committed – it has no elaborate theological or philosophical commitments buttressing it, unlike contemporaneous systems (like Lockean natural law). There is something intuitive about the suggestion that a government should only be concerned for the welfare of the governed.
Sure, utilitarianism makes no demands on secondary principles, Williams writes, but it is extraordinarily demanding when it comes to empirical information. Utilitarianism requires clear, comprehensible, and non-cyclic preferences. To any glib rejoinders about mere implementation details, Williams has this to say:
[These problems are] seen in the light of a technical or practical difficulty and utilitarianism appeals to a frame of mind in which technical difficulty, even insuperable technical difficulty, is preferable to moral unclarity, no doubt because it is less alarming.
Williams suggests that the simplicity of utilitarianism isn’t a virtue, only indicative of “how little of the world’s luggage it is prepared to pick up”. By being immune to concerns of justice or fairness (except insofar as they are instrumentally useful to utilitarian ends), Williams believes that utilitarianism fails at many of the tasks that people desire from a government.
Personally, I’m not so sure a government commitment to fairness or justice is at all illuminating. There are currently at least two competing (and mutually exclusive) definitions of both fairness and justice in political discourse.
Should fairness be about giving everyone the same things? Or should it be about giving everyone the tools they need to have the same shot at meaningful (of course noting that meaningful is a societal construct) outcomes? Should justice mean taking into account mitigating factors and aiming for reconciliation? Or should it mean doing whatever is necessary to make recompense to the victim?
It is too easy to use fairness or justice as a sword without stopping to assess who it is aimed at and what the consequences of that aim are (says the committed consequentialist). Fairness and justice are meaty topics that deserve better than to be thrown around as a platitudinous counterargument to utilitarianism.
A much better critique of utilitarian government can be made by imagining how such a government would respond to non-utilitarian concerns. Would it ignore them? Or would it seek to direct its citizens to have only non-utilitarian concerns? The latter idea seems practically impossible. The former raises important questions.
Imagine a government that is minimally responsive to non-utilitarian concerns. It primarily concerns itself with maximizing utility, but accepts the occasional non-utilitarian decision as the cost it must pay to remain in power (presume that the opposition is not utilitarian and would be very responsive to non-utilitarian concerns in a way that would reduce the global utility). This government must necessarily look very different to the utilitarian elite, who understand what is going on, than to the masses, who might be quite upset that the government feels obligated to ignore many of their dearly held concerns.
Could such an arrangement exist with a free media? With free elections? Democracies are notably less corrupt than autocracies, so there are significant advantages to having free elections and free media. But how, if those exist, does the utilitarian government propose to keep its secrets hidden from the population? And if the government was successful, how could it respect its citizens, so duped?
In addition to all that, there is the problem of calculating how to satisfy people’s preferences. Williams identifies three problems here:
How do you measure individual welfare?
To what extent is welfare comparative?
How do you develop the aggregate social preference given the answers to the preceding two questions?
Williams seems to suggest that a naïve utilitarian approach involves what I think is best summed up in a sick parody of Marx: from each according to how little they’ll miss it, to each according to how much they desire it. Surely there cannot be a worse incentive structure imaginable than the one naïve utilitarianism suggests?
When dealing with preferences, it is also the case that utilitarianism makes no distinction between fixing inequitable distributions that cause discontent and – as observed in America – convincing those affected by inequitable distributions not to feel discontent.
More problems arise around substitution or compensation. It may be more optimific for a roadway to be built one way than another and it may be more optimific for compensation to be offered to those who are affected, but it is unclear that the compensation will be at all worth it for those affected (to claim it would be, Williams declares, is “simply an extension of the dogma that every man has his price”). This is certainly hard for me to think about, even (or perhaps especially) because the common utilitarian response is a shrug – global utility must be maximized, after all.
Utilitarianism is about trade-offs. And some people have views which they hold to be beyond all trade-off. It is even possible for happiness to be buttressed by or rest entirely upon principles – principles that when dearly and truly held cannot be traded-off against. Certainly, utilitarians can attempt to work around this – if such people are a minority, they will be happily trammelled by a utilitarian majority. But it is unclear what a utilitarian government could do in such a case, where the majority of its population is “afflicted” with deeply held non-utilitarian principles.
Williams sums this up as:
Perhaps humanity is not yet domesticated enough to confine itself to preferences which utilitarianism can handle without contradiction. If so, perhaps utilitarianism should lope off from an unprepared mankind to deal with problems it finds more tractable – such as that presented by Smart… of a world which consists only of a solitary deluded sadist.
Finally, there’s the problem of people being terrible judges of what they want, or simply not understanding the effects of their preferences (as the Americans who rely on the ACA but want Obamacare to be repealed may find out). It is certainly hard to walk the line between respecting preferences people would have if they were better informed or truly understood the consequences of their desires and the common (leftist?) fallacy of assuming that everyone who held all of the information you have must necessarily have the same beliefs as you.
All of this combines to make Williams view utilitarianism as dangerously irresponsible as a system of public decision making. It assumes that preferences exist, that the method of collecting them doesn’t fail to capture meaningful preferences, that these preferences would be vindicated if implemented, and that there’s a way to trade-off among all preferences.
To the potential utilitarian rejoinder that half a loaf is better than none, he points out that a partial version of utilitarianism is very vulnerable to the streetlight effect. It might be used where it can be and therefore act to legitimize – as “real” – concerns in the areas where it can be used and delegitimize those where it is unsuitable. This can easily lead to the McNamara fallacy; deliberate ignorance of everything that cannot be quantified:
The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.
— Daniel Yankelovich “Corporate Priorities: A continuing study of the new demands on business.” (1972)
This isn’t even to mention something that every serious student of economics knows: that when dealing with complicated, idealized systems, it is not necessarily the non-ideal system that is closest to the ideal (out of all possible non-ideal systems) that has the most benefits of the ideal. Economists call this the “theory of the second best”. Perhaps ethicists might call it “common sense” when applied to their domain?
Williams ultimately doubts that systematic thought is at all capable of dealing with the myriad complexities of political (and moral) life. He describes utilitarianism as “having too few thoughts and feelings to match the world as it really is”.
I disagree. Utilitarianism is hard, certainly. We do not agree on what happiness is, or how to determine which actions will most likely bring it, fine. Much of this comes from our messy inbuilt intuitions, intuitions that are not suited for the world as it now is. If utilitarianism is simple minded, surely every other moral system (or lack of system) must be as well.
In many ways, Williams did shake my faith in utilitarianism – making this an effective and worthwhile essay. He taught me to be fearful of eliminating from consideration all joys but those that the utilitarian can track. He drove me to question how one can advocate for any ethical system at all, denied the twin crutches of rationalism and theology. And he further shook my faith in individuals being able to do most aspects of the utilitarian moral calculus. I think I’ll have more to say on that last point in the future.
But by their actions you shall know the righteous. Utilitarians are currently at the forefront of global poverty reduction, disease eradication, animal suffering alleviation, and existential risk mitigation. What complexities of the world has every other ethical system missed to leave these critical tasks largely to utilitarians?
Williams gave me no answer to this. For all his beliefs that utilitarianism will have dire consequences when implemented, he has no proof to hand. And ultimately, consequences are what you need to convince a consequentialist.
There is perennial debate in Canada about whether we should allow a “two-tiered” healthcare system. The debate is a bit confusing – by many measures we already have a two-tiered system, with private clinics and private insurance – but ultimately hinges on the ability of doctors to mix fees. Currently it is illegal for a doctor to charge anything on top of the provincially mandated fee structure. If the province is willing to pay $3,000 for a procedure, you cannot charge $5,000 and ask your patients (or their insurance) to make up the difference.
On first-order grounds, allowing doctors to mix fees might seem harmless. But I oppose it because I’m pretty sure I know what the second order effects would be.
It is a truth universally acknowledged that an industry, temporarily in possession of good fortune, must be in want of a really good lobbyist to make that possession permanent.
This is how we end up with incredibly detailed tax and regulatory law. There are a whole bunch of exceptions and special cases, vigorously lobbied for by special interest groups. These make us all a bit worse off, but each exception makes a certain person or small group of people very much better off. They care far more about preserving their loophole or unfair advantage than we do about getting rid of it, so each petty annoyance persists. Except, the annoyances aren’t so petty anymore when there are hundreds or thousands of them.
I dearly don’t want to add any more “petty” annoyances to healthcare.
As soon as we allow doctors to mix public funding with direct payments from patients or insurance, we’ll unleash a storm of lobbying. Everything from favourable tax treatment for clinics (we don’t charge HST on provincial care, it’s unfair to charge it on their added fees!) to tax breaks for insurance, to inflated fees for private clinics to handle some public cases will be on the table.
If the lobbyists do their job well, the private system will perch like a mosquito on the public system, sucking tax dollars from the public purse and using them to subsidize private care. This offends me on a visceral level, sure. But it’s also bad policy. Healthcare costs are already outpacing general inflation; we should not risk throwing fuel on that fire. We might end up with the same sort of cost disease as America.
If we can keep healthcare relatively simple, we can keep it relatively cheap. One of the most pernicious things about cost disease is that it mainly affects things the government pays for. Because of this, the government has to collect more and more tax dollars just to provide the same level of service. As long as healthcare, education, and real estate are getting more expensive in real (inflation adjusted) terms, we have to choose between raising taxes or making do with less service. When there are two systems, it’s clear that the users of the private system (and their lobbyists) would prefer decreased public services to increased taxes.
When there is only the public system, we force the lion’s share of those who want better care to lobby for improvements to the public system. This is true not just in healthcare; private schools are uncommon in most Canadian provinces. Want better schools for your children? Try and improve the public schools.
There is always the option to lobby for subsidies for private systems, but this has generally been unproductive when the public system is effective and entrenched. Two-tiered healthcare is back in the news because of a court case, not because any provincial government is committing political suicide by suggesting it. When it comes to schools, offering to subsidize private schools may have played a role in dooming John Tory’s bid for the premiership of Ontario in 2007.
I wonder if there isn’t some sort of critical mass thing that can happen. When the public system (be it healthcare, education, or anything else) is generally good, all but the wealthiest will use it. The few who use private systems won’t have the lobbying clout to bring about any specific advantages for their system, so there will be a stable equilibrium. Most people will use the public system and oppose changes to it, while the few who don’t won’t waste their time lobbying for changes (given the lack of any appetite for changes among the broader public).
If the public system gets substantially worse, those with the means to will leave the public system for the private. This would explain why generally liberal B.C. (with its decade of nasty labour disputes between the government and teachers) has much higher enrollment in private schools than conservative and free-market-worshipping Alberta (which has poured decades of oil money largesse into its schools).
Of course, the more people that use the private system, the more lobbying clout it gains. This model would predict that B.C. will begin to see substantial government concessions to private schools (although this could be confounded if the recent regime change proves durable). This model would also predict that if we open even a small crack in the unified public healthcare system, we’ll quickly see a private system emerge which will immediately lobby to be underwritten with public dollars.
From this point of view, one of the best things about public systems is that they force the best off to lobby for the worst off. Catch-all public systems yoke the interests of broad parts of society together, increasing access to important services.
If this model is true, then getting healthcare and education right are just the table stakes. It is vitally important that the provinces institute uniform rules and subsidies for embryo selection and future genetic engineering technologies. Because if they don’t, then in the words of Professor Jennifer Doudna, we will “transcribe our societies’ financial inequality into our genetic code”.
Both IVF and genetic screening are becoming easier and quicker. According to Gwern, it’s already likely a net positive to screen embryos for traits associated with higher later earnings (he lists seven currently screenable traits: IQ, height, BMI, and lack of diabetes, ADHD, bipolar disorder, and schizophrenia), with a net lifetime payoff estimated at $14,653. Unfortunately, this payoff is only available to parents who can afford the IVF and the screening.
Recently, Ontario began covering one round of IVF for couples unable to conceive. This specifically doesn’t include any genetic testing or pre-implantation diagnosis, which means that if we see a drop in heritable genetic diseases in the next generation, that drop will only be among the better off. Hell, even though Ontario already “covers” one round of IVF, they don’t cover any of the necessary fertility drugs, which means that IVF costs about $5,000 out of pocket. This is already outside the reach of many Ontarians.
Not a lot of people are running analyses like Gwern’s. Yet. We still have time to fix the coverage gap for IVF and put in place a publicly funded embryo selection program. If we wait too long here, we’ll be caught flat-footed. The most effective way for rich people to get the reproductive services they will want would be lobbying for tax breaks and help for their private system, not for the improvement of a good-enough public system.
There’s a risk here of course. IVF isn’t particularly fun. It might be that the people with the longest time horizons (who are perhaps likely to be advantaged in other ways) will be the only ones who would use a public embryo selection system. This would have the effect of subsidizing embryo selection for whichever groups have the longest time horizons and the most ability to endure short-term discomfort for long term payoff.
But anything less than a public option on embryo selection makes entrenching social divides as genetic divides almost inevitable. We could ban all non-medical embryo selection, which, as Gwern points out, would just move it to China. Or Singapore. Or even America. This would shrink the problem, in that fewer people would have access to embryo selection, but wouldn’t stop it altogether.
Embryo selection is just the beginning here too. Soon enough, we’ll see treatments for genetic diseases using CRISPR. Hot on the heels of that, we’ll see enhancements. Well, we ostensibly won’t in Canada, at least without some amendments to the Assisted Human Reproduction Act, which bans changes to the DNA of germline cells. I say “ostensibly” because it’s the height of naivety to assume that you can end demand simply by banning something, but then, that’s Canada for you.
The advent of CRISPR should usher in a sudden surge in genetically engineered humans. Parents will optimize for intelligence, height, and lower disease risk/load. It will be legal somewhere and therefore some Canadians will do it. If we have a legal, public system in Canada, then it will be available to anyone who wants it. If we don’t, then it will become very hard for the children of normal Canadians to compete with the children of our elites.
Throughout this post, I’ve assumed cost is no object. That’s probably a bad assumption. We’re talking about horrendously expensive voluntary medical procedures here. Gwern gives the cost of an IVF cycle with embryo selection at $22,000. There are 393,000 babies born in Canada every year. If this technology were both subsidized and adopted by 10% of all parents seeking to conceive, the total cost would be something like $864 million, or an increase in total healthcare spending of about 0.4%. Given that healthcare spending is allowed to grow by 3% per year, this would eat up more than 10% of the total yearly increase.
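For the curious, the arithmetic above can be checked with a quick back-of-envelope script. The only number not taken directly from this post is total healthcare spending, which I infer from the stated 0.4% figure:

```python
# Back-of-envelope check of the subsidy cost figures above.
cost_per_cycle = 22_000     # Gwern's estimate for IVF plus embryo selection
births_per_year = 393_000   # annual births in Canada
uptake = 0.10               # assumed adoption by 10% of prospective parents

total_cost = cost_per_cycle * births_per_year * uptake
print(f"Total cost: ${total_cost / 1e6:.1f} million")  # ~$864.6 million

# Inferring total healthcare spending from the 0.4% figure,
# a 3% annual increase comes to roughly $6.5 billion.
total_spending = total_cost / 0.004
yearly_increase = total_spending * 0.03
print(f"Share of yearly increase: {total_cost / yearly_increase:.0%}")  # ~13%
```

Dividing the uptake by ten gives the lottery figure of roughly $86 million discussed below.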
I’m not holding my breath for that sort of new spending on reproductive medicine. A more practical system would probably be a lottery, with enough spots for 1% of prospective parents. That has a more reasonable price tag of $86.4 million. While they’re at it, the government could start paying surrogates, egg donors, and sperm donors and institute a similar lottery there. I can dream about Canada having a functional fertility services industry, right?
A lottery isn’t my preferred solution. Wealthy people who put their name in and aren’t drawn will still go elsewhere. But it could help with the lobbying problem. A lottery establishes a plausible path towards a broader system, which people would at least consider lobbying to expand. It won’t capture everyone. It might not even capture a majority. But if an expanded public system is the most palatable system politically, it might just win in the long run.
If you take just one thing from this post, I want it to be “it’s really important to have good public systems, so that lobbying effort is focused on improving those systems”. If you have room in your mind for another, it should be “having a public embryo selection and genetic engineering program in place is very important if we don’t want social stratification to become much more permanent”.
In this post, I’m talking about industries where there is either a clear need to serve the public good, a market failure, or both. In these cases, “use markets to lower prices and increase services” is an unappealing alternative.
This would also predict that America, with its cluster-fuck of a public school system, would have generally higher rates of private schooling than neighbouring (and better performing on standardized tests) Canada. This is true – ten percent of American children are in private schools, compared to eight percent of Canadians. I think there is a smaller gap between the two than there otherwise might be, due to the extreme heterogeneity of American schooling. That is to say that Canadian public schools might be better than American public schools on average, but everything I’ve heard suggests that the standard deviation is much higher in America. Well off students going to good public schools may account for why America’s private school enrollment isn’t higher.
This number will get higher and higher as we better understand the genetic determinants of IQ.
This bill could perhaps more truthfully be called the No Assisted Human Reproduction Act. In addition to banning germline genetic engineering, it also bans any paid surrogacy, egg donation, or sperm donation. This had the predictable effect of inconveniencing the wealthy not at all, while making it impossible for anyone else to find any surrogates, egg donors, or anonymous sperm donors. With a side-helping of encouraging surrogacy in countries where surrogates have the fewest legal protections (remember, my whole thesis here is that if you don’t give people a good pro-social option, they often optimize for maximum personal gain).
Utilitarianism: For and Against is an interesting little book. It consists of back-to-back ~70 page essays, one in favour of utilitarianism and one opposed. As an overview, it’s hard to beat something like this. You don’t have to rely on one scholar to give you her (ostensibly fair and balanced) opinion; you get two articulate philosophers arguing their side as best they can. Fair and balanced is by necessity left as an exercise for the reader (honestly, it always is; here at least it’s explicit).
I’m going to cover the “for” side first. The “against” side will be in a later blog post. Both reviews are going to assume that you have some understanding of utilitarianism. If you don’t, go read my primer. Or be prepared to Google. I should also mention that I have no aspirations of being balanced myself. I’m a utilitarian; I had much more to disagree with on the “against” side than on the “for” side.
Professor J.J.C. Smart makes the arguments in favour of utilitarianism. According to his Wikipedia entry, he was known for “outsmarting” his opponents, that is to say, accepting the conclusions of their reductio ad absurdum arguments with nary a shrug. He was, I’ve gathered, not one for moral intuitions. His criticism of rule utilitarianism played a role in its decline and he was influential in raising the next crop of Australian utilitarians, among whom Peter Singer is counted. As near as I can tell, he was one of the more notable defenders of utilitarianism when this volume was published in 1971 (although much of his essay dates back a decade earlier).
Smart is emphatically not a rationalist (in the philosophical sense); he writes no “proof of utilitarianism” and denies that such a proof is even possible. Instead, Smart restricts himself to explaining how utilitarianism is an attractive ethical system for anyone possessed of general benevolence. Well, I’ll say “everyone”. The authors of this volume seem to be labouring under the delusion that only men have ethical dilemmas or the need for ethical systems. Neither one of them manages the ethicist’s coup of realizing that women might be viewed as full people at the remove of half a century from their time of writing (such a coup would perhaps have been strong evidence of the superiority of one philosophy over another).
A lot of Smart’s essay consists of showing how various different types of utilitarianism are all the same under the hood. I’ve termed these “collapses”, although “isomorphisms” might be a better term. There are six collapses in all.
The very first collapse put me in mind of the famous adage about ducks. If it walks like a duck, swims like a duck, and quacks like a duck, it is a duck. By the same token, if someone acts exactly how a utilitarian in their position and with their information would act, then it doesn’t matter if they are a utilitarian or not. From the point of view of an ethical system that cares only about consequences, they may as well be.
The next collapse deals with rule utilitarianism and may have a lot to do with its philosophical collapse. Smart points out that if you are avoiding “rule worship”, then you will face a quandary when you could break a rule in such a way as to gain more utility. Rule utilitarians sometimes claim that you just need rules with lots of exceptions and special cases. Smart points out that if you carry this through to its logical conclusion, you really are only left with one rule, the meta-rule of “maximize expected utility”. In this way, rule utilitarianism collapses into act utilitarianism.
Next into the compactor is the difference between ideal and hedonic utilitarians. Briefly, ideal utilitarians hold that some states of mind are inherently valuable (in a utilitarian sense), even if they aren’t particularly pleasant from the inside. “Better Socrates dissatisfied than a fool satisfied” is the rallying cry of ideal utilitarians. Hedonic utilitarians have no terminal values beyond happiness; they would gladly let almost the entirety of the human race wirehead.
Smart claims that while these differences are philosophically large, they are practically much less meaningful. Here Smart introduces the idea of the fecundity of a pleasure. A doctor taking joy (or grim satisfaction) in saving a life is a much more fecund pleasure than a gambler’s excitement at a good throw, because it brings about greater joy once you take into account everyone around the actor. Many of the other pleasures (like writing or other intellectual pursuits) that ideal utilitarians value are similarly fecund. They either lead to abatement of suffering (the intellectual pursuits of scientists) or to many people’s pleasure (the labour of the poet). Taking into account fecundity, it was better for Smart to write this essay than to wirehead himself, because many other people – like me – get to enjoy his writing and have fun thinking over the thorny issues he raises.
Smart could have stood to examine at greater length just why ideal utilitarians value the things they do. I think there’s a decent case to be made that societies figure out ways to value certain (likely fecund) pleasures all on their own, no philosophers required. It is not, I think, that ideal utilitarians have stumbled onto certain higher pleasures that they should coax their societies into valuing. Instead, their societies have inculcated them with a set of valued activities, which, due to cultural evolution, happen to line up well with fecund pleasures. This is why it feels difficult to argue with the list of pleasures ideal utilitarians proffer; it’s not that they’ve stumbled onto deep philosophical truths via reason alone, it’s that we have the same inculcations they do.
Beyond simple fecundity though, there is the fact that the choice between Socrates dissatisfied and a fool satisfied rarely comes up. Smart has a great line about this:
But even the most avid television addict probably enjoys solving practical problems connected with his car, his furniture, or his garden. However unintellectual he might be, he would certainly resist the suggestion that he should, if it were possible, change places with a contented sheep, or even a happy and lively dog.
This boils down to: ‘ideal utilitarians assume they’re a lot better than everyone else, what with their “philosophical pursuits”, but most people don’t want purely mindless pleasures’. Combined, these ideas of fecundity and hidden depths point to a vanishingly small gap between ideal and hedonistic utilitarians, especially compared to the gap between utilitarians and practitioners of other ethical systems.
After dealing with questions of how highly we should weigh some pleasures, Smart turns to address the idea of some pleasures not counting at all. Take, for example, the pleasure that a sadist takes in torturing a victim. Should we count this pleasure in our utilitarian moral calculus? Smart says yes, for reasons that again boil down to “certain pleasures being viewed as bad are an artifact of culture; no pleasure is intrinsically bad.”
(Note however that this isn’t the same thing as Smart condoning the torture. He would say that the torture is wrong because the pleasure the sadist gains from it cannot make up for the distress of the victim. Given that no one has ever found a real live utility monster, this seems a safe position to take.)
In service of this, Smart presents a thought experiment. Imagine a barren universe inhabited by a single sentient being. This sentient being wrongly believes that there are many other inhabitants of the universe being gruesomely tortured and takes great pleasure in this thought. Would the universe be better if the being didn’t derive pleasure from her misapprehension?
The answer here for both Smart and me is no (although I suspect many might disagree with us). Smart reasons (almost tautologically) that since there is no one for this being to hurt, her predilection for torture can’t hurt anyone. We are rightfully wary of people who unselfconsciously enjoy the thought of innocents being tortured because of what it says about what their hobbies might be. But if they cannot hurt anyone, their obsession is literally harmless. This bleak world would not be better served by its single sentient inhabitant quailing at the thought of the imaginary torture.
Of course, there’s a wide gap between the inhabitant curled up in a ball mourning the torture she wrongly believes to be ongoing and her simple indifference to it. It seems plausible that many people could consider her indifference preferable, even if they did not wish her to be sad. But imagine instead the difference between her being lonely and bored and her being satisfied and happy (leaving aside for a moment the torture). It is clear here which is the better universe. Given a way to move from the universe with a single bored being to the one with a single fulfilled being, shouldn’t we take it, given that the shift most literally harms no one?
This brings us to the distinction between intrinsically bad pleasures and extrinsically bad pleasures – the flip side of the intrinsically more valuable states of mind of the ideal utilitarian. Intrinsically bad pleasures are pleasures that for some rationalist or metaphysical reason are just wrong. Their rightness or wrongness must of course be vulnerable to attacks on the underlying logic or theology, but I can hardly embark on a survey of the common objections to every such underpinning; I haven’t the time. But many people have undertaken those critiques and many will in the future, making a belief in intrinsically bad pleasures a most unstable place to stand.
Extrinsically bad pleasures seem like a much safer proposition (and much more convenient to the utilitarian who wishes to keep their ethical system free of meta-physical or meta-ethical baggage). To say that a pleasure is extrinsically bad is simply to say that to enjoy it causes so much misery that it will practically never be moral to experience it. Similar to how I described ideal utilitarian values as heavily culturally influenced, I can’t help but feel that seeing some pleasures as intrinsically bad has to be the result of some cultural conditioning.
If we can accept that certain pleasures are not intrinsically good or ill – that the many pleasures thought of as intrinsically good or ill are thought so because of long cultural experience, positive or negative, with the consequences of seeking them out – then the position of utilitarians who believe that some pleasures cannot be counted in the plus column collapses to approximately the same as the position of those who hold that they can, even if neither accepts the position of the other. The utilitarian who refuses to believe in intrinsically bad pleasures should still condemn most of the same actions as one who does, because she knows that these pleasures will be outweighed by the pains they inflict on others (like the pain of the torture victim overwhelming the joy of the torturer).
There is a further advantage to holding that pleasures cannot be intrinsically wrong. If we accept the post-modernist adage that knowledge is created culturally, we will remember to be skeptical of the universality of our knowledge. That is to say, if you hold a list of intrinsically bad pleasures, it will probably not be an exhaustive list and there may be pleasures whose ill-effects you overlook because you are culturally conditioned to overlook them. A more thoughtful utilitarian who doesn’t take the short-cut of deeming some pleasures intrinsically bad can catch these consequences and correctly advocate against these ultimately wrong actions.
The penultimate collapse is perhaps the least well supported by arguments. In a scant page, Smart addresses the differences between total and average happiness in a most unsatisfactory fashion. He asks which of two universes you might prefer: one with one million happy, healthy people, or one with twice as many people, equally happy and healthy. Both Smart and I feel drawn to the larger universe, but he has no arguments for people who prefer the smaller. Smart skips over the difficulties here with an airy statement of “often the best way to increase the average happiness is to increase the total happiness and vice versa”.
I’m not entirely sure this statement is true. How would one go about proving it?
Certainly, average happiness seems to miss out on the (to me) obvious good that you’d get if you could have twice as many happy people (which is clearly one case where they give different answers), but like Smart, I have trouble coming up with a persuasive argument why that is obviously good.
I do have one important thing of my own to say about the difference between average and total happiness. When I imagine a world with more people who are on average less happy than the people that currently exist (but collectively experience a greater total happiness) I feel an internal flinch.
Unfortunately for my moral intuitions, I feel the exact same flinch when I imagine a world with many fewer people, who are on average transcendentally happy. We can fiddle with the math to make this scenario come out to have greater average and total happiness than the current world. Doesn’t matter. Exact same flinch.
This leads me to believe that my moral intuitions have a strong status quo bias. The presence of a status quo bias in itself isn’t an argument for either total or average utilitarianism, but it is a reminder to be intensely skeptical of our response to thought experiments that involve changing the status quo and even to be wary of the order that options are presented in.
The final collapse Smart introduces is that between regular utilitarians and negative utilitarians. Negative utilitarians believe that only suffering is morally relevant and that the most important moral actions are those that have the consequence of reducing suffering. Smart points out that you can raise both the total and average happiness of a population by reducing suffering and furthermore that there is widespread agreement on what reduces suffering. So Smart expects utilitarians of all kinds (including negative) to primarily focus on reducing suffering anyway. Basically, despite the profound philosophical differences between regular and negative utilitarians, we should expect them to behave equivalently. Which, by the very first collapse (if it walks like a duck…), shows that we can treat them as philosophical equivalents, at least in the present world.
In my experience, this is more or less true. Many of the negative utilitarians I am aware of mainly exercise their ethics by donating 10% of their income to GiveWell’s most effective charities. The regular utilitarians… do the exact same. Quack.
As far as I can tell, Smart goes to all this work to show how many forms of utilitarianism collapse together so that he can present a system that isn’t at war with itself. Being able to portray utilitarianism as a simple, unified system (despite the many ways of doing it) heads off many simple criticisms.
While I doubt many people avoided utilitarianism because there are lingering questions about total versus average happiness, per se, these little things add up. Saying “yes, there are a bunch of little implementation details that aren’t agreed upon” is a bad start to an ethical system, unless you can immediately follow it up with “but here’s fifty pages of why that doesn’t matter and you can just do what comes naturally to you (under the aegis of utilitarianism)”.
Let’s talk a bit about what comes naturally to people outside the context of different forms of utilitarianism. No one, not even Smart, sits down and does utilitarian calculus before making every little decision. For most tasks, we can ignore the ethical considerations (e.g. there is broad, although probably not universal agreement that there aren’t hidden moral dimensions to opening a door). For some others, our instincts are good enough. Should you thank the woman at the grocery store checkout? You probably will automatically, without pausing to consider if it will increase the total (or average) happiness of the world.
Like in the case of thanking random service industry workers, there are a variety of cases where we actually have pretty good rules of thumb. These rules of thumb serve two purposes. First, they allow us to avoid spending all of our time contemplating if our actions are right or wrong, freeing us to actually act. Second, they protect us from doing bad things out of pettiness or venality. If you have a strong rule of thumb that violence is an inappropriate response to speech you disagree with, you’re less likely to talk yourself into punching an odious speaker in the face when confronted with them.
It’s obviously important to pick the right heuristics. You want to pick the ones that most often lead towards the right outcomes.
I say “heuristics” and “rules of thumb” because the thing about utilitarians and rules is that they always have to be prepared to break them. Rules exist for the common cases. Utilitarians have to be on guard for the uncommon cases, the ones where breaking a rule leads to greater good overall. Having a “don’t cause people to die” rule is all well and good. But you need to be prepared to break it if you can only stop mass death from a runaway trolley by pushing an appropriately sized person in front of it.
Smart seems to think that utilitarianism only comes up for deliberative actions, where you take the time to think about them, and that it shouldn’t necessarily cover your habits. This seems like an abdication to me. Shouldn’t a clever utilitarian, realizing that she only uses utilitarianism for big decisions, spend some time training her reflexes to more often give the correct utilitarian solution, while also training herself to be more careful of her rules of thumb and think ethically more often? Smart gave no indication that he thinks this is the case.
The discussion of rules gives Smart the opportunity to introduce a utilitarian vocabulary. An action is right if it is the one that maximizes expected happiness (crucially, this is a summation across many probabilities and isn’t necessarily the action that will maximize the chance of the happiest outcome) and wrong otherwise. An action is rational if a logical being in possession of all the information you possess would think you to be right if you did it. All other actions are irrational. A rule of thumb, disposition, or action is good if it tends to lead to the right outcomes and bad if it tends to lead to the wrong ones.
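Smart’s definition of “right” can be made concrete with a toy calculation (all numbers here are invented for illustration, not drawn from Smart): the right action is the one that maximizes *expected* happiness, summed across probabilities, which is not necessarily the action with the best chance of producing the single happiest outcome.

```python
# Toy illustration of Smart's vocabulary: "right" = maximizes expected
# happiness, not the chance of the happiest outcome. Numbers are invented.

def expected_happiness(outcomes):
    """outcomes: list of (probability, happiness) pairs for one action."""
    return sum(p * h for p, h in outcomes)

# A gamble: 50% chance of the happiest possible outcome, else nothing.
gamble = [(0.5, 100), (0.5, 0)]       # expected happiness: 50
# A sure thing: guaranteed moderate happiness.
sure_thing = [(1.0, 60)]              # expected happiness: 60

# The gamble has the better shot at the single happiest outcome (100),
# but the sure thing is the "right" action on expectation.
best = max([gamble, sure_thing], key=expected_happiness)
assert best is sure_thing
```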
This vocabulary becomes important when Smart talks about praise, which he believes is an important utilitarian concern in its own right. Praise increases people’s propensity towards certain actions or dispositions, so Smart believes a utilitarian ought to consider if the world would be better served by more of the same before she praises anything. This leads to Smart suggesting that utilitarians should praise actions that are good or rational even if they aren’t right.
It also implies that utilitarians doing the right thing must be open to criticism if it requires bad actions. One example Smart gives is a utilitarian Frenchman cheating on wartime rationing in 1940s England. The Frenchman knows that the Brits are too patriotic to cheat, so his action (and the actions of the few others that cheat) will probably fall below the threshold for causing any real harm, while making him (and the other cheaters) happier. The calculus comes out positive and the Frenchman believes it to be the right action. Smart acknowledges that this logic is correct, but he points out that by similar logic, the Frenchman should agree that he must be severely punished if caught, so as to discourage others from doing the same thing.
This actually reminds me of something Hannah Arendt brushed up against in Eichmann in Jerusalem while talking about how the moral constraints on people are different than the ones on states. She gives the example of Soghomon Tehlirian, the Armenian exile who assassinated one of the triumvirate of Turkish generals responsible for the Armenian genocide. Arendt believes that it would have been wrong for the Armenian government to assassinate the general (had one even existed at the time), but that it was right for a private citizen to do the deed, especially given that Tehlirian did not seek to hide his crimes or resist arrest.
From a utilitarian point of view, the argument would go something like this: political assassinations are bad, in that they tend to cause upheaval, chaos, and ultimately suffering. On the other hand, there are some leaders who the world would clearly be better off without, if not to stop their ill deeds in their tracks, then to strike fear and moderation into the hearts of similar leaders.
Were the government of any country to carry out these assassinations, it would undermine the government’s ability to police murder. But when a private individual does the deed and then immediately gives herself up into the waiting arms of justice, the utility of the world is increased. If she has erred in picking her target and no one finds the assassination justified, then she will be promptly punished, disincentivizing copy-cats. If instead, like Tehlirian, she is found not guilty, it will only be because the crimes committed by the leader she assassinated were so brutal and clear that no reasonable person could countenance them. This too sends a signal.
That said, I think Smart takes his distinctions between right and good a bit too far. He cautions against trying to change the non-utilitarian morality of anyone who already tends towards good actions, because this might fail half-way, weakening their morality without instilling a new one. Likewise, he is skeptical of any attempt to change the traditions of a society.
This feels too much like trying to have your cake and eat it too. Utilitarianism can be criticized because it is an evangelical ethical system that gives results far from moral intuitions in some cases. From a utilitarian point of view, it is fairly clearly good to have more utilitarians willing to hoover up these counter-intuitive sources of utility. If all you care about are the ends, you want more people to care about the best ends!
If the best way to achieve utilitarian ends wasn’t through utilitarianism, then we’re left with a self-defeating moral system. In trying to defend utilitarianism from the weak critique that it is pushy and evangelical, both in ways that are repugnant to all who engage in cultural or individual ethical relativism and in ways that are repugnant to some moral intuitions, Smart opens it up to the much stronger critique that it is incoherent!
Smart by turns seems to seek to rescue some commonly held moral truths when they conflict with utilitarianism while rejecting others that seem no less contradictory. I can hardly say that he seems keen to show utilitarianism is in fact in harmony with how people normally act – he clearly isn’t. But he also doesn’t always go all (or even part of) the way in choosing utilitarianism over moral intuitions.
Near the end of the book, when talking about a thought experiment introduced by one McCloskey, Smart admits that the only utilitarian action is to frame and execute an innocent man, thereby preventing a riot. McCloskey anticipated him, saying: “But as far as I know, only J.J.C. Smart among the contemporary utilitarians is happy to adopt this ‘solution’”.
Here I must lodge a mild protest. McCloskey’s use of the word ‘happy’ surely makes me look a most reprehensible person. Even in my most utilitarian moods, I am not happy about this consequence of utilitarianism… since any injustice causes misery and so can be justified only as the lesser of two evils, the fewer the situations in which the utilitarian is forced to choose the lesser of two evils, the better he will be pleased.
This is also the man who said (much as I have) that “admittedly utilitarianism does have consequences which are incompatible with the common moral consciousness, but I tended to take the view ‘so much the worse for the common moral consciousness’.”
All this leaves me baffled. Why the strange mixture? Sometimes Smart goes far further than it seems any of his contemporaries would have. Other times, he stops short of what seems to me the truly utilitarian solution.
On the criticism that utilitarianism compels us always in moral action, leaving us no time to relax, he offers two responses. The first is that perhaps people are too unwilling to act and would be better served by being more spurred on. The second is that it may be that relaxing today allows us to do ten times the good tomorrow.
But put this and his support for rules of thumb on one side, and his support for executing the innocent man or his long spiel on how a bunch of people wireheading wouldn’t be that bad (a spiel that convinced me, I might add) on the other, and I’m left with an unclear overall picture. As an all-is-fine defence of utilitarianism, it doesn’t go far enough. As a bracing lecture about our degenerate non-utilitarian ways, it also doesn’t go far enough.
Leaving, I suppose, the sincere views of a man who pondered utilitarianism for much longer than I have. Chance is the only explanation that makes sense. This would imply that sometimes Smart gives a nod to traditional morality because he’s decided it aligns with his utilitarian ethics. Other times, he disagrees. At length. Maybe Smart is a man seeking to rescue what precious moral truths he can from the house fire that is utilitarianism.
Perhaps some of my confusion comes from another confusion, one that seems to have subtly infected many utilitarians. Smart is careful to point out that the atomic belief underlying utilitarianism is general benevolence. Benevolence, note, is not altruism. The individual utilitarian matters just as much – or as little – as everyone else. Utilitarians in Smart’s framework have no obligation to run themselves ragged for another. Trading your happiness for another’s will only ever be an ethically neutral act to the utilitarian.
Or, I suspect, the wrong one. You are best placed to know yourself and best placed to create happiness for yourself. It makes sense to include some sort of bias towards your own happiness to take this into account. Or, if this feels icky to you, you could handle it at the level of probabilities. You are more likely to make yourself happy than someone else (assuming you’ve put some effort towards understanding what makes you happy). If you are 80% likely to make yourself happy for an evening and 60% likely to make someone else happy, your clear utilitarian duty is to yourself.
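Taking the made-up numbers from the paragraph above, the expected-utility arithmetic is trivial (this is only a sketch of my own reasoning, not anything from Smart; the payoff value is an arbitrary placeholder):

```python
# The evening example from the text: same happiness payoff, different
# probabilities of success. Payoff is an arbitrary unit of happiness.
payoff = 1.0                 # one evening's worth of happiness
ev_self = 0.80 * payoff      # 80% chance you make yourself happy
ev_other = 0.60 * payoff     # 60% chance you make someone else happy
assert ev_self > ev_other    # on expectation, the duty is to yourself
```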
This is not a suggestion to go become a hermit. Social interactions are very rarely as zero sum as all that. It might be that the best way to make yourself happy is to go help a friend. Or to go to a party with several people you know. But I have seen people risk burnout (and have risked it myself) by assuming it is wrong to take any time for themselves when they have friends in need.
This is all my own thoughts, not Smart’s. For all of his talk of utilitarianism, he offers little advice on how to make it a practically useful system. All too often, Smart retreats to the idea of measuring the total utility of a society or world. This presents a host of problems and raises two important questions.
First, can utility be accurately quantified? Smart tries to show that different ways of measuring utility should be roughly equivalent in qualitative terms, but it is unclear if this follows at a quantitative level. Stability analysis (where you see how sensitive your result is to different starting assumptions) is an important tool for checking the veracity of conclusions in engineering projects. I have a hunch that quantitatively, utilitarian results to many problems will be highly unstable when a variety of forms of utilitarianism are tried.
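The kind of instability I have in mind can be sketched with entirely invented numbers: rank two hypothetical worlds under two different utilitarian measures and check whether the ranking survives the change of measure. When it doesn’t, the conclusion was an artifact of the measure, not the facts.

```python
# Toy stability check: does the "better world" depend on which
# utilitarian measure we pick? All happiness values are invented.

worlds = {
    "small_happy": [9, 9, 9],                # few people, very happy
    "large_okay": [4, 4, 4, 4, 4, 4, 4],     # many people, moderately happy
}

measures = {
    "total": sum,
    "average": lambda hs: sum(hs) / len(hs),
}

# For each measure, which world comes out on top?
rankings = {
    name: max(worlds, key=lambda w: measure(worlds[w]))
    for name, measure in measures.items()
}
# Total happiness prefers the large world (28 > 27); average happiness
# prefers the small one (9 > 4). The ranking flips: an unstable result.
assert rankings == {"total": "large_okay", "average": "small_happy"}
```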
Second, how should we deal with utility in the future? Smart claims that beyond a certain point we can ignore side effects (as unintended good side effects should cancel out unintended ill side effects; this is especially important when it comes to things like saving lives) but that doesn’t give us any advice on how we can estimate effects.
We are perhaps saved here by the same collapse that aligned normal utilitarians with negative utilitarians. If we cannot quantify joy, we can surely quantify misery. Doctors can tell you just how much quality of life a disease can sap (there are tables for this), not to mention the chances that a disease might end a life outright. We know the rates of absolute poverty, maternal deaths, and malaria prevalence. There is more than enough misery in the world to go around and certainly utilitarians who focus on ending misery do not seem to be at risk of running out of ethical work any time in the near future.
(If ending misery is important to you, might I suggest donating a fraction of your monthly income to one of GiveWell’s top recommended charities? These are the charities that most effectively use money to reduce suffering. If you care about maximizing your impact, GiveWell is a good way to do it.)
Although speaking of the future, I find it striking how little utilitarianism has changed in the fifty-six years since Smart first wrote his essay. He pauses to comment on the risk of a recursively self-improving AI and to talk about potential future moral battles over factory farming. I’m part of a utilitarian meme group and these are the same topics people joke about every day. It is unclear if these are topics that utilitarianism predisposes people to care about, or if there was some indirect cultural transmission of these concerns over the intervening years.
There are many more gems – and frustrations – in Smart’s essay. I can’t cover them all without writing a pale imitation of his words, so I shan’t try any more. As an introduction to the different types of utilitarianism, this essay was better than any other introduction I’ve read, especially because it shows all of the ways that various utilitarian systems fit together.
As a defense of utilitarianism, it is comprehensive and pragmatic. It doesn’t seek to please everyone and doesn’t seek to prove utilitarianism. It lays out the advantages of utilitarianism clearly, in plain language, and shows how the disadvantages are not as great as might be imagined. I can see it being persuasive to anyone considering utilitarianism, although in this it is hampered by its position as the first essay in the collection. Anyone convinced by it must then read through another seventy pages of arguments against utilitarianism, which will perhaps leave them rather less convinced.
As a work of academic philosophy, it’s interesting. There’s almost no meta-ethics or meta-physics here. This is a defense written entirely on its own, without recourse to underlying frameworks that might be separately undermined. Smart’s insistence on laying out his arguments plainly leaves him little room to retreat (except around average vs. total happiness). I’ve always found this a useful type of writing; even when I don’t agree, the ways that I disagree with clearly articulated theses can be illuminating.
It’s a pleasant read. I’ve had mostly good luck reading academic philosophy. This book wasn’t a struggle to wade through and it contained the occasional amusing turn of phrase. Smart is neither dry lecturer nor frothing polemicizer. One is put almost in mind of a kindly uncle, patiently explaining his way through a complex, but not needlessly complicated subject. I highly recommend reading it and its companion.
I identify so strongly as a person who writes daily that I sometimes find myself bowled over by the fact that I haven’t always done it.
Since my first attempt to write a novel (at age 13), I’ve known that I really enjoy writing. The problem was that I could never really get myself to write. I managed the occasional short story for a contest and I pulled off NaNoWriMo when I was 20, but even after that, writing remained something that happened almost at random. Even when I had something I really wanted to write it was a toss-up as to whether I would be able to sit down and get it on a page.
This continued for a while. Up until January 1st, 2015, I had written maybe 100,000 words. Since then, I’ve written something like 650,000. If your first million words suck – as is commonly claimed – then I’m ¾ of the way to writing non-sucking words.
What changed in 2015? I made a New Year’s Resolution to write more. And then, when that began to fall apart a few months later (as almost all New Year’s Resolutions do), I sought out better commitment devices.
Did you read my first paragraph and feel like it describes you? Do you want to stop trying to write and start actually writing? If your brain works like mine, you can use what I’ve learned to skip over (some of) the failing part and go right to the writing every single day part.
Step 1: Cultivate Love
I like having completed writing projects to show off as much as the next person, but I also enjoy the act of writing. If you don’t actually enjoy writing, you may have a problem. My techniques are designed to help people (like me) who genuinely enjoy writing once they get going but have trouble forcing themselves to even start.
If you find writing to be a grim chore, but want to enjoy writing so that you can have the social or financial benefits (heh) of writing, then it will be much harder for you to write regularly. If you aren’t sure if this describes you or not, pause and ask yourself: would writing every day still be worth it if no one ever read what I wrote and I never made a single cent off of it? There’s nothing wrong with preferring that people read what you write and preferring to make money off of writing if possible, but it is very helpful if you’re willing to write even without external validation.
Writing (at least partially) for the sake of writing means that you won’t become discouraged if your writing never “takes off”. Almost no one sees success (measured in book deals, blog traffic, or Amazon downloads) right away. So being able to keep going in the face of the world’s utter indifference is a key determinant of how robust your writing habit will be.
If you don’t like writing for its own sake, don’t despair completely. It’s possible you might come to love it if you spend more time on it. As you start to write regularly, try out lots of things and figure out what you like and dislike. It can be hard to tell the difference between not liking writing and not liking the types of writing you’ve done.
For example, I’m a really exploratory writer. I’ve found that I don’t enjoy writing if there’s a strict outline I’m trying to follow or if I’m constrained by something someone else has written. Fanfiction is one of the common ways that new writers develop their skills, but I really dislike writing fanfiction. Realizing this has allowed me to avoid a bunch of writing that I’d find tedious. Tedious writing is a big risk to your ability to write daily, so if you can reasonably avoid it, you should.
Step 2: Start Small
When learning a new skill or acquiring a new habit, it’s really tempting to try and dive right in and do everything at once. I’d like to strongly discourage this sort of thing. If you get overwhelmed right at the start you’re unlikely to keep with it. Sometimes jumping right into the deep end teaches you to swim, sure. But sometimes you drown. Or develop a fear of water.
It isn’t enough to set things up so that you’ll be fine if everything goes as planned. A good starting level is something that won’t be hard even if life gets in the way. Is your starting goal achievable even if you had to work overtime for the next two weeks? If not, consider toning it down a bit.
You should set a measurable, achievable, and atomic goal. In practice, measurable means numeric, so I’d recommend committing to a specific number of words each day or a specific amount of daily time writing. Here Beeminder will be your best friend.
Beeminder is a service that helps you bind your future self to your current goals. You set up a goal (like writing 100,000 words) and a desired daily progress (say, 200 words each day) towards that goal. Each day, Beeminder will make sure you’ve made enough progress towards your desired end-state. If you haven’t, Beeminder charges your credit card (you can choose to pay anywhere from $5 to $2430). Fail again and it charges you more (up to a point; you can set your own maximum). In this way, Beeminder can “sting” you into completing your goals.
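The escalation is a simple ladder. As a sketch (the $5 floor and $2430 cap come from the paragraph above; the intermediate amounts follow Beeminder’s roughly-tripling pledge schedule as I understand it, and may differ from your own settings):

```python
# Sketch of Beeminder's escalating pledge ladder. The dollar amounts
# are hard-coded illustrations of the published schedule, not fetched
# from any Beeminder API.
PLEDGE_LADDER = [5, 10, 30, 90, 270, 810, 2430]

def next_pledge(current):
    """Amount at risk after one more derailment, capped at the ladder top."""
    for amount in PLEDGE_LADDER:
        if amount > current:
            return amount
    return PLEDGE_LADDER[-1]

assert next_pledge(5) == 10      # first derailment at $5 bumps you to $10
assert next_pledge(90) == 270    # each later step roughly triples
assert next_pledge(2430) == 2430 # capped; you can also set a lower max
```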
For the first few months of my writing habit, I tracked my daily words in a notebook. This fell apart during my final exams. I brought in Beeminder at the start of the next month to salvage the habit and it worked like a charm. Beeminder provided me a daily kick in the pants to get writing; it made me unlikely to skip writing out of laziness, tiredness, or lack of a good idea.
Beeminder only works for numeric goals, so there’s the first of the triad I mentioned covered.
Next, your goal should be achievable; something you have no doubt you can do. Not something some idealized, better, or perfect version of you could do. Something you, with all your constraints and flaws are sure you can achieve. Don’t worry about making this too small. Fifty or one hundred words per day is a perfectly adequate starter goal.
Lastly, atomic. Atomic goals can’t be broken down any further. Don’t start by Beeminding blog posts or gods forfend, novels! Pick the smallest unit of writing you can, probably either time or word count, and make your goal about this. When you’re Beeminding words or time, you can’t fail and get discouraged for lack of ideas or “writer’s block”. It’s much better to spend a week writing detailed journals of every day (or even a detailed description of your bedroom) than it is to spend a week not writing because you can’t think of what to write.
My recommended starter goals are either: write 150 words each day or write 15 minutes each day. Both of these are easy to Beemind and should be easy for most people to achieve.
Step 3: Acquire Confidence
Even with goals that easy, your first few days or weeks may very well be spent just barely meeting them, perhaps as Beeminder breathes down your neck. Writing is like exercise. It’s surprising how hard it can be to do it every day if you’re starting from nothing.
Here’s the start of my very first Beeminder writing goal. You’ll notice that I started slowly, panicked and wrote a lot, then ran into trouble and realized that I needed to tone things down a bit. It wasn’t until almost four months in that I finally hit my stride and started to regularly exceed my goal.
You can see a similar pattern when I started Beeminding fiction:
And when I started Beeminding time spent writing:
Those little spurs three data points into the time graph and seven into the fiction one? That’s where I failed to keep up and ended up giving Beeminder money. They call this “derailing”.
It may take a few derailments, but you should eventually find yourself routinely exceeding your starting goal (if you don’t, either this advice doesn’t work well for you, or you set your original goal too high). Be careful of allowing success to ruin your habit; try and write at least X words each day, not X words each day on average over the course of a week.
The number of days before you derail on a goal in Beeminder is called “safety buffer”. For outputs you intend to Beemind daily, I recommend setting yourself up so that you can have no more than two days of safety buffer. This lets you save up some writing for a busy day or two, but doesn’t let you skip a whole week. If you have a premium plan, Beeminder allows you to automatically cap your safety buffer, but you can also do it manually if you’re disciplined (I did this for many months until I could afford a premium plan).
When you get to the point of regularly trimming your safety buffer, you’re almost ready to move on up. Once you’re really, really sure you can handle more (i.e. you’ve exceeded your minimum every day for two weeks), slowly increase your commitment. You don’t want to get too cocky here. If you’re currently aiming for 150 words/day and 9 days out of 10 you write 250, set your new goal to 200, not 250. You want to feel like you’re successfully and competently meeting your goal, not like you’re scraping by by the skin of your teeth.
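That ratcheting rule can be written down as a tiny sketch. The halfway split is my reading of the 150 → 200 example above; it’s an assumption about how to stay conservative, not a formula Beeminder prescribes:

```python
def next_goal(current_goal: int, typical_output: int) -> int:
    """Raise a daily goal only partway toward your typical output,
    keeping a comfortable margin so you meet the goal competently
    rather than scraping by."""
    return current_goal + (typical_output - current_goal) // 2

# Aiming for 150 words/day but usually writing 250?
# Move to 200, not 250.
print(next_goal(150, 250))  # -> 200
```

The point of the halfway step is that you can always ratchet up again in a few weeks; recovering from an overambitious goal is much harder.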
Step 4: Make Molecules
Once you become comfortable with your atomic goals and find stable long term resting spots for them, you can start to Beemind more complex outputs. This is using Beeminder to directly push you towards your goals. Want to work on your blog? Beemind blog posts. Want to work on a book? Beemind pages or chapters or scenes. Want to keep a record of your life? Beemind weekly journals.
These are all complicated outputs made up of many words or minutes of writing. You won’t finish them as regularly. It’s easy to sit down and crank out enough words in an hour to hit most word count goals. But these larger outputs might not be achievable in a single day, especially if you have work or family commitments. That’s why you want your writing habit well established and predictable by the time you take them on.
Remember, you don’t want to set yourself up for failure if it’s at all avoidable. Don’t take on a more complex output as a Beeminder goal until you have a sense of how long it will take you to produce each unit of it and always start at a rate where you’re sure you can deliver. Had a few weeks of finishing one chapter a week? Start your Beeminder goal at one chapter every ten days.
It’s easy to up your Beeminder goal when you find it’s too lenient. It’s really hard to get back into writing after a string of discouragements caused by setting your goals too aggressively.
Even when you manage to meet overambitious goals, you might suffer for it in other ways. I’m not even talking about your social life or general happiness taking a hit (even though those are both very possible). Stretching yourself too thin can make your writing worse!
I had a period where I was Beeminding published writing at a rate faster than I was really capable of sustaining. I managed to make my goal anyway, but I did it by writing simple, low-risk posts. I shoved aside some of the more complex and rewarding things I was looking forward to writing because I was too stubborn to ease back on my goal. It took me months to realize that I’d messed up and get rid of the over-ambitious goal.
It was only after I dialed everything back and gave myself more space to work that I started producing things I was really proud of again. That period with the overambitious goal stands out as one of the few times since I started writing again where I produced nothing I’m particularly proud of.
Tuning down the publishing goal didn’t even cause me to write less. I didn’t dial back my atomic goals, just my more complicated one, so I was still writing the same amount. When I was ready to begin publishing things I’d written again, I started the goal at a much lower rate. After a few months of consistently exceeding it, I raised the rate.
Here’s what my original goal looked like:
Here’s my new blogging goal:
As you can see, I learned my lesson about over-ambition.
Step 5: Vanquish Guilt
At the same time as you work on Beeminding more complex outputs, you’ll want to examine and replace the guilt-based motivation structure you may have built to get there.
Guilt can be a useful motivator to do the bare minimum on a project; guilt (and terror) is largely what got me through university. But guilt is a terrible way to build a long-term habit. If writing is something you do to avoid a creeping guilt, you may start to associate negative feelings with writing; if you started a writing habit because you love writing, then you’re risking that very love if you motivate yourself solely with guilt.
I recommend looking at Beeminder not as a tool to effectively guilt yourself into writing, but as a reminder of what writing is worth to you. You value consistently writing at $X. You know that every time you skip writing for a day or a week, there is a Y% chance that you might lose the habit. Multiply those two together and you get your ideal maximum Beeminder pledge.
It’s entirely rational to choose to derail on Beeminder if you value something else more than you value writing just then. Here, Beeminder is helping you make the trade-off explicit. You may know that not writing tonight costs you $Z of estimated future utility (this doesn’t necessarily mean future earnings; it could also represent how much writing is worth to you as entertainment), but without Beeminder you wouldn’t be facing it head on. When you can directly compare the utility of two ways to spend your time, you can make better decisions and trade-offs.
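The pledge arithmetic above fits in a few lines. The dollar figures here are made-up placeholders for X and Y, not recommendations; only you can estimate what the habit is worth to you and how fragile it is:

```python
# Hypothetical estimates -- substitute your own.
value_of_habit = 1000.0  # X: what a durable writing habit is worth to you, in dollars
risk_per_skip = 0.02     # Y: chance that skipping a day costs you the habit entirely

# Expected cost of skipping one day, and hence a sensible ceiling
# for your Beeminder pledge: paying more than this to avoid a
# single skip would be irrational.
max_pledge = value_of_habit * risk_per_skip
print(f"Ideal maximum pledge: ${max_pledge:.2f}")  # -> Ideal maximum pledge: $20.00
```

With these toy numbers, any pledge at or below $20 is cheap insurance for the habit; above that, derailing can be the rational choice.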
That said, it rarely comes to mutual exclusion. Often Beeminder prompts me to find a way to write, even if there’s something else I really want to do that partially conflicts. Things that I might lazily view as mutually exclusive often turn out not to be, once there’s money on the line.
It may seem hard to make this leap, especially when you start out with Beeminder. But after two years of regular Beeminder use, I can honestly say that it doesn’t guilt me into anything. Even when it forces me to write, the emotional tone isn’t quite guilt. Beeminder is an effective goad because it helps me see the causal chain between writing tonight and having a robust writing habit. I write because I’m proud of the amount I write and I want to keep being proud of it. I’m not spurring myself with guilt and using that negativity to move forward. I’m latching onto the pride I want to be able to feel and navigating towards that.
Mere reminders to write are the least of what I get out of Beeminder, though. Beeminder became so much more effective for me once I started to regularly surpass my goals. Slowly, I began to be motivated mostly by exceeding them, and that motivation led me to exceed them by ever greater margins and enjoy every minute of it.
This is the part where everything starts to come together. When you get here, guilt-based motivation is but a dim memory. You write because you want to. Beeminder helps keep you on track, but you’re more likely to spend a bit of extra time writing to see the spike in your graphs than you are because you’ll derail otherwise.
When you get to this point (or earlier, depending on how you like to work), something like Complice can really help you make the most of all your motivation. Complice helps you tie your daily actions to the long- and medium-term goals you’ve set. It has a kickass Beeminder integration that makes Beeminding as easy as checking off a box. It has integrated Pomodoro timers for tracking how much time you work (and can send the results to Beeminder). It allows you and a friend to sign up as accountability buddies and see what each other get done. And it shows you how much work you’ve done in the past, allowing you to use the “don’t break the chain” productivity hack if it works for you (it works for me).
As I finish off this piece, I find myself tired and lethargic. It’s not that I particularly want to be writing (although some of the tiredness fell away as soon as I started to type). It’s that writing every night feels like the default option of my life. As weird as it sounds, it feels like it would take me more effort to skip writing than to do it.
This is really good, because any grumpiness about writing I might start with is often gone in under five minutes. The end result of me writing – even on a day when starting was hard – is improved mood for the whole day. I love the sense of accomplishment that creating something brings.
The road here wasn’t exactly easy. It’s taken more than two and a half years, hundreds of thousands of words, incipient carpal tunnel, and many false starts. It’s the false starts that inspired me to write this. I doubt, dear reader, that you are exactly like me. Likely some of this advice won’t work for you. It is, however, my hope that it can point you in the right direction. Perhaps my false starts can save you some of your own.
I would feel deeply uncomfortable giving anyone advice on how to be a better writer; I don’t feel confident enough in my craft for that. But I do feel like I know how to develop a kickass writing habit, the sort of habit that gives you the practice you need to get better. If you too want to write regularly, how about you give this a try?
I think the steps outlined here could be used to help build a variety of durable habits across disciplines. Want to program, cook, draw, or learn a new language? Think that in any of those cases a daily habit would be helpful? This advice is probably transferable to some degree. That said, I haven’t tried to repeat this process for any of those things, so I don’t know what the caveats are or where it will break down. If you adapt this post for anything else, let me know and I’ll link to it here.
Thanks to the kind folks at Beeminder for helping me create some of the graphs used in this post. In addition, thanks are due for fielding my semi-panicked support requests when the graph generation caused some problems with my account.
Thanks to Malcolm Ocean of Complice for pointing me towards Beeminder in the first place and for the year in review post that spurred me to make writing my New Year’s Resolution in 2015.
I genuinely like the people whose products I recommend in this blog post. I genuinely like their creations. They aren’t giving me anything to recommend their stuff.
True story: Beeminder sent out a survey about referral links and I told them they could set up a referral system, but I’d never use it. I think Beeminder and Complice are incredibly valuable tools that are tragically under-used and I don’t want to risk even the appearance of a conflict of interest that might make people less likely to follow my recommendations to use them. For me, they’ve been literally life-changing.
I’ve linked to my specific Beeminder writing goals (there are four of them) at various points throughout this post, but if you want the proof that I’m not talking out of my ass all nicely collected in one place, you can check out my progress towards all of my Beeminder goals at: https://www.beeminder.com/zacharyjacobi.
 If this advice doesn’t work for you, don’t sweat it. I’m just a dude on the internet. This isn’t the bible. What works for me may not work for you and there’s nothing wrong with you if it doesn’t. You’ll just have to find your own way, is all. ^
 If Beeminder doesn’t work for you, I recommend a human accountability buddy (who will check up on your writing progress each day and maybe take your money if you aren’t hitting your goals). ^
 The best advice about writer’s block I’ve ever seen came from Cory Doctorow. He said that some days he feels like he’s inspired and a heavenly chorus is writing for him, and other days he feels like he can’t write worth shit and has no clue what he’s supposed to be doing. He goes on to say that no matter how strong these feelings are, a month later he can’t tell which words were written in which state. ^
 I cannot recommend this feature highly enough for people in long-distance relationships. ^