Politics

Westminster is bestminster

[6-minute read]

I’ve been ranting to random people all week about how much I love the Westminster System of parliamentary government (most notably used in Canada, Australia, New Zealand, and the UK) and figured it was about time to write my rant down for broader consumption.

Here are three reasons why the Westminster System is so much better than the abominable hodgepodge Americans call a government and all the other dysfunctional presidential republics the world over.

1. The head of state and head of government are separate

And more importantly, the head of state is a figurehead.

The American president holds an odd dual role: both head of government (and therefore responsible for running the executive branch and implementing the policies of the government) and head of state (the face of the nation at home and abroad; the person who is supposed to serve as a symbol of national unity and moral authority). In Westminster democracies, these roles are split up. The Prime Minister serves as head of government and directs the executive branch, while the Queen (or her representative) serves as head of state [1]. Insofar as the government is personified in anyone, it is personified in a non-partisan person with a circumscribed role.

This is an excellent protection against populism. There is no one person who can gather the mob to them and offer the solutions to all problems, because the office of the head of state is explicitly anti-populist [2]. In Westminster governments, any attempt at crude populism on the part of the prime minister can be countered by messages of national unity from the head of state [3].

It’s also much easier to remove the head of government in the Westminster system. Unlike the president, the prime minister serves only while they have the confidence of parliament and their party. An unpopular prime minister can be easily replaced, as Australia seems happy to demonstrate over and over. A figure like Trump could not be prime minister if their parliamentarians did not like them.

This feature is at risk from open nominating contests and especially rules that don’t allow MPs to pick the interim leader during a leadership race. In this regard, Australia is doing a much better job at exemplifying the virtues of the Westminster system than Canada or the UK (where Corbyn’s vote share is all the more surprising for how much internal strife his election caused) [4].

2. Confidence

To the Commonwealth, one of the most confusing features of American democracy is its (semi-)regular government shutdowns, like the one Trump had planned for September [5]. On the other side, Americans are baffled by the seemingly random elections that Commonwealth countries have.

Her Majesty’s Prime Minister governs only so long as they have the confidence of the house. A government is only sworn in after they can prove they have confidence (via a vote of all newly elected and returning MPs). When no party has an absolute majority, things can get tense – or can go right back to the polls. We’ve observed two tense confidence votes this year, one in BC, the other in the UK.

In both these cases, no party had a clear majority of seats in the house (in Canada, we call this a minority government). In both BC and the UK, confidence was secured when a large party enlisted the help of a smaller party to provide “confidence and supply”. In this situation, the small party will vote with the government on budgets and other confidence motions, but is otherwise free to vote however they want.

The first vote of confidence isn’t the only one a government is likely to face. If the opposition thinks the government is doing a poor job, they can launch a vote of no confidence. If parliament passes the motion, parliament is dissolved for an election.

But many bills are actually confidence motions in disguise. Budgets are the “supply” side of “confidence and supply”. Losing a budget vote – sometimes archaically called “failing to secure supply” – results in parliament being dissolved for an election. This is how Ontario’s last election was called. The governing party put forward a budget they were prepared to campaign on and the opposition voted it down.

This feature prevents government shutdowns. If the government can’t agree on a budget, it has to go to the people. If time is of the essence, the Queen or her representative may ask the party that torpedoed the budget to pass a non-partisan continuing funding resolution, good until just after the election to ensure the government continues to function (as happened in Australia in 1975).

By convention, votes on major legislative promises are also motions of confidence. This helps ensure that the priorities laid out during an election campaign don’t get dropped. In a minority government situation, the opposition must decide whether it is worth another election before vetoing any of the government’s key legislative proposals. Because of this, Commonwealth governments can be surprisingly functional even without a legislative majority.

Add all of this together and you get very accountable parties. Try and enact unpopular legislation with anything less than a majority government and you’ll probably find yourself shortly facing voters. On the flip side, obstruct popular legislation and you’ll also find yourself facing voters. Imagine how the last bit of Obama’s term would have been different if the GOP had to fight an election because of the government shutdown.

3. The upper house is totally different

Many Westminster countries have bicameral legislatures, with two chambers making up parliament (New Zealand is the notable exception here). In most Westminster system countries with two chambers, the relationship between the houses is different than that in America.

The two American chambers are essentially co-equal (although the senate gets to approve treaties and budgets must originate in the house). This is not so in the Westminster system. While both chambers have nearly equal powers on paper (except that money bills must often originate in the lower chamber), in practice they are very different.

By convention (and occasionally legislation) the upper chamber has its power constrained. The actual restrictions vary from country to country, but in general they forbid rejecting bills for purely partisan reasons or they prevent the upper house from messing with the budget.

The goal of the upper house in the Westminster system is to take a longer view of legislation and protect the nation from short-sighted thinking. This role is more consultative than legislative; it’s not uncommon to see a bill vetoed once, then returned to the upper chamber and assented to (sometimes with token changes, sometimes even with no changes). The upper house isn’t there to ignore the will of the people (as embodied by the lower house), just to remind them to occasionally look longer term.

This sort of system helps prevent legislative gridlock. Since the upper house tends to serve longer terms (in Canada, for example, senators are appointed until age 75), there is often a different majority in the upper and lower chambers. If the upper chamber were free to veto anything it didn’t like (even if the reasons were purely partisan), then nothing would ever get done.

Taken together, these features of the Westminster system prevent legislative gridlock and produce legitimate outputs of the political process. This obviates the need for populist “I’ll fix everything myself” leaders like Trump, who seem to be an almost inevitable outcome of a perpetually gridlocked and unnavigable system (i.e. the American government).

Insofar as the Westminster system has problems, they are mostly problems of implementation, and several Westminster countries have demonstrated that fundamental reform of the system is possible within the system itself. New Zealand abolished the upper house of its parliament when it proved useless. Australia switched to an elected upper house and has come up with a set of constitutional rules that prevent this from causing gridlock (here I’m thinking of the double dissolution election and joint sitting permitted by Australian law in response to repeated legislative failures).

Among certain people in Canada, electoral and senate reform have become contentious topics. It’s my (unpopular in millennial circles) opinion that Canada has no need of electoral reform. Get a few beers in most proponents of electoral reform and you’ll quickly find that preventing all future Conservative majorities is a much more important goal for them than any abstract concept of “fairness”. I’m not of the opinion that we should change our electoral system just because a party we didn’t like won a majority government once in the last eight elections (or three times in the past ten and, for that matter, the past fifteen elections).

Senate reform may have already been accomplished, with Prime Minister Trudeau’s move to appoint only non-partisan senators and dissolve the Liberal caucus in the senate. Time will tell if this new system survives his tenure as prime minister.

In one of the articles I linked above, Prof. Joseph Heath compares the utter futility Americans feel about changing their electoral system with the indifference most Canadians feel about changing theirs. In Canada, many proponents of electoral reform specifically wanted to avoid a plebiscite, because they understand that there currently exists no legitimacy crisis sufficient to overcome the status quo bias most people feel. Reform in Canada is certainly possible, but first the system needs to be broken. Right now, the Westminster system is working admirably.

Footnotes

[1] Israel took many cues from Westminster governments. Its president is non-partisan and ceremonial. If Canada were ever forced to give up the monarchy, I’d find this sort of presidential system acceptable. ^

[2] It’s hard to tell which is less populist: the oldest representative of one of the few remaining aristocracies, or (as in Israel, or with the governors general of the former colonies) exceptional citizens chosen for their reliability and loyalty to the current political order. ^

[3] See Governor General David Johnston’s criticism of some of Stephen Harper’s campaign rhetoric. ^

[4] I’m of the opinion that Corbyn’s “popularity” is really indicative of PM Theresa May’s unpopularity, bolstered by his ability to barely surpass incredibly low expectations. ^

[5] Since rescheduled to December, in light of Hurricane Harvey. ^

Ethics, Literature, Philosophy

Book Review: Utilitarianism for and against (Part 1)

Utilitarianism for and against is an interesting little book. It consists of two back-to-back ~70 page essays, one in favour of utilitarianism and one opposed. As an overview, it’s hard to beat something like this. You don’t have to rely on one scholar to give you her (ostensibly fair and balanced) opinion; you get two articulate philosophers arguing their side as best they can. Fair and balanced is by necessity left as an exercise to the reader (honestly, it always is; here at least it’s explicit).

I’m going to cover the “for” side first. The “against” side will be in a later blog post. Both reviews are going to assume that you have some understanding of utilitarianism. If you don’t, go read my primer. Or be prepared to Google. I should also mention that I have no aspirations of being balanced myself. I’m a utilitarian; I had much more to disagree with on the “against” side than on the “for” side.

Professor J.J.C. Smart makes the arguments in favour of utilitarianism. According to his Wikipedia entry, he was known for “outsmarting” his opponents, that is to say, accepting the conclusions of their reductio ad absurdum arguments with nary a shrug. He was, I’ve gathered, not one for moral intuitions. His criticism of rule utilitarianism played a role in its decline and he was influential in raising the next crop of Australian utilitarians, among whom Peter Singer is counted. As near as I can tell, he was one of the more notable defenders of utilitarianism when this volume was published in 1971 (although much of his essay dates back a decade earlier).

Smart is emphatically not a rationalist (in the philosophical sense); he writes no “proof of utilitarianism” and denies that such a proof is even possible. Instead, Smart restricts himself to explaining how utilitarianism is an attractive ethical system for anyone possessed of general benevolence. Well, I say “anyone”; the authors of this volume seem to be labouring under the delusion that only men have ethical dilemmas or the need for ethical systems. Neither one of them manages the ethicist’s coup of realizing that women might be viewed as full people at the remove of half a century from their time of writing (such a coup would perhaps have been strong evidence of the superiority of one philosophy over another).

A lot of Smart’s essay consists of showing how various different types of utilitarianism are all the same under the hood. I’ve termed these “collapses”, although “isomorphisms” might be a better term. There are six collapses in all.

The very first collapse put me to mind of the famous adage about ducks. If it walks like a duck, swims like a duck, and quacks like a duck, it is a duck. By the same token, if someone acts exactly how a utilitarian in their position and with their information would act, then it doesn’t matter if they are a utilitarian or not. From the point of view of an ethical system that cares only about consequences they may as well be.

The next collapse deals with rule utilitarianism and may have a lot to do with its philosophical collapse. Smart points out that if you are avoiding “rule worship”, then you will face a quandary when you could break a rule in such a way as to gain more utility. Rule utilitarians sometimes claim that you just need rules with lots of exceptions and special cases. Smart points out that if you carry this through to its logical conclusion, you really are only left with one rule, the meta-rule of “maximize expected utility”. In this way, rule utilitarianism collapses into act utilitarianism.

Next into the compactor is the difference between ideal and hedonic utilitarians. Briefly, ideal utilitarians hold that some states of mind are inherently valuable (in a utilitarian sense), even if they aren’t particularly pleasant from the inside. “Better Socrates dissatisfied than a fool satisfied” is the rallying cry of ideal utilitarians. Hedonic utilitarians have no terminal values beyond happiness; they would gladly let almost the entirety of the human race wirehead.

Smart claims that while these differences are philosophically large, they are practically much less meaningful. Here Smart introduces the idea of the fecundity of a pleasure. A doctor taking joy (or grim satisfaction) in saving a life is a much more fecund pleasure than a gambler’s excitement at a good throw, because it brings about greater joy once you take into account everyone around the actor. Many of the other pleasures (like writing or other intellectual pursuits) that ideal utilitarians value are similarly fecund. They either lead to abatement of suffering (the intellectual pursuits of scientists) or to many people’s pleasure (the labour of the poet). Taking into account fecundity, it was better for Smart to write this essay than to wirehead himself, because many other people – like me – get to enjoy his writing and have fun thinking over the thorny issues he raises.

Smart could have stood to examine at greater length just why ideal utilitarians value the things they do. I think there’s a decent case to be made that societies figure out ways to value certain (likely fecund) pleasures all on their own, no philosophers required. It is not, I think, that ideal utilitarians have stumbled onto certain higher pleasures that they should coax their societies into valuing. Instead, their societies have inculcated them with a set of valued activities, which, due to cultural evolution, happen to line up well with fecund pleasures. This is why it feels difficult to argue with the list of pleasures ideal utilitarians proffer; it’s not that they’ve stumbled onto deep philosophical truths via reason alone, it’s that we have the same inculcations they do.

Beyond simple fecundity though, there is the fact that the choice between Socrates dissatisfied and a fool satisfied rarely comes up. Smart has a great line about this:

But even the most avid television addict probably enjoys solving practical problems connected with his car, his furniture, or his garden. However unintellectual he might be, he would certainly resist the suggestion that he should, if it were possible, change places with a contented sheep, or even a happy and lively dog.

This boils down to: ‘ideal utilitarians assume they’re a lot better than everyone else, what with their “philosophical pursuits”, but most people don’t want purely mindless pleasures’. Combined, these ideas of fecundity and hidden depths point to a vanishingly small gap between ideal and hedonistic utilitarians, especially compared to the gap between utilitarians and practitioners of other ethical systems.

After dealing with questions of how highly we should weigh some pleasures, Smart turns to address the idea of some pleasures not counting at all. Take, for example, the pleasure that a sadist takes in torturing a victim. Should we count this pleasure in our utilitarian moral calculus? Smart says yes, for reasons that again boil down to “certain pleasures being viewed as bad is an artifact of culture; no pleasure is intrinsically bad”.

(Note however that this isn’t the same thing as Smart condoning the torture. He would say that the torture is wrong because the pleasure the sadist gains from it cannot make up for the distress of the victim. Given that no one has ever found a real live utility monster, this seems a safe position to take.)

In service of this, Smart presents a thought experiment. Imagine a barren universe inhabited by a single sentient being. This sentient being wrongly believes that there are many other inhabitants of the universe being gruesomely tortured and takes great pleasure in this thought. Would the universe be better if the being didn’t derive pleasure from her misapprehension?

The answer here for both Smart and me is no (although I suspect many might disagree with us). Smart reasons (almost tautologically) that since there is no one for this being to hurt, her predilection for torture can’t hurt anyone. We are rightfully wary of people who unselfconsciously enjoy the thought of innocents being tortured because of what it says about what their hobbies might be. But if they cannot hurt anyone, their obsession is literally harmless. This bleak world would not be better served by its single sentient inhabitant quailing at the thought of the imaginary torture.

Of course, there’s a wide gap between the inhabitant curled up in a ball mourning the torture she wrongly believes to be ongoing and her simple ambivalence to it. It seems plausible that many people could consider her ambivalence preferable, even if they did not wish her to be sad. But imagine instead the difference between her being lonely and bored and her being satisfied and happy (leaving aside for a moment the torture). It is clear here which is the better universe. Given a way to move from the universe with a single bored being to the one with a single fulfilled being, shouldn’t we take it, given that the shift most literally harms no one?

This brings us to the distinction between intrinsically bad pleasures and extrinsically bad pleasures – the flip side of the intrinsically more valuable states of mind of the ideal utilitarian. Intrinsically bad pleasures are pleasures that for some rationalist or metaphysical reason are just wrong. Their rightness or wrongness must of course be vulnerable to attacks on the underlying logic or theology, but I can hardly embark on a survey of common objections to all the common underpinnings; I haven’t the time. But many people have undertaken those critiques and many will in the future, making a belief in intrinsically bad pleasures a most unstable place to stand.

Extrinsically bad pleasures seem like a much safer proposition (and much more convenient to the utilitarian who wishes to keep their ethical system free of meta-physical or meta-ethical baggage). To say that a pleasure is extrinsically bad is simply to say that to enjoy it causes so much misery that it will practically never be moral to experience it. Similar to how I described ideal utilitarian values as heavily culturally influenced, I can’t help but feel that seeing some pleasures as intrinsically bad has to be the result of some cultural conditioning.

If we can accept that certain pleasures are not intrinsically good or ill, but that many pleasures that are thought of as intrinsically good or ill are thought so because of long cultural experience – positive or negative – with the consequences of seeking them out, then we should see the position of utilitarians who believe that some pleasures cannot be counted in the plus column collapse to approximately the same as those who hold that they can, even if neither accepts the position of the other. The utilitarian who refuses to believe in intrinsically bad pleasures should still condemn most of the same actions as one who does, because she knows that these pleasures will be outweighed by the pains they inflict on others (like the pain of the torture victim overwhelming the joy of the torturer).

There is a further advantage to holding that pleasures cannot be intrinsically wrong. If we accept the post-modernist adage that knowledge is created culturally, we will remember to be skeptical of the universality of our knowledge. That is to say, if you hold a list of intrinsically bad pleasures, it will probably not be an exhaustive list and there may be pleasures whose ill-effects you overlook because you are culturally conditioned to overlook them. A more thoughtful utilitarian who doesn’t take the short-cut of deeming some pleasures intrinsically bad can catch these consequences and correctly advocate against these ultimately wrong actions.

The penultimate collapse is perhaps the least well supported by arguments. In a scant page, Smart addresses the differences between total and average happiness in a most unsatisfactory fashion. He asks which of two universes you might prefer: one with one million happy, healthy people, or one with twice as many people, equally happy and healthy. Both Smart and I feel drawn to the larger universe, but he has no arguments for people who prefer the smaller. Smart skips over the difficulties here with an airy statement of “often the best way to increase the average happiness is to increase the total happiness and vice versa”.

I’m not entirely sure this statement is true. How would one go about proving it?

Certainly, average happiness seems to miss out on the (to me) obvious good that you’d get if you could have twice as many happy people (which is clearly one case where they give different answers), but like Smart, I have trouble coming up with a persuasive argument why that is obviously good.

I do have one important thing myself to say about the difference between average and total happiness. When I imagine a world with more people who are on average less happy than the people that currently exist (but collectively experience a greater total happiness) I feel an internal flinch.

Unfortunately for my moral intuitions, I feel the exact same flinch when I imagine a world with many fewer people, who are on average transcendentally happy. We can fiddle with the math to make this scenario come out to have greater average and total happiness than the current world. Doesn’t matter. Exact same flinch.

This leads me to believe that my moral intuitions have a strong status quo bias. The presence of a status quo bias in itself isn’t an argument for either total or average utilitarianism, but it is a reminder to be intensely skeptical of our response to thought experiments that involve changing the status quo and even to be wary of the order that options are presented in.

The final collapse Smart introduces is that between regular utilitarians and negative utilitarians. Negative utilitarians believe that only suffering is morally relevant and that the most important moral actions are those that have the consequence of reducing suffering. Smart points out that you can raise both the total and average happiness of a population by reducing suffering and furthermore that there is widespread agreement on what reduces suffering. So Smart expects utilitarians of all kinds (including negative) to primarily focus on reducing suffering anyway. Basically, despite the profound philosophical differences between regular and negative utilitarians, we should expect them to behave equivalently. Which, by the very first collapse (if it walks like a duck…), shows that we can treat them as philosophical equivalents, at least in the present world.

In my experience, this is more or less true. Many of the negative utilitarians I am aware of mainly exercise their ethics by donating 10% of their income to GiveWell’s most effective charities. The regular utilitarians… do the exact same. Quack.

As far as I can tell, Smart goes to all this work to show how many forms of utilitarianism collapse together so that he can present a system that isn’t at war with itself. Being able to portray utilitarianism as a simple, unified system (despite the many ways of doing it) heads off many simple criticisms.

While I doubt many people avoided utilitarianism because there are lingering questions about total versus average happiness, per se, these little things add up. Saying “yes, there are a bunch of little implementation details that aren’t agreed upon” is a bad start to an ethical system, unless you can immediately follow it up with “but here’s fifty pages of why that doesn’t matter and you can just do what comes naturally to you (under the aegis of utilitarianism)”.

Let’s talk a bit about what comes naturally to people outside the context of different forms of utilitarianism. No one, not even Smart, sits down and does utilitarian calculus before making every little decision. For most tasks, we can ignore the ethical considerations (e.g. there is broad, although probably not universal agreement that there aren’t hidden moral dimensions to opening a door). For some others, our instincts are good enough. Should you thank the woman at the grocery store checkout? You probably will automatically, without pausing to consider if it will increase the total (or average) happiness of the world.

Like in the case of thanking random service industry workers, there are a variety of cases where we actually have pretty good rules of thumb. These rules of thumb serve two purposes. First, they allow us to avoid spending all of our time contemplating if our actions are right or wrong, freeing us to actually act. Second, they protect us from doing bad things out of pettiness or venality. If you have a strong rule of thumb that violence is an inappropriate response to speech you disagree with, you’re less likely to talk yourself into punching an odious speaker in the face when confronted with them.

It’s obviously important to pick the right heuristics. You want to pick the ones that most often lead towards the right outcomes.

I say “heuristics” and “rules of thumb” because the thing about utilitarians and rules is that they always have to be prepared to break them. Rules exist for the common cases. Utilitarians have to be on guard for the uncommon cases, the ones where breaking a rule leads to greater good overall. Having a “don’t cause people to die” rule is all well and good. But you need to be prepared to break it if you can only stop mass death from a runaway trolley by pushing an appropriately sized person in front of it.

Smart seems to think that utilitarianism only comes up for deliberative actions, where you take the time to think about them, and that it shouldn’t necessarily cover your habits. This seems like an abdication to me. Shouldn’t a clever utilitarian, realizing that she only uses utilitarianism for big decisions, spend some time training her reflexes to more often give the correct utilitarian solution, while also training herself to be more careful of her rules of thumb and think ethically more often? Smart gave no indication that he thinks this is the case.

The discussion of rules gives Smart the opportunity to introduce a utilitarian vocabulary. An action is right if it is the one that maximizes expected happiness (crucially, this is a summation across many probabilities and isn’t necessarily the action that will maximize the chance of the happiest outcome) and wrong otherwise. An action is rational if a logical being in possession of all the information you possess would think you to be right if you did it. All other actions are irrational. A rule of thumb, disposition, or action is good if it tends to lead to the right outcomes and bad if it tends to lead to the wrong ones.

This vocabulary becomes important when Smart talks about praise, which he believes is an important utilitarian concern in its own right. Praise increases people’s propensity towards certain actions or dispositions, so Smart believes a utilitarian ought to consider whether the world would be better served by more of the same before she praises anything. This leads to Smart suggesting that utilitarians should praise actions that are good or rational even if they aren’t right.

It also implies that utilitarians doing the right thing must be open to criticism if it requires bad actions. One example Smart gives is a utilitarian Frenchman cheating on wartime rationing in 1940s England. The Frenchman knows that the Brits are too patriotic to cheat, so his action (and the actions of the few others that cheat) will probably fall below the threshold for causing any real harm, while making him (and the other cheaters) happier. The calculus comes out positive and the Frenchman believes it to be the right action. Smart acknowledges that this logic is correct, but he points out that by similar logic, the Frenchman should agree that he must be severely punished if caught, so as to discourage others from doing the same thing.

This actually reminds me of something Hannah Arendt brushed up against in Eichmann in Jerusalem while talking about how the moral constraints on people are different than the ones on states. She gives the example of Soghomon Tehlirian, the Armenian exile who assassinated one of the triumvirate of Turkish generals responsible for the Armenian genocide. Arendt believes that it would have been wrong for the Armenian government to assassinate the general (had one even existed at the time), but that it was right for a private citizen to do the deed, especially given that Tehlirian did not seek to hide his crimes or resist arrest.

From a utilitarian point of view, the argument would go something like this: political assassinations are bad, in that they tend to cause upheaval, chaos, and ultimately suffering. On the other hand, there are some leaders who the world would clearly be better off without, if not to stop their ill deeds in their tracks, then to strike fear and moderation into the hearts of similar leaders.

Were the government of any country to carry out these assassinations, it would undermine the government’s ability to police murder. But when a private individual does the deed and then immediately gives herself up into the waiting arms of justice, the utility of the world is increased. If she has erred in picking her target and no one finds the assassination justified, then she will be promptly punished, disincentivizing copy-cats. If instead, like Tehlirian, she is found not guilty, it will only be because the crimes committed by the leader she assassinated were so brutal and clear that no reasonable person could countenance them. This too sends a signal.

That said, I think Smart takes his distinctions between right and good a bit too far. He cautions against trying to change the non-utilitarian morality of anyone who already tends towards good actions, because this might fail half-way, weakening their morality without instilling a new one. Likewise, he is skeptical of any attempt to change the traditions of a society.

This feels too much like trying to have your cake and eat it too. Utilitarianism can be criticized because it is an evangelical ethical system that gives results far from moral intuitions in some cases. From a utilitarian point of view, it is fairly clearly good to have more utilitarians willing to hoover up these counter-intuitive sources of utility. If all you care about are the ends, you want more people to care about the best ends!

If the best way to achieve utilitarian ends wasn’t through utilitarianism, then we’re left with a self-defeating moral system. In trying to defend utilitarianism from the weak critique that it is pushy and evangelical, both in ways that are repugnant to all who engage in cultural or individual ethical relativism and in ways that are repugnant to some moral intuitions, Smart opens it up to the much stronger critique that it is incoherent!

Smart by turns seems to seek to rescue some commonly held moral truths when they conflict with utilitarianism while rejecting others that seem no less contradictory. I can hardly say that he seems keen to show utilitarianism is in fact in harmony with how people normally act – he clearly isn’t. But he also doesn’t always go all (or even part of) the way in choosing utilitarianism over moral intuitions.

Near the end of the book, when talking about a thought experiment introduced by one McCloskey, Smart admits that the only utilitarian action is to frame and execute an innocent man, thereby preventing a riot. McCloskey anticipated him, saying: “But as far as I know, only J.J.C. Smart among the contemporary utilitarians is happy to adopt this ‘solution'”.

Smart responds:

Here I must lodge a mild protest. McCloskey’s use of the word ‘happy’ surely makes me look a most reprehensible person. Even in my most utilitarian moods, I am not happy about this consequence of utilitarianism… since any injustice causes misery and so can be justified only as the lesser of two evils, the fewer the situations in which the utilitarian is forced to choose the lesser of two evils, the better he will be pleased.

This is also the man who said (much as I have) that “admittedly utilitarianism does have consequences which are incompatible with the common moral consciousness, but I tended to take the view ‘so much the worse for the common moral consciousness’.”

All this leaves me baffled. Why the strange mixture? Sometimes Smart goes far further than it seems any of his contemporaries would have. Other times, he stops short of what seems to me the truly utilitarian solution.

On the criticism that utilitarianism compels us always in moral action, leaving us no time to relax, he offers two responses. The first is that perhaps people are too unwilling to act and would be better served by being more spurred on. The second is that it may be that relaxing today allows us to do ten times the good tomorrow.

(Personally, I expect the answer is both. Many people could do more than they currently do, while many others risk burnout unless they relax more. There is a reason the law of equal and opposite advice exists. Different people need to hear different things.)

But take this and his support for rules of thumb on one side, and his support for executing the innocent man or his long spiel on how a bunch of people wireheading wouldn’t be that bad (a spiel that convinced me, I might add) on the other, and I’m left with an unclear overall picture. As an all-is-fine defence of utilitarianism, it doesn’t go far enough. As a bracing lecture about our degenerate non-utilitarian ways, it also doesn’t go far enough.

Leaving, I suppose, the sincere views of a man who pondered utilitarianism for much longer than I have. Chance is the only reason that makes sense. This would imply that sometimes Smart gives a nod to traditional morality because he’s decided it aligns with his utilitarian ethics. Other times, he disagrees. At length. Maybe Smart is a man seeking to rescue what precious moral truths he can from the house fire that is utilitarianism.

Perhaps some of my confusion comes from another confusion, one that seems to have subtly infected many utilitarians. Smart is careful to point out that the atomic belief underlying utilitarianism is general benevolence. Benevolence, note, is not altruism. The individual utilitarian matters just as much – or as little – as everyone else. Utilitarians in Smart’s framework have no obligation to run themselves ragged for another. Trading your happiness for another’s will only ever be an ethically neutral act to the utilitarian.

Or, I suspect, the wrong one. You are best placed to know yourself and best placed to create happiness for yourself. It makes sense to include some sort of bias towards your own happiness to take this into account. Or, if this feels icky to you, you could handle it at the level of probabilities. You are more likely to make yourself happy than someone else (assuming you’ve put some effort towards understanding what makes you happy). If you are 80% likely to make yourself happy for an evening and 60% likely to make someone else happy, your clear utilitarian duty is to yourself.
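To make that comparison explicit, here is a tiny worked sketch of my own, under the simplifying assumption that an evening of happiness is worth the same one unit whoever ends up getting it:

```python
# Toy expected-utility comparison for the 80% / 60% example above.
value_of_happy_evening = 1.0  # assumption: equal value whoever ends up happy
p_self, p_other = 0.8, 0.6

expected_self = p_self * value_of_happy_evening    # 0.8 expected units
expected_other = p_other * value_of_happy_evening  # 0.6 expected units

print("focus on yourself" if expected_self > expected_other else "focus on the other person")
```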

This is not a suggestion to go become a hermit. Social interactions are very rarely as zero sum as all that. It might be that the best way to make yourself happy is to go help a friend. Or to go to a party with several people you know. But I have seen people risk burnout (and have risked it myself) by assuming it is wrong to take any time for themselves when they have friends in need.

This is all my own thoughts, not Smart’s. For all of his talk of utilitarianism, he offers little advice on how to make it a practically useful system. All too often, Smart retreats to the idea of measuring the total utility of a society or world. This presents a host of problems and raises two important questions.

First, can utility be accurately quantified? Smart tries to show that different ways of measuring utility should be roughly equivalent in qualitative terms, but it is unclear if this follows at a quantitative level. Stability analysis (where you see how sensitive your result is to different starting assumptions) is an important tool for checking the veracity of conclusions in engineering projects. I have a hunch that quantitatively, utilitarian results to many problems will be highly unstable when a variety of forms of utilitarianism are tried.
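A toy illustration of the kind of stability check I have in mind; the options, scores, and scoring rules below are entirely invented, and the point is only that the “best” action can flip when you vary the starting assumptions:

```python
# Invented numbers: check whether the top-ranked option changes under different
# ways of scoring utility (a crude stand-in for different forms of utilitarianism).
options = {
    "fund a clinic":  {"pleasure": 5, "preferences_met": 3, "suffering_averted": 8},
    "fund a theatre": {"pleasure": 7, "preferences_met": 6, "suffering_averted": 1},
}

scoring_assumptions = {
    "hedonic":    lambda o: o["pleasure"],
    "preference": lambda o: o["preferences_met"],
    "negative":   lambda o: o["suffering_averted"],
}

for name, score in scoring_assumptions.items():
    best = max(options, key=lambda k: score(options[k]))
    print(f"under {name} assumptions, the best option is: {best}")
```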

Second, how should we deal with utility in the future? Smart claims that beyond a certain point we can ignore side effects (as unintended good side effects should cancel out unintended ill side effects; this is especially important when it comes to things like saving lives) but that doesn’t give us any advice on how we can estimate effects.

We are perhaps saved here by the same collapse that aligned normal utilitarians with negative utilitarians. If we cannot quantify joy, we can surely quantify misery. Doctors can tell you just how much quality of life a disease can sap (there are tables for this), not to mention the chances that a disease might end a life outright. We know the rates of absolute poverty, maternal deaths, and malaria prevalence. There is more than enough misery in the world to go around and certainly utilitarians who focus on ending misery do not seem to be at risk of running out of ethical duties any time in the near future.

(If ending misery is important to you, might I suggest donating a fraction of your monthly income to one of GiveWell’s top recommended charities? These are the charities that most effectively use money to reduce suffering. If you care about maximizing your impact, GiveWell is a good way to do it.)

Although speaking of the future, I find it striking how little utilitarianism has changed in the fifty-six years since Smart first wrote his essay. He pauses to comment on the risk of a recursively self-improving AI and to talk about the potential future moral battles over factory farming. I’m part of a utilitarian meme group and these are the same topics people joke about every day. It is unclear if these are topics that utilitarianism predisposes people to care about, or if there was some indirect cultural transmission of these concerns over the intervening years.

There are many more gems – and frustrations – in Smart’s essay. I can’t cover them all without writing a pale imitation of his words, so I shan’t try any more. As an introduction to the different types of utilitarianism, this essay was better than any other introduction I’ve read, especially because it shows all of the ways that various utilitarian systems fit together.

As a defense of utilitarianism, it is comprehensive and pragmatic. It doesn’t seek to please everyone and doesn’t seek to prove utilitarianism. It lays out the advantages of utilitarianism clearly, in plain language, and shows how the disadvantages are not as great as might be imagined. I can see it being persuasive to anyone considering utilitarianism, although in this it is hampered by its position as the first essay in the collection. Anyone convinced by it must then read through another seventy pages of arguments against utilitarianism, which will perhaps leave them rather less convinced.

As a work of academic philosophy, it’s interesting. There’s almost no meta-ethics or meta-physics here. This is a defense written entirely on its own, without recourse to underlying frameworks that might be separately undermined. Smart’s insistence on laying out his arguments plainly leaves him little room to retreat (except around average vs. total happiness). I’ve always found this a useful type of writing; even when I don’t agree, the ways that I disagree with clearly articulated theses can be illuminating.

It’s a pleasant read. I’ve had mostly good luck reading academic philosophy. This book wasn’t a struggle to wade through and it contained the occasional amusing turn of phrase. Smart is neither dry lecturer nor frothing polemicizer. One is put almost in the mind of a kindly uncle, patiently explaining his way through a complex, but not needlessly complicated subject. I highly recommend reading it and its companion.

Model, Physics, Science

Understanding Radiation via Antennas

It can be hard to grasp that radio waves, deadly radiation, and the light we can see are all the same thing. How can electromagnetic (EM) radiation – photons – sometimes penetrate walls and sometimes not? How can some forms of EM radiation be perfectly safe and others damage our DNA? How can radio waves travel so much further than gamma rays in air, but no further through concrete?

It all comes down to wavelength. But before we get into that, we should at least take a glance at what EM radiation really is.

Electromagnetic radiation takes the form of two orthogonal waves. In one direction, you have an oscillating magnetic field. In the other, an oscillating electric field. Both of these fields are orthogonal to the direction of travel.

These oscillations take a certain amount of time to complete, a time which is calculated by observing the peak value of one of the fields and then measuring how long it takes for the field to return to that value. Luckily, we only need to do this once, because the time an oscillation takes (called the period) will stay the same unless acted on by something external. You can invert the period to get the frequency – the number of times oscillations occur in a second. Frequency uses the unit Hertz, which is just inverted seconds. If something has a frequency of 60Hz, it happens 60 times per second.

EM radiation has another nifty property: it always travels at the same speed, a speed commonly called “the speed of light” [1] (even when applied to EM radiation that isn’t light). When you know the speed of an oscillating wave and the amount of time it takes for the wave to oscillate, you can calculate the wavelength. Scientists like to do this because the wavelength gives us a lot of information about how radiation will interact with the world. It is common practice to represent wavelength with the Greek letter lambda (λ).

lambda class shuttle from star wars
Not that type of lambda. Image Credit: Marshal Banana on Flickr

Put in a more mathy way: if you have an event that occurs with frequency f to something travelling at velocity v, the event will have a spatial periodicity λ (our trusty wavelength) equal to v / f. For example, if you have a sound that oscillates at 34Hz (this frequency is equivalent to the lowest C♯ on a standard piano) travelling at 340m/s (the speed of sound in air), it will have a wavelength of (340 m/s) / (34 s⁻¹) = 10m. I’m using sound here so we can use reasonably sized numbers, but the results are equally applicable to light or other forms of EM radiation.

Wavelength and frequency are inversely related to each other. The higher the frequency of something, the smaller its wavelength. The longer the wavelength, the lower the frequency. I’m used to people describing EM radiation in terms of frequency when they’re talking about energy (the quicker something is vibrating, the more energy it has) and wavelength when talking about what it will interact with (the subject of the rest of this post).
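To tie period, frequency, and wavelength together, here is a minimal Python sketch of my own; the frequencies are rough, representative values chosen for illustration, not measurements:

```python
SPEED_OF_LIGHT = 3.0e8  # m/s, approximate

def wavelength(frequency_hz, speed=SPEED_OF_LIGHT):
    """lambda = v / f"""
    return speed / frequency_hz

# Approximate, representative frequencies
examples = {
    "FM radio (~100 MHz)": 100e6,
    "microwave oven (~2.45 GHz)": 2.45e9,
    "visible light (~600 THz)": 600e12,
    "gamma ray (~3e19 Hz)": 3e19,
}

for name, f in examples.items():
    period = 1 / f       # seconds per oscillation (the inverse of frequency)
    lam = wavelength(f)  # metres
    print(f"{name}: period = {period:.2e} s, wavelength = {lam:.2e} m")
```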

With all that background out of the way, we can actually “look” at electromagnetic radiation and understand what we’re seeing.

animated gif showing oscillating magnetic and electric fields orthogonal to direction of travel
Here wavelength is labeled with “λ”, the electric field is red and labelled with “E” and the magnetic field is blue and labelled with “B”. “B” is the standard symbol for magnetic fields, for reasons I have never understood. Image Credit: Lookang on Wikimedia Commons.

Wavelength is very important. You know those big TV antennas houses used to have?

picture of house with old fashioned aerial antenna
Image Credit: B137 on Wikimedia Commons

Turns out that they’re about the same size as the wavelength of television signals. The antenna on a car? About the same size as the radio waves it picks up. Those big radio telescopes in the desert? Same size as the extrasolar radio waves they hope to pick up.
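As a rough illustration of why these antennas end up the sizes they do: a simple dipole antenna resonates when it is about half a wavelength long. Here’s a quick sketch of my own (the half-wave rule of thumb and the example frequencies are simplifications; real antenna design involves further corrections):

```python
SPEED_OF_LIGHT = 3.0e8  # m/s, approximate

def half_wave_dipole_length_m(frequency_hz):
    """A simple dipole antenna resonates at roughly half the wavelength it receives."""
    return (SPEED_OF_LIGHT / frequency_hz) / 2

# Approximate, representative frequencies
print(half_wave_dipole_length_m(60e6))   # VHF television (~60 MHz): ~2.5 m rooftop aerial
print(half_wave_dipole_length_m(100e6))  # FM radio (~100 MHz): ~1.5 m car whip antenna
```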

image of the VLA radio telescopes
Fun fact: these dishes together make up a very large radio telescope, unimaginatively called the “Very Large Array”. Image Credit: Hajor on Wikimedia Commons

Even things we don’t normally think of as antennas can act like them. The rod and cone cells in your eyes act as antennas for the light of this very blog post [2]. Chains of protein or water molecules act as antennas for microwave radiation, often with delicious results. The bases in your DNA act as antennas for UV light, often with disastrous results.

These are just a few examples, not an exhaustive list. For something to be able to interact with EM radiation, you just need an appropriately sized system of electrons (or electrical system; the two terms imply each other). You get this system of electrons more or less for free with metal. In a metal, all of the electrons are delocalized, making the whole length of a metal object one big electrical system. This is why the antennas in our phones or on our houses are made of metal. It isn’t just metal that can have this property though. Organic substances can have appropriately sized systems of delocalized electrons via double bonding [3].

EM radiation can’t really interact with things that aren’t the same size as its wavelength. Interaction with EM radiation takes the form of the electric or magnetic field of a photon altering the electric or magnetic field of the substance being interacted with. This happens much more readily when the fields are approximately similar sizes. When fields are the same size, you get an opportunity for resonance, which dramatically decreases the loss in the interaction. Losses for dissimilar sized electric fields are so high that you can assume (as a first approximation) that they don’t really interact.

In practical terms, this means that a long metal rod might heat up if exposed to a lot of radio waves (wavelengths for radio waves vary from 1mm to 100km; many are a few metres long due to the ease of making antennas in that size) because it has a single electrical system that is the right size to absorb energy from the radio waves. A similarly sized person will not heat up, because there is no single part of them that is a unified electrical system the same size as the radio waves.

Microwaves (with wavelengths ranging from about a millimetre up to a metre, despite the name) might heat up your food, but they won’t damage your DNA (nanometres in width). They’re much larger than individual DNA molecules. Microwaves are no more capable of interacting with your DNA than a giant would be of picking up a single grain of rice. Microwaves can hurt cells or tissues, but they’re incapable of hurting your DNA and leaving the rest of the cell intact. They’re just too big. Because of this, there is no cancer risk from microwave exposure (whatever paranoid hippies might say).

Gamma rays do present a cancer risk. They have a wavelength (about 10 picometres) that is similar in size to electrons. This means that they can be absorbed by the electrons in your DNA, kicking these electrons out of their homes and leading to chemical reactions that change your DNA and can ultimately lead to cancer.

Wavelength explains how gamma rays can penetrate concrete (they’re actually so small that they miss most of the mass of concrete and only occasionally hit electrons and stop) and how radio waves penetrate concrete (they’re so large that you need a large amount of concrete before they’re able to interact with it and be stopped [4]). Gamma rays are stopped by the air because air contains electrons (albeit sparsely) that they can hit and be stopped by. Radio waves are much too large for this to be a possibility.

When you’re worried about a certain type of EM radiation causing cancer, all you have to do is look at its wavelength. Any wavelength smaller than that of ultraviolet light (about 400nm) is small enough to interact with DNA in a meaningful way. Anything larger is unable to really interact with DNA and is therefore safe.
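Here is that rule of thumb written out as a toy function (my own sketch of the heuristic above, not a real dosimetry calculation):

```python
UV_THRESHOLD_M = 400e-9  # ~400 nm, the ultraviolet cutoff mentioned above

def can_meaningfully_interact_with_dna(wavelength_m):
    """The heuristic above: only wavelengths at or below the UV scale
    are small enough to interact with DNA in a meaningful way."""
    return wavelength_m <= UV_THRESHOLD_M

print(can_meaningfully_interact_with_dna(3.0))      # FM radio wave (~3 m): False
print(can_meaningfully_interact_with_dna(0.12))     # microwave oven (~12 cm): False
print(can_meaningfully_interact_with_dna(250e-9))   # ultraviolet (~250 nm): True
print(can_meaningfully_interact_with_dna(10e-12))   # gamma ray (~10 pm): True
```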

Epistemic Status: Model. Looking at everything as an antenna will help you understand why EM radiation interacts with the physical world the way it does, but there is a lot of hidden complexity here. For example, eyes are far from directly analogous to antennas in their mechanism of action, even if they are sized appropriately to be antennas for light. It’s also true that at the extreme ends of photon energy, interactions are based more on energy than on size. I’ve omitted this in order to write something that isn’t entirely caveats, but be aware that it occurs.

Footnotes:

[1] You may have heard that the speed of light changes in different substances. Tables will tell you that the speed of light in water is only about ¾ of the speed of light in air or vacuum and that the speed of light in glass is even slower still. This isn’t technically true. The speed of light is (as far as we know) cosmically invariant – light travels the same speed everywhere in the galaxy. That said, the amount of time light takes to travel between two points can vary based on how many collisions and redirections it is likely to get into between two points. It’s the difference between how long it takes for a pinball to make its way across a pinball table when it hits nothing and how long it takes when it hits every single bumper and obstacle. ^

[2] This is a first approximation of what is going on. Eyes can be modelled as antennas for the right wavelength of EM radiation, but this ignores a whole lot of chemistry and biophysics. ^

[3] The smaller the wavelength, the easier it is to find an appropriately sized system of electrons. When your wavelength is the size of a double bond (0.133nm), you’ll be able to interact with anything that has a double bond. Even smaller wavelengths have even more options for interactions – a wavelength that is well sized for an electron will interact with anything that has an electron (approximately everything). ^

[4] This interaction is actually governed by quantum mechanical tunneling. Whenever a form of EM radiation “tries” to cross a barrier larger than its wavelength, it will be attenuated by the barrier. The equation that describes the probability distribution of a particle (the photons that make up EM radiation are both waves and particles, so we can use particle equations for them) is approximately ψ(x) ≈ e^(ikx) (I say approximately because I’ve simplified all the constants into a single term, k), which becomes ψ(x) ≈ e^(-k1x) (here I’m using k1 to imply that the constant will be different), the equation for exponential decay, when the energy (to a first approximation, length) of the substance is higher than the energy (read size of wavelength) of the light.

This equation shows that there can be some probability – occasionally even a high probability – of the particle existing on the other side of a barrier.  All you need for a particle to traverse a barrier is an appropriately small barrier. ^
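To make the exponential decay in this footnote concrete, here is a toy calculation of my own; the decay constant k1 below is an arbitrary made-up value, since the real one depends on the barrier material and the photon’s energy:

```python
import math

K1 = 50.0  # per metre; an arbitrary illustrative constant, not a real material property

def relative_amplitude(barrier_thickness_m, k1=K1):
    """Exponential decay inside a barrier: amplitude falls off as e^(-k1 * x)."""
    return math.exp(-k1 * barrier_thickness_m)

for thickness in (0.01, 0.05, 0.10):  # metres of barrier
    print(f"{thickness:.2f} m barrier -> relative amplitude {relative_amplitude(thickness):.4f}")
```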

Ethics, Philosophy

Utilitarianism: An Overview

What is a utilitarian?

To answer that question, you have to think about another, namely: “what makes an action right?”

Is it the outcome? The intent? What is a good intent or a good outcome?

Kantian deontologists have pithy slogans like: “I ought never to act except in such a way that I could also will that my maxim should become a universal law” or “an action is morally right if done for duty and in accordance with duty”.

Virtue ethicists have a rich philosophical tradition that dates back (in Western philosophy) to Plato and Aristotle.

And utilitarians have math.

Utilitarianism is a subset of consequentialism. Consequentialism is the belief that only the effects of an action matter. This belief lends itself equally well to selfish and universal ethical systems.

When choosing between two actions, a selfish consequentialist (philosophers and ethicists would call such a person an egoist) would say that the morally superior action is the one that brings them the most happiness.

Utilitarians would say that the morally superior option is the one that brings the most ______ to the world/universe/multiverse, where ______ is whatever measure of goodness they’ve chosen. The fact that the world/universe/multiverse is the object of optimization is where the math comes in. It’s often pretty hard to add up any measure of goodness over a set as large as a world/universe/multiverse.

It’s also hard to define goodness in the abstract without lapsing into tautology (“how does it represent goodness?” – “well it’s obvious, it’s the best thing!”). Instead of looking at it in the abstract, it’s helpful to look at utilitarian systems in action.

What quality people choose as their ethical barometer/best measure of the goodness of the world tells you a lot about what they value. Here are four common ones. As you read them, consider both what implicit values they encode and which ones call out to you.

QALY Utilitarianism

QALY Utilitarianism is most commonly seen in discussions around medical ethics, where QALYs are frequently used to determine the optimal allocation of resources. One QALY represents one year of reasonably healthy and happy life. Any condition which reduces someone’s enjoyment of life results in those years so blighted being weighed as less than one full QALY.

For example, a year living with asthma is worth 0.9 QALYs. A year with severe seizures is worth 0.7 QALYs.

Let’s say we have a treatment for asthma that costs $1000 and another for epilepsy that costs $1000. If we only have $1000, we should treat the epilepsy (this leads to an increase of 0.3 QALYs, more than the 0.1 QALYs we’d get for treating asthma).

If we have more money, we should treat epilepsy until we run out of epileptic patients, then use the remaining money for asthma.

Things become more complicated if the treatments cost different amounts of money. If it is only $100 to treat asthma, then we should instead prioritize treating asthma, because $1000 of treatment buys us 1 QALY, instead of only 0.3.
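To make the cost-effectiveness logic explicit, here is a toy sketch using the illustrative numbers above (including the revised $100 asthma treatment from the last paragraph); none of these figures are real clinical data:

```python
# Toy QALY cost-effectiveness ranking using the illustrative numbers above.
treatments = [
    {"name": "asthma treatment",   "cost": 100.0,  "qaly_gain": 0.1},  # 0.9 -> 1.0 QALYs/year
    {"name": "epilepsy treatment", "cost": 1000.0, "qaly_gain": 0.3},  # 0.7 -> 1.0 QALYs/year
]

budget = 1000.0

# Rank treatments by QALYs gained per dollar spent, best first.
for t in sorted(treatments, key=lambda t: t["qaly_gain"] / t["cost"], reverse=True):
    per_dollar = t["qaly_gain"] / t["cost"]
    patients_treatable = int(budget // t["cost"])
    print(f"{t['name']}: {per_dollar:.4f} QALYs/$; "
          f"${budget:.0f} treats {patients_treatable} patients "
          f"for {patients_treatable * t['qaly_gain']:.1f} QALYs")
```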

Note that QALY utilitarianism (and utilitarianism in general) doesn’t tell us what is right per se. It only gives us a relative ranking of actions. One of those actions may produce the most utility. But that doesn’t necessarily mean that the only right thing to do is constantly pursue the actions that produce the very most utility.

QALY utilitarianism remains most useful in medical science, where researchers have spent a lot of time figuring out the QALY values for many potential conditions. Used with a set of accurate QALY tables, it becomes a powerful way to ensure cost effectiveness in healthcare. QALY utilitarianism is less useful when we lack these tables and therefore remains sparsely used for non-healthcare related decisions.

Hedonistic Utilitarianism

Hedonistic utilitarianism is much more general than QALY utilitarianism, in part because its value function is relatively easy to calculate.

It is almost a tautology to claim that people wish to seek out pleasure and avoid pain. If we see someone happy about an activity we think of as painful, it’s much more likely that we’re incorrectly assessing how pleasurable/painful they find it than it is that they find the activity as painful as we do.

Given how common pleasure-seeking/pain-avoiding is, it’s unsurprising that pleasure has been associated with The [moral] Good and pain with The [moral] Bad at least since the time of Plato and Socrates.

It’s also unsurprising that pleasure and pain can form the basis of utilitarian value functions. This is Hedonistic Utilitarianism and it judges actions based on the amount of net pleasure they cause across all people.

Weighing net pleasure across all people gives us some wiggle room. Repeatedly taking heroin is apparently really, really pleasurable. But it may lead to less pleasure overall if you quickly die from a heroin overdose, leaving behind a bereaved family and preventing all the other pleasure you could have had in your life.

So the hedonistic utilitarian value function probably doesn’t assign the highest rating to getting everyone in the world blissed out on the most powerful drugs available.

But even ignoring constant drug use, or other descents into purely hedonistic pleasures, hedonistic utilitarianism often frustrates people who place a higher value on actions that may produce less direct pleasure, but leave them feeling more satisfied and contented overall. These people are left with two options: they can argue for ever more complicated definitions of pleasure and pain, taking into account the hedonic treadmill and the hedonistic paradox, or they can pick another value function.

Preference Utilitarianism

Preference utilitarianism is simple on the surface. Its value function is supposed to track how closely people’s preferences are fulfilled. But there are three big problems with this simple framing.

First, which preferences? I may have the avowed preference to study for a test tomorrow, but once I sit down to study my preference may be revealed to be procrastinating all night. Which preference is more important? Some preference utilitarians say that the true preference is the action you’d pick in hindsight if you were perfectly rational. Others drop the “perfectly rational” part, but still talk about preferences in terms of what you’d most want in hindsight. Another camp gives credence to the highest-level preference over all the others. If I prefer in the moment to procrastinate but would prefer to prefer to study, then the meta-preference is the one that counts. And yet another group of people give the most weighting to revealed preferences – what you’d actually do in the situation.

It’s basically a personal judgement call as to which of these groups you fall into, a decision which your own interactions with your preferences will heavily shape.

The second problem is even thornier. What do we do when preferences collide? Say my friend and I go out to a restaurant. She may prefer that we each pay for our own meals. I may prefer that she pays for both of our meals. There is no way to satisfy both of our preferences at the same time. Is the most moral outcome assuaging whoever holds their preferences the most strongly? Won’t that just incentivize everyone to hold their preferences as strongly as humanly possible and never cooperate? If enough people hold a preference that a person or a group of people should die, does it provide more utility to kill them than to let them continue living?

One more problem: what do we do with beings that cannot hold preferences? Animals, small children, foetuses, and people in vegetative states are commonly cited as holding no preferences. Does this mean that others may do whatever they want with them? Does it always produce more utility for me to kill any animal I desire to kill, given it has no preferences to balance mine?

All of these questions remain inconclusively answered, leaving each preference utilitarian to decide for herself where she stands on them.

Rule Utilitarianism

The three previous forms of utilitarianism are broadly grouped together (along with many others) under act utilitarianism. But there is another way and a whole other class of value functions. Meet rule utilitarianism.

Rule utilitarians do not compare actions and outcomes directly when calculating utility. Instead they come up with a general set of rules which they believe promotes the most utility generally and judge actions according to how well they satisfy these rules.

Rule utilitarianism is similar to Kantian deontology, but it still has a distinctly consequentialist flavour. It is true that both of these systems result (if followed perfectly) in someone rigidly following a set of rules without making any exceptions. The difference, however, is in the attitude of the individual. Whereas Kant would call an action good only if done for the right reasons, rule utilitarians call actions that follow their rules good regardless of the motivation.

The rules that arise can also look different from Kantian deontology, depending on the beliefs of the person coming up with the rules. If she’s a neo-reactionary who believes that only autocratic states can lead to the common good, she’ll come up with a very different set of rules than Immanuel Kant did.

First Order Utilitarianism?

All of the systems described here are what I’ve taken to calling first order utilitarianism. They only explicitly consider the direct effects of actions, not any follow-on effects that may happen years down the road. Second-order utilitarianism is a topic for another day.

Other Value Functions?

This is just a survey of some of the possible value functions a utilitarian can have. If you’re interested in utilitarianism in principle but feel like all of these value functions are lacking, I encourage you to see what other ones exist out there.

I’m going to be following this post up with a post on precedent utilitarianism, which solved this problem for me.

Epistemic Status: Ethics