Many, including me, have relied on Max Weber’s definition of a state as “the rule of men over men based on the means of legitimate, that is allegedly legitimate, violence”. I thought that violence was synonymous with power and that the best we could hope for was a legitimate exercise of violence, one that was proportionate and used only as a last resort.
I have a blog post about state monopolies on violence because of Hannah Arendt. Her book Eichmann in Jerusalem: A Report on the Banality of Evil was my re-introduction to moral philosophy. It, more than any other book, has informed this blog. To Arendt, thinking and judging are paramount. It is not so much, to her, that the unexamined life is not worth living. It is instead that the unexamined life exists in a state of mortal peril, separated only by circumstances from becoming one of the “good Germans” who did nothing as their neighbours were murdered.
This blog is my attempt to think and to judge. To take moral positions, so that I am in the habit of it.
It’s a vulnerable spot, to stake out a position. You must always live with the risk of being later proved wrong. Or, perhaps worse, having been proved wrong before you even set pen to paper (or pixels to screen).
In her essay On Violence, Hannah Arendt demolished the premises upon which I based my own essay on how states should use their monopoly on violence. It’s rare that I get to see my own work so completely rendered useless. I found the process both useful and humbling.
On Violence is divided into three sections. In the first, Arendt covers how violence has been used and thought about in the decade preceding her essay (it was published in 1969). In the second, she lays out new definitions and models for strength, violence, power, and authority and challenges the definitions used by the great thinkers of the past. In the final section, she re-examines the recent events of her time in light of her definitions and discusses the promise and danger of power and violence.
So, enter the end of the 1960s. The past decade has seen student sit-ins and protests at practically every university. It has seen the end of official segregation and the ongoing struggles of the civil rights movement. In Europe, a military coup toppled the French Fourth Republic and liberalization in Czechoslovakia led to an invasion by Soviet tanks. In Vietnam, America took up France’s failing war and found itself unable to defeat a small cadre of revolutionaries.
Against this backdrop, Arendt remarks on the most dangerous fact of all: that through our artifice, we have attained the means (i.e. nuclear weapons) to destroy ourselves. There is, Arendt remarks, an age-old conflict between means and ends, in that means always threaten to overshadow the ends they seek to bring about.
Given that there is always an element of chance when it comes to attaining our ends, nuclear weapons mark the development of a new era, where means dominate ends because all means are so terrifying and all ends so uncertain. When you asked a youth in the 1960s where they hoped to be in the future, they would always preface an answer with “well, assuming I am still alive…”.
None of this was made more comforting by the many commonplace myths Arendt identified. Among the think tanks and the military industrial complex, she saw a tendency to transmute hypotheses into reality, to believe that possibilities identified using only reason (and no evidence) could become universal truths; the people in charge of the nuclear weapons did not believe their ends to be at all uncertain, despite all evidence to the contrary. Among the left, she noticed a glorification of violence that had no place in the texts of Marx (let alone in a movement supposedly built on freedom and compassion). The left, Arendt worried, was imbuing violence with all sorts of properties that it had never had, like ‘creativity’, or ‘the ability to heal’.
It is important to note that Arendt had no time for talk of violent revolutions. To her (as she claims, it was with Marx), “dreams never come true”; violence against an oppressor was just violence, not a transformative force capable of launching a new era. In this, she had the weight of recent bitter history on her side, as the communist revolutions were revealed to have brought about nothing but tyranny.
It is only after laying out this tortured landscape, full of pitfalls and dangers, that Arendt turned to the philosophy of violence, the main purpose of this essay.
The first part of this examination is an observation: philosophers and politicians, from the left to the right, have, for a long time, identified violence as a mere outgrowth or component of power. Arendt trots out a dizzying array of quotes, all as plausible as the Max Weber quote I opened with but coming from the likes of C. Wright Mills, Sartre, Sorel, Jouvenel, Voltaire, von Clausewitz, Mao Zedong, John Stuart Mill, and Hobbes.
It is against all of these definitions, which equate power with violence (and especially coercive violence that propagates the will of whomever wields it) that Arendt stands. She instead seeks a positive power in the philosophy (seldom actually achieved) of the revolutions of the 1700s (and the earlier ideal of polis life, deeply flawed as it was in practice), which viewed government of “man over man” as no fit way to live. In this framework, she identifies power, as distinct from violence, with “the rules of the game”, the set of socially acceptable actions. If you step outside of these rules, power manifests as social consequences: entreaties to change, glares, angry words, and in the extreme case, shunning.
This definition is not non-coercive. To social creatures like us, social punishments are real punishments. They may not be violence, but they can still act to change our will; or even to shape what we can will.
What prevents the “rules of the game” from being a tyranny (albeit a tyranny with majority support) of another name is some sort of democracy, some ability for people broadly to gain power and push; the chance to have a hand in writing the rules we all must play by. To use the language of the great revolutions of the 1700s, this is “the consent of the governed”.
If you doubt the existence of power as Arendt defines it, I challenge you to go to some public place and violate its norms. Any sufficient violation of norms should see the public exercise their power on you and will probably force you to stop. It is intensely hard for us humans to go against the will of a group, especially if that group makes its displeasure known. And it rarely even needs to come to anything as overt as glares; power is invisible, until you sense its boundaries. It’s a rare person who can act, knowing that they will immediately face intense social censure for their actions. It’s recognizing this, when so few others have, that marks Arendt’s brilliance.
(Interestingly, if you were to complete this challenge, the norms that you violate would most likely be norms that you otherwise agree with. The rules of the game are supposed to exist to make us feel happy and satisfied, able to interact with each other without fear. Personhood is an interface that carries expectations in order to receive recognition.)
Power will always be less absolute than violence. You obey a criminal with a gun far more readily than you obey the law, because the criminal (or rather, the gun) has an immediacy that power does not possess. Therefore, a law without popular support can be enforced, but only at the barrel of the gun. The violence of the enforcement will overwhelm the power of the majority.
Note the use of majority here, because that word is important in Arendt’s conception; to her, power will always require a majority. From this and from the immediacy of violence, it follows that the only way a minority can enforce their will on a majority is via violence.
Once you conceive of power as “the simple rules of the game”, it is clear how much weaker the tyrant is than the body politic. Tyranny falls apart as soon as its few enforcers refuse to wield the weapons necessary for its survival, because there is no backup, nothing else, that can maintain it. Power can survive the complete annihilation of the government, because the government is its mere outgrowth, not its heart.
That said, if we are concerned with the ability of tyrants to rule through violence, we should be fearful of the continual improvements we are making to the implements of violence. It is not, as you might think, simply that the implements have become more destructive. There is as much space between the knight and the peasant with a pitchfork as there is between the man with a rifle and the stealth bomber, which is to say that the tyrant has always outclassed the revolutionary.
The true danger is rather how modern implements of violence allow the tyrant to shrink their inner circle and yet still maintain their monopoly on violence. Automation has made violence more efficient, not yet to the pathological case where one man with a button and an army of robots can hold a whole nation in fear, but there is a sense we are fast approaching that terrifying state.
If tyranny shows how violence can unmake power, it is rebellions that show how power can overshadow violence. Rebellions are successful when the state has lost its grip on power, not when the rebels win on the battlefield. Armed rebellions are often made needless by the very fact of their existence, because rebels can only arm themselves when the gatekeepers of weapons decide they no longer wish to support the state. When the army refuses the demands of the strongman, the regime is already over. Armed rebellions succeed more because they erode the power of the state to the point where no one will back it than as a result of any decisive war of manoeuvres.
There is, of course, room for state violence outside of the extremes. Like in the case of tyranny, Arendt considers state violence to be the opposite of state power. It emerges only when power has failed (e.g. when power alone is not enough to keep a criminal “playing by the rules of the game”) or when power is breaking down (e.g. the police being called on to disperse protestors marching on the government). Because of this, Arendt believes that (democratic) states should not be defined by violence, which is only theirs in exigency.
The interaction between power and violence is a topic Arendt returns to over and over in this section. She also believes that violence flips power on its head (“the extreme form of power is All against One, the extreme form of violence is One against All”) – and steadily erodes it. I’m not entirely sure what the mechanism is supposed to be here though; it could be that when everyone sees violence as the quickest way to their ends, the structures of power – the incentive to play by the rules of the game in order to change them – disappear. Or it could be that violence leads to violence in return, as everyone tries to protect themselves without being able to resort to power. Regardless, the outcome is the same.
Terror is the result of violence that destroys all power and then fails to abdicate. The Soviet government provides one of the clearest examples of terror. After it shattered society, it seeded it with informants. This meant that no one could seek out others to organize power, because there was always the fear that you might be conspiring with an informant. Russia, I think, is still grappling with this total destruction of all power. It is unclear to me if it is at all capable of returning to rule based on power, rather than (in some part, at least) violence.
Nonviolent resistance movements, like Gandhi’s, work only when the government is scared of the corrosive effects of violence. Sit-ins and salt marches would have been met with massacres if used against the Soviets or Nazis, but against a British government that feared the results of becoming reliant on violence, they were successful.
(The British were right to fear violence. After all, it was soldiers tasked with “pacifying” the colonies that launched the coup d’état that ended the French Fourth Republic. Arendt strongly believed that relying on violence abroad would erode power at home, probably as a result of this experience, not to mention the violence used to quell anti-war demonstrators in America.)
These ideas provide the conceptual framework for Arendt to re-examine what was then recent history and justify why the theorist still has a right to talk about these things.
Arendt pauses to explain that she feels the need to justify her right to speak on these subjects because of what she claims is an ongoing tendency to explain human behaviour in terms of animal behaviour. Scientists, says Arendt, are increasingly expanding the range of behaviours considered “natural”, which is to say, the same as other animals would exhibit. Tied into this is a nascent and seldom-spoken belief that reason requires us to sever some of these vestiges of our animal nature.
Arendt disagrees strenuously with both the premise and the prescription. First, she believes that it is wrong to say that we are proved to be more and more like animals. Instead, it is more correct to say that animals are proved to be more and more like us. It is still we who have the singular faculty for reason, but it is certainly amusing and interesting to see all of the ways in which we are not as alone upon our pedestal as we once assumed.
(I think she makes this distinction because if we are like animals, then the study of human nature belongs to the biologist. But if animals are like us, then human nature is still the domain of the philosopher. It’s a subtle difference, but to her, a very important one.)
When it comes to removing human capacity – like for rage – Arendt sees nothing but dehumanization. Rage, she explains, can be rational. We rage when we suspect something could be done but it is not. Rage is turned not against the volcano, but against the heavens for failing to prevent it, or the government for failing to protect us.
(I have been known to view critiques of science like this, from non-scientists, with suspicion. I think Arendt gets a pass because it is clear that her disagreements with science aren’t based on a fear of science disproving one of her specific political positions. Arendt is good at this in general; in an appendix, she cautions against a scientific meritocracy without using any of the tired and silly arguments people normally resort to.)
Rage and violence can also be a rational reaction to hypocrisy (if reason is a trap, why step into it?), although Arendt is quick to point out that this can backfire in two ways (when seeking out hypocrisy becomes an end in itself, as during The Terror; when violence is used to provoke violence and therefore “reveal” a hypocrisy that never existed).
To be honest, I’m not sure many people are arguing that scientists should remove fundamental characteristics of people anymore. But it strikes me as the sort of thing people plausibly could have argued about in the past. And it seemed worth noting that Arendt sees a (limited) role for violence or anger in politics (although it is also worth noting that she views violence per se as outside of the political sphere, because it has nothing to do with power). And finally, I should mention that like practically everyone, she views violence in self-defence as justified.
But Arendt does find many justifications of violence to be foolish. She cautions against “natural” metaphors for power, those that associate it with outward growth and fecundity. Once you accept these, she believes, you also accept that violence has the power of renewal. Violence clears away the bounds on power and breathes new life into it by allowing it to expand again (imagine the analogy to a forest fire, which clears away dead wood and lets a new forest grow). Given all of the follies and pains of empire, it is clear that even if this were true (and she is not convinced that it is), it is not recommended. Power, to Arendt, is perfectly content without expansion (and indeed, violent expansion, to her, always erodes power and replaces it with violence).
Nowhere does she find violence more dangerous than with respect to racism. On racist ideologies, she says:
Racism, as distinguished from race, is not a fact of life, but an ideology, and the deeds it leads to are not reflex actions, but deliberate acts based on pseudo-scientific theories. Violence in interracial struggle is always murderous, but it is not “irrational”; it is the logical and rational consequence of racism, by which I do not mean some rather vague prejudices on either side, but an explicit ideological system.
(To make it perfectly clear, she means “rational” here to read only as internal consistency, not external consistency.)
Luckily, power can overcome prejudices. The non-violent actions of the Civil Rights Movement are one of her best examples of the fruits of power, which broke apart segregation and ended (for a time) most restrictions at the ballot box.
That said, even here Arendt sees some role for limited political violence (I am using this to mean what it normally does, but should acknowledge Arendt would view this particular word combination as an oxymoron). She acknowledges that sometimes, it is only through the violence of the radical that the moderate is given a hearing. Unfortunately, beyond cautions that violence is useful only for short-term objectives and that it is indiscriminate in its ends (that is to say, it is a poor tool for systemic change, because it is as likely to gain token concessions as real change), Arendt offers no real framework with which to evaluate when violence might be justified.
Such a framework would be especially useful when evaluating violence against bureaucracy, a major theme of the last section. Arendt identifies bureaucracy as the force with which the student movements are fighting and claims that it is tempting to resort to violence when dealing with it because bureaucracy can leave you with no one to argue with and no avenue through which to gather and use power.
It is because of this that Arendt stands against the “progressive” goal of centralization and instead prefers federalism. This is interesting to me, because Arendt is normally identified as a leftist and her writing quotes Marx heavily. It is a testament to the contempt in which she holds bureaucracy (no doubt heavily influenced by her work analyzing the bureaucracy of the Nazis) that she views striking against it as more important than the progressive priorities that can be attained via centralization and bureaucracy.
Or perhaps it is just that Arendt’s leftist views are actually quite heterodox; there’s certainly a way to read her that suggests hostility to the welfare state and a preference (perhaps for reasons grounded in a desire to promote virtue and human connection?) for communal charity on a more local scale as a replacement.
Arendt acknowledges that bureaucracy has made the “impossible possible” (e.g. the landings on the moon), but she believes that this has come at the cost of making daily tasks (like governing) impossible.
To this conundrum, she offers no answer. This, I think, is very characteristic of Arendt. It’s very easy to see what she opposes, but hard to find a model of government for which she advocates. I often find her criticism incredibly insightful, so this curious stopping short, her refusal to recommend any specific action, is often frustrating.
As it is, all I’m left with are fears. The trends she laid out – the dangers of our means overshadowing our ends and the ossification that comes with bureaucracy – have not gone away. If anything, they’ve intensified. And while this book gave me a new model of power and violence, I’m not quite sure what to do with it.
But then, Arendt would probably say there’s no point in trying to do something with it alone. Power can only come in groups. And we, her students, are probably supposed to talk with others, to share our concerns, and to think about what we can do together, to keep the world running a little longer.
The modern field of linguistics dates from 1786, when Sir William Jones, a British judge sent to India to learn Sanskrit and serve on the colonial Supreme Court, realized just how similar Sanskrit was to Persian, Latin, Greek, Celtic, Gothic, and English (yes, he really spoke all of those). He concluded that the similarities in grammar were too close to be the result of chance. The only reasonable explanation, he claimed, was the descent of these languages from some ancient progenitor.
This ancestor language is now awkwardly known as Proto-Indo-European (PIE). It and the people who spoke it are the subject of David Anthony’s book The Horse, the Wheel, and Language. I picked up the book hoping to learn a bit about really ancient history. I ended up learning some of that, but this is more a book about linguistics and archeology than about history.
Proto-Indo-European speakers produced no written works, so almost all of their specific history is lost. The oldest products of their daughter languages – like the Rig Veda – date from well after the last speakers of the original language passed away.
Instead of the history that is largely barred to us, this book is really Professor David Anthony attempting to figure out who these speakers were and what their lives looked like, without the benefit of any written words. He does this via two channels: their language, and the physical remains of their culture.
Unfortunately, there is at least one glaring problem with each approach. Their language is thoroughly dead and there was (at the time of writing) no scholarly consensus on where they originated.
Professor Anthony is undaunted by these problems. It turns out that we can reconstruct their language and from that reconstruction, determine where they most likely lived. If both approaches are done properly, it should be possible to see archeological details reflected in their language and details of their language reflected in their remains.
The first problem to solve then is the reconstruction of PIE. How does one do this?
Well it turns out that all languages change in similar ways. The way we pronounce consonants often shifts, with hard sounds sometimes changing into soft sounds, but very rarely the reverse. How we say words also changes. Assimilation occurs because we tend to omit difficult-to-pronounce or inconvenient middle syllables (this has led to the invention of contractions in English) and addition happens because we add syllables in the middle of difficult tongue movements (compare the “proper” and colloquial ways of pronouncing the word “nuclear” or the difference between the French athlète and the English athlete).
It would be very odd for an additional syllable to be added in an area where tongue movements aren’t particularly hard, or a syllable to be removed from a word that is typically enunciated. Above all, these changes are regular because they rely on predictable laziness.
Changes tend to happen to many words at once. When people began to hear the Proto-French tsentum (root of cent, the French word for 100) as different from the Latin kentum, they had to make a decision about how exactly it would be pronounced. They chose a soft-c, a sound Latin lacks, but that is easier to say. This change got carried over to every ts-, c-, or k-, that had previously made the same sound as kentum/tsentum, except those before a back vowel (like “o”), presumably because a soft sound there is actually harder to say.
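The regularity of a change like this means it can be modelled as a rewrite rule applied across the whole vocabulary at once. Here is a minimal sketch of that idea; the rule is a simplification and the word spellings are illustrative stand-ins, not real attested forms:

```python
import re

def palatalize(word: str) -> str:
    """Apply one regular sound change: k softens to s before the
    front vowels e and i, and stays hard everywhere else.
    (A real change is stated over sounds, not spellings; this is
    an illustrative simplification.)"""
    return re.sub(r"k(?=[ei])", "s", word)

# Illustrative pseudo-Latin forms, not attested words.
words = ["kentum", "kor", "kivitas"]
print([palatalize(w) for w in words])  # ['sentum', 'kor', 'sivitas']
```

The point is that the rule is exceptionless within its environment: every k before e or i changes, and none before back vowels do. It is this regularity that lets linguists run such changes backwards during reconstruction.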
There’s one final type of change that Anthony mentions: analogy. This is where a grammatical rule used in a single place (e.g. pluralization with -s or -es) is expanded to encompass many more words or cases (most English nouns were originally pluralized with other suffixes, or with stem changes like “geese”; it was only later that people decided -s and -es would be the general markers of plural nouns).
If you have a large sample of languages descended from a historical language (and with Proto-Indo-European, there really is no lack), you can follow a bunch of words backwards through likely changes and see if they all end up in the same place.
If you do this for the modern words for “hundred” from many PIE daughter languages, you’re left with *km’tom (an asterisk marks a reconstructed sound for which there is no direct evidence). All words for hundred in modern descendants (as well as dead ancient descendants that we know how to speak) of Proto-Indo-European can be derived from *km’tom using only well-attested and empirically observed rules of language change.
(I occasionally got chills reading reconstructed words. It’s amazing how some words that our distant ancestors spoke thousands upon thousands of years ago are fairly well preserved in our modern speech.)
This is pretty cool, because it allows us to start seeing which words were common enough in Proto-Indo-European to be passed down to all daughters and which words were borrowed in.
With a reconstructed vocabulary of about 1,500 words, we can figure out some things that were important to Proto-Indo-Europeans. They seem to have words for relatives on the male side, but not the female side. This suggests that after marriage, the wife moved in with the groom. Less domestically, they seemed to have a word for cattle rustling, suggesting that they weren’t unfamiliar with increasing their wealth at the expense of their neighbours’.
That’s not all we can get from their words. Linguists also believe that Proto-Indo-Europeans had chiefs, who in turn had patrons. They worshipped a male sky deity and sacrificed horses and cattle to him. They formed warrior bands. They avoided speaking the name of the bear. They drove, or knew of, wagons. And they had two words that we could translate as sacred, “that which is forbidden” and “that which is imbued with holiness”.
(There are many more minor cultural touchstones scattered throughout the book. I don’t want to spoil them all.)
We also know the animals and plants they had words for. Reconstructed PIE has words for temperate trees, horses and cows, bees and honey.
These give us clues to where they lived, in the same way that knowing the words “shinny”, “hockey”, “Zamboni” and “creek” are spoken somewhere might help you make a guess as to where that somewhere is.
And while these words help us rule out the Mediterranean and the deserts, they don’t give us much in the way of a specific location without a when, which requires two different methods.
First, we can figure out the approximate death of Proto-Indo-European, the approximate century or millennium when it was entirely splintered into its daughters, by using what linguists have discovered about the rate of language change.
While most vocabulary changes rather quickly, making this a poor tool for dating very old languages, there are a group of words, the core vocabulary, that change much more slowly. The core vocabulary of any language is only a couple hundred words, but they’re some of the most important ones. Normally, core vocabulary includes the words for: body parts, small numbers, close relatives, a few basic needs, a couple of natural features or domesticated animals, some pronouns, and some conjunctions.
English, a prolific borrower, has borrowed 50% of its total vocabulary from the romance languages. Its core vocabulary, however, is largely free of this borrowing, with only 4% of core vocabulary words borrowed from romance languages.
Core vocabulary changes by about 14-19% every thousand years depending on the language. It’s also known that once two dialects differ by more than 10% of their core vocabulary, they are more properly thought of as separate languages.
Here’s where written language comes in handy. By comparing written inscriptions with known creation dates in different daughter languages, we can make a guess as to when the languages diverged.
The oldest inscriptions in a PIE-derived language are in the Anatolian languages (which were spoken in what is now Turkey). However, Anthony chooses not to use these, because they entirely lack many grammatical innovations that are otherwise common in daughter languages. This leads him to believe that they split away much earlier than other daughters. The presence of later shared innovations means that at the time of the Anatolian split, Proto-Indo-European was probably still a living language and still evolving.
Better candidates are archaic Greek and Old-Indic, both of which have inscriptions dated to around 1,450 BCE. By comparing the differences in wording and grammar between these two and using known rates of change, Anthony dates the end of Proto-Indo-European at around 2,500 BCE. This means that after 2,500 BCE, it doesn’t make sense to speak of a single unified Proto-Indo-European language.
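Anthony’s actual comparison is philological, but the arithmetic behind this style of dating can be sketched with the classic glottochronology formula (not his exact procedure): if each daughter lineage retains a fixed fraction of its core vocabulary per millennium, then the fraction two daughters still share decays as retention raised to twice the elapsed time. The numbers below are hypothetical, chosen only to bracket the 14–19% change rate mentioned earlier:

```python
import math

def divergence_millennia(shared: float, retention: float) -> float:
    # Two lineages each keep `retention` of their core vocabulary per
    # millennium, so after t millennia they share ~retention**(2*t).
    # Solving retention**(2*t) = shared for t:
    return math.log(shared) / (2 * math.log(retention))

# Hypothetical: two daughters still sharing 70% of core vocabulary,
# with per-millennium retention at both ends of the 81-86% range.
for r in (0.81, 0.86):
    print(f"retention {r:.0%}: ~{divergence_millennia(0.70, r):.1f} millennia")
# prints ~0.8 millennia at 81% retention and ~1.2 millennia at 86%
```

Even this toy version shows why the retention rate matters so much: the same observed overlap can imply dates centuries apart depending on which end of the range you assume.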
Second is the birth date, the other half of the critical window. To find it, Anthony looks for words that have a known date of invention, specifically “wool” and “wagon”. Getting broadly useful amounts of wool from sheep wasn’t possible until a mutation made sheep coats much larger. We know roughly when this mutation occurred, because sheep suddenly became a larger portion of herds around 3,500 BCE, displacing goats (which produce more milk). The only reasonable explanation for this event is the advent of wool producing sheep, which were very valuable as a source of clothes.
Similarly, wagons have left physical evidence (both directly and in preserved images) and that evidence has been carbon-dated to 3,500 BCE.
Since all Proto-Indo-European languages outside of the Anatolian branch have related words for both “wagon” and “wool” that show no evidence of borrowing from other languages, it seems reasonable to conclude that some form of the language existed when wagons and wool first began to reshape the pre-historic world. That means the language had to exist by 3,500 BCE.
There is, I should note, one competing theory that Anthony outlines, in which PIE and Indo-Hittite languages split around 7,500 BCE. This theory requires several unlikely things to happen however; it requires the word for wagon to evolve from the same verb meaning “to turn” in both branches (five similar verbs existed), it requires the PIE speaking people to disperse over all of Europe and become the dominant culture then (this would have been very hard pre-horse domestication, when material cultures were small and language territories tended to be much smaller than modern countries), and all of this would have to happen while material cultures were becoming very different but languages (supposedly) weren’t evolving.
Anthony doesn’t give this theory much credence.
With a rough time-range, we can begin looking for our Proto-Indo-Europeans in space. Anthony does this by looking for evidence of very old loan words. He finds a set borrowed into PIE from Uralic, and Uralic in turn has a bevy of very old loanwords from PIE.
Uralic (appropriately) probably first emerged somewhere near the Ural Mountains. This corresponds well with our other evidence because the area around the Urals (where borrowing could have taken place) is temperate and home to the flora and fauna words we know exist in PIE.
The PIE word for honey, *médhu (note its similarity with the English word for a fermented honey drink, “mead”), is particularly useful here. We know that bees weren’t common in Siberia during the time when we suspect PIE was being spoken (and where they were common, the people weren’t herders), but that bees were common on the other side of the Urals.
Laying it all out, we see that PIE speakers were herders (there’s an expansive set of words relating to the tasks herders must accomplish), who lived near the Urals but not in Siberia. The best archeological match for these criteria is a set of herder people who lived in what is now modern-day Ukraine and it is these people that Anthony identifies as the Proto-Indo-Europeans.
If this feels at all dry, I want to assure you that it wasn’t when I read it. I felt that the first section of the book was the strongest. Anthony provides an excellent overview of linguistics, archeology, and some of the crazy stuff he’s had to invent to help him in his studies.
For example, he believes that horses were ridden much earlier than was commonly thought, perhaps around or before 3,500 BCE. To prove this, he and his wife embarked on a study of how bits wear teeth in horses’ mouths, which culminated in empirical studies with a variety of bit types (including rope) done on live horses that had never before worn bits, assessed using electron microscopy. The whole thing is a bit bonkers, but it has resulted in a validated test that allows archeologists to determine if a given horse was ever ridden, as well as vindication for Anthony’s chronology of domestication.
Unfortunately, a lot of the rest of the book was genuinely dry. There was a dizzying array of cultures inhabiting the Eurasian steppes in the period Anthony covers, each with their own house type, pottery type, antecedents, and descendants. Anthony goes through these in excruciating detail. It’s the sort of thing that other archeologists love him for – a lot of these cultures are very poorly described outside of Russian language publications – but it’s hard for a lay-person to follow. I may have pulled it off if I built a giant flow chart, but as it was, I mostly felt overwhelmed.
(Anthony has to go through them all to explain how PIE-derived languages ended up everywhere we find them. The people of Europe don’t speak PIE-derived languages just because of Latin. Many of the peoples the Romans conquered already spoke languages that were distantly related to the invaders’ tongue. Those languages need to be accounted for in any theory about the Proto-Indo-Europeans.)
This is disappointing, because the history started off so engagingly. Anthony outlines how the earliest ancestors of the Proto-Indo-Europeans had persistent cultural frontiers with hunter-gatherers near the Urals on one side and the farmers of the Bug-Dniester valley on the other.
The herding and farming economies required a moral shift from previous hunter-gatherer practices, one that would see agriculturalists harden their hearts to their own children starving, if the only thing that could assuage their hunger was their last few breeding pairs or their seed grain. This is the first time I saw someone lay out the moral transformation necessary to accept agriculture, and having it laid out so starkly made it much easier to understand why not every prehistoric group was willing to adopt it.
(I had always thought the biggest moral change was accepting accumulation of wealth, but this one is, I think, more important.)
This is not to say that the herders and farmers were exactly alike; their different ways of life meant they were culturally distinct. In addition to their dwellings and material culture, they differed in funeral customs and probably in religion. Everything we know about early PIE speakers suggests that they worshipped a sky god of some sort. The farmers who lived next door decorated their houses with female figurines, figures that never show up in any excavation of herder camps or grave sites.
I was also shocked at the amount of long-distance trade and wealth acquisition that was going on 6,000 years ago. There are kurgans (circular, rock-topped graves) with grave goods from Mesopotamia dating from that long ago, as well as one kurgan where someone was buried with almost 4 kilograms of gold ornamentation.
The herders and farmers didn’t live next door in harmony forever. Changes to their stable arrangement happened as a result of one of the Earth’s periodic climate fluctuations (which caused a collapse among many of the farmers and may have led to more raiding from the early PIE-speaking herders) and later the adoption of horse-riding (which made raiding easier) and wagons (which allowed herders to bring water with them and opened the inner steppes up to grazing).
Larger herds and changing boundaries led to clashes among the herders (we’ve found kurgans where the bodies bear marks of violent deaths) and to raids on agriculturalists (we’ve found burned villages peppered with arrows), although interestingly, never the farmers directly adjacent to the steppes. It may be that the herders didn’t want to disrupt their trading relationships with their neighbours and so were careful to raid dozens of kilometers away from their own borders (a task made easier with horses).
The farmers were no pushovers; some of their towns held up to 10,000 people by the third millennium BCE. These towns were bigger than the cities of Mesopotamia, but lacked the civic organizational features of the true cities of the Fertile Crescent.
And it was at about this point in the narrative where the number of cultures proliferated beyond my ability to follow and I began writing down interesting facts rather than keeping track of the grand narrative.
Here are a few that I liked the most:
About 20% of corpses in warrior graves (those with weapons and other symbols of membership in warrior society) whose gender is known are female. This matches the percentage in much later steppe graves. As Kameron Hurley said, women have always fought.
Contrary to popular stereotypes, the cultures of the Eurasian steppes weren’t reliant on cities for manufactured goods. They had their own potters and metalsmiths and established many mining camps. In fact, by the 2000s BCE, it seems that Mesopotamian cities were dependent on metal mined on the steppes.
In the early Bronze Age, tin was worth its weight in silver. When tin wasn’t available, bronze was made with arsenic.
Horses were probably domesticated because they winter better than the other animals that were available in Eurasia at the time. Cows will starve to death if grass is hidden by snow, while sheep and goats use their noses to move snow off of grass (which means that they’re helpless once it’s covered in ice). Sheep, cows, and goats are all unable to drink water that is covered in ice. Horses break ice and move snow with their hooves, making winter no real inconvenience to them. Mixing horses with cows can allow cows to eat the grass that horses uncover.
Disaffected farmers may have been attracted to the herding economy because wealth was much easier to build up. Farmland is hard to acquire more of without angering your neighbours, but herds given good pasture will naturally grow exponentially. A lot of the spread of the herding economy into Europe probably used some sort of franchise system, where locals joined the PIE culture and were given some animals, in exchange for providing protection and labour to their patron.
I’ve struggled through a lot of books that are clearly meant for people more knowledgeable in the subject than I am. It might just be a function of how interested I am in archeology (that is to say: only tolerably interested) that this is the first of them that I wish had an abridged edition. If you aren’t deeply interested in archaeology or pre-history, there’s a lot of this book that you’ll probably end up skimming.
The rest of it makes up for that. But I think there would be market for Anthony to write another leaner volume, meant for a more general audience.
If he ever does, I’ll probably give it a read.
David Anthony is very sensitive to the political ends to which some scholars of the Proto-Indo-Europeans have turned their work. He acknowledges that white supremacists appropriated the self-designation of “Aryan” used by some later speakers of PIE-derived languages and used it to refer to some sort of ancient master race. Professor Anthony does not buy into this one bit. He points out that Aryan was always a cultural term, not a racial one (showing the historical ignorance of the racists) and he is careful to avoid assigning any special moral or mythical virtue to the Proto-Indo-Europeans whose culture he studies.
White supremacists will find nothing to like about this book, unless they engage in a deliberate misreading. ^
This is why the French côte is still similar to the Latin costa. ^
Anthony identifies improvements in carbon dating, especially improvements in how we calibrate for diets high in fish (which contain older carbon, leading to incorrect ages), as a major factor in his ability to untangle the story of the Proto-Indo-Europeans. ^
Uralic is the language family that in modern times includes Finnish and some languages spoken in Russia. ^
While looking up the word *médhu, I found out that it is also likely the root of the Old Chinese word for honey, via an extinct Proto-Indo-European language, Tocharian. The speakers of Tocharian migrated from the Proto-Indo-European homeland to Xinjiang, in what is now China, which is likely where the borrowing took place. ^
[Warning: Contains spoilers for The Sunset Mantle, Vorkosigan Saga (Memory and subsequent), Dune, and Chronicles of the Kencyrath]
For the uninitiated, Sanderson’s Law (technically, Sanderson’s First Law of Magic) is:
An author’s ability to solve conflict with magic is DIRECTLY PROPORTIONAL to how well the reader understands said magic.
Brandon Sanderson wrote this law to help new writers come up with satisfying magical systems. But I think it’s applicable beyond magic. A recent experience has taught me that it’s especially applicable to fantasy cultures.
Sunset Mantle is what’s called secondary world fantasy; it takes place in a world that doesn’t share a common history or culture (or even necessarily biosphere) with our own. Game of Thrones is secondary world fantasy, while Harry Potter is primary world fantasy (because it takes place in a different version of our world, which we chauvinistically call the “primary” one).
Secondary world fantasy gives writers a lot more freedom to play around with cultures and create interesting set-pieces when cultures collide. If you want to write a book where the Roman Empire fights a total war against the Chinese Empire, you’re going to have to put in a master’s thesis worth of work to explain how that came about (if you don’t want to be eviscerated by pedants on the internet). In a secondary world, you can very easily have a thinly veiled stand-in for Rome right next to a thinly veiled analogue of China. Give readers some familiar sounding names and culture touchstones and they’ll figure out what’s going on right away, without you having to put in effort to make it plausible in our world.
When you don’t use subtle cues, like names or cultural touchstones (for example: imperial exams and eunuchs for China, gladiatorial fights and the cursus honorum for Rome), you risk leaving your readers adrift.
Many of the key plot points in Sunset Mantle hinge on obscure rules in an invented culture/religion that doesn’t bear much resemblance to any that I’m familiar with. It has strong guest rights, like many steppes cultures; it has strong charity obligations and monotheistic strictures, like several historical strands of Christianity; it has a strong caste system and rules of ritual purity, like Hinduism; and it has a strong warrior ethos, complete with battle rage and rules for dealing with it, similar to common depictions of Norse cultures.
These actually fit together surprisingly well! Reiss pulled off an entertaining book. But I think many of the plot points fell flat because they were almost impossible to anticipate. The lack of any sort of consistent real-world analogue to the invented culture meant that I never really had an intuition of what it would demand in a given situation. This meant that all of the problems in the story that were solved via obscure points of culture weren’t at all satisfying to me. There was build up, but then no excitement during the resolution. This was common enough that several chunks of the story didn’t really work for me.
Here’s one example:
“But what,” asked Lemist, “is a congregation? The Ayarith school teaches that it is ten men, and the ancient school of Baern says seven. But among the Irimin school there is a tradition that even three men, if they are drawn in together into the same act, by the same person, that is a congregation, and a man who has led three men into the same wicked act shall be put to death by the axe, and also his family shall bear the sin.”
All the crowd in the church was silent. Perhaps there were some who did not know against whom this study of law was aimed, but they knew better than to ask questions, when they saw the frozen faces of those who heard what was being said.
(Reiss, Alter S.. Sunset Mantle (pp. 92-93). Tom Doherty Associates. Kindle Edition.)
This means protagonist Cete’s enemy erred greatly by sending three men to kill him and had better cut it out if he doesn’t want to be executed. It’s a cool resolution to a plot point – or would be if it hadn’t taken me utterly by surprise. As it is, it felt kind of like a cheap trick to get the author out of a hole he’d written himself into, like the dreaded deus ex machina – god from the machine – that ancient playwrights used to resolve conflicts they otherwise couldn’t.
(This is the point where I note that it is much harder to write than it is to criticize. This blog post is about something I noticed, not necessarily something I could do better.)
I’ve read other books that do a much better job of using sudden points of culture to resolve conflict in a satisfying manner. Lois McMaster Bujold (I will always be recommending her books) strikes me as particularly apt. When it comes time for a key character of hers to make a lateral career move into a job we’ve never heard of before, it feels satisfying because the job is directly in line with legal principles for the society that she laid out six books earlier.
The job is that of Imperial Auditor – a high powered investigator who reports directly to the emperor and has sweeping powers – and it’s introduced when protagonist Miles loses his combat career in Memory. The principles I think it is based on are articulated in the novella Mountains of Mourning: “the spirit was to be preferred over the letter, truth over technicalities. Precedent was held subordinate to the judgment of the man on the spot”.
Imperial Auditors are given broad discretion to resolve problems as they see fit. The main rule is: make sure the emperor would approve. We later see Miles using the awesome authority of this office to make sure a widow gets the pension she deserves. The letter of the law wasn’t on her side, but the spirit was, and Miles, as the Auditor on the spot, was empowered to make the spirit speak louder than the letter.
Wandering around my bookshelves, I was able to grab a couple more examples of satisfying resolutions to conflicts that hinged on guessable cultural traits:
In Dune, Fremen settle challenges to leadership via combat. Paul Muad’Dib spends several years as their de facto leader, while another man, Stilgar, holds the actual title. This situation is considered culturally untenable and Paul is expected to fight Stilgar so that he can lead properly. Paul is able to avoid this unwanted fight to the death (he likes Stilgar) by appealing to the only thing Fremen value more than their leadership traditions: their well-established pragmatism. He says that killing Stilgar before the final battle would be little better than cutting off his own arm right before it. If Frank Herbert hadn’t mentioned the extreme pragmatism of the Fremen (to the point that they render down their dead for water) several times, this might have felt like a cop-out.
In The Chronicles of the Kencyrath, it looks like convoluted politics will force protagonist Jame out of the military academy of Tentir. But it’s mentioned several times that the NCOs who run the place have their own streak of honour that allows them to subvert their traditionally required oaths to their lords. When Jame redeems a stain on Tentir’s collective honour, this oath to the college gives them an opening to keep her there while keeping their oaths to their lords. If PC Hodgell hadn’t spent so long building up the internal culture of Tentir, this might have felt forced.
It’s hard to figure out where good foreshadowing ends and good cultural creation begins, but I do think there is one simple thing an author can do to make culture a satisfying source of plot resolution: make a culture simple enough to stereotype, at least at first.
If the other inhabitants of a fantasy world are telling off-colour jokes about this culture, what do they say? A good example of this done explicitly comes from Mass Effect: “Q: How do you tell when a Turian is out of ammo? A: He switches to the stick up his ass as a backup weapon.”
(Even if you’ve never played Mass Effect, you now know something about Turians.)
At the same time as I started writing this, I started re-reading PC Hodgell’s The Chronicles of the Kencyrath, which provided a handy example of someone doing everything right. The first three things we learn about the eponymous Kencyr are:
They heal very quickly
They dislike their God
Their honour code is strict enough that lying is a deadly crime and calling someone a liar a deadly insult
There are eight more books in which we learn all about the subtleties of their culture and religion. But within the first thirty pages, we have enough information that we can start making predictions about how they’ll react to things and what’s culturally important.
When Marc – a solidly dependable Kencyr who is working as a guard and is bound by Kencyr cultural laws to loyally serve his employer – lets the rather more eccentric Jame escape from a crime scene, we instantly know that his choosing her over his word is a big deal. And indeed, while he helps her escape, he also immediately tries to kill himself. Jame is only able to talk him out of it by explaining that she hadn’t broken any laws there. It was already established that in the city of Tai-Tastigon, only those who physically touch stolen property are in legal jeopardy. Jame never touched the stolen goods; she was just on the scene. Marc didn’t actually break his oath and so decides to keep living.
God Stalk is not a long book, so the fact that PC Hodgell was able to set all of this up and have it feel both exciting in the moment and satisfying in the resolution is quite remarkable. It’s a testament to what effective cultural distillation, plus a few choice tidbits of extra information, can do for a plot.
If you don’t come up with a similar distillation and convey it to your readers quickly, there will be a period where you can’t use culture as a satisfying source of plot resolution. It’s probably no coincidence that I noticed this in Sunset Mantle, which is a long(-ish) novella. Unlike Hodgell, Reiss isn’t able to develop a culture in such a limited space, perhaps because his culture has fewer obvious touchstones.
Sanderson’s Second Law of Magic can be your friend here too. As he stated it, the law is:
The limitations of a magic system are more interesting than its capabilities. What the magic can’t do is more interesting than what it can.
Similarly, the taboos and strictures of a culture are much more interesting than what it permits. Had Reiss built up a quick sketch of complicated rules around commanding and preaching (with maybe a reference that there could be surprisingly little theological difference between military command and being behind a pulpit), the rule about leading a congregation astray would have fit neatly into place with what else we knew of the culture.
Having tight constraints imposed by culture doesn’t just allow for plot resolution. It also allows for plot generation. In The Warrior’s Apprentice, Miles gets caught up in a seemingly unwinnable conflict because he gave his word; several hundred pages earlier Bujold establishes that breaking a word is, to a Barrayaran, roughly equivalent to sundering your soul.
It is perhaps no accident that the only thing we learn initially about the Kencyr that isn’t a descriptive fact (like their healing and their fraught theological state) is that honour binds them and can break them. This constraint, that all Kencyr characters must be honourable, does a lot of work driving the plot.
This then would be my advice: when you wish to invent a fantasy culture, start simple, with a few stereotypes that everyone else in the world can be expected to know. Make sure at least one of them is an interesting constraint on behaviour. Then add in depth that people can get to know gradually. When you’re using the culture as a plot device, make sure to stick to the simple stereotypes or whatever other information you’ve directly given your reader. If you do this, you’ll develop rich cultures that drive interesting conflicts and you’ll be able to use cultural rules to consistently resolve conflict in a way that will feel satisfying to your readers.
There are many problems that face modern, developed economies. Unfortunately, no one agrees on what to do in response to them. Even economists are split, with libertarians championing deregulation, while liberals call for increased government spending to reduce inequality.
Or at least, that’s the conventional wisdom. The Captured Economy, by Dr. Brink Lindsey (libertarian) and Dr. Steven M. Teles (liberal) doesn’t have much time for conventional wisdom.
It’s a book about the perils of regulation, sure. But it’s a book that criticizes regulation that redistributes money upwards. This isn’t the sort of regulation that big pharma or big finance wants to cut. It’s the regulation they pay politicians to enact.
And if you believe Lindsey and Teles, upwardly redistributing regulation is strangling our economy and feeding inequality.
They’re talking, of course, about rent-seeking.
Now, if you don’t read economic literature, you probably have an idea of what “rent-seeking” might mean. This idea is probably wrong. We aren’t talking here about the sorts of rents that you pay to landlords. That rent probably includes some economic rents (quite a lot of economic rents if you live in Toronto, Vancouver, San Francisco, or New York), but does not itself represent an economic rent.
An economic rent is any excess payment due to scarcity. If you control especially good land and can grow wheat at half the price of everyone else, the rent of this land is the difference between how much it costs you to grow wheat and how much it costs everyone else to grow wheat.
Rent-seeking is when someone tries to acquire these rents without producing anything of value. It isn’t rent-seeking when you invent a new mechanical device that cuts your costs in half (although your additional profit will represent economic rents). It is rent-seeking when you use some of those profits as “campaign contributions” to get the government to pass a law that requires all future labour-saving devices need to be “tested” for five years before they can be introduced. Over that five-year period, you’ll reap rents because no one else can compete with you to bring the price of the goods you are producing down.
How could we know if rent-seeking is happening in the US economy (note: this book is written specifically about the US, so assume all statements here are about the US unless otherwise noted) and how can we tell what it’s costing?
Well, one of the best signs of rent-seeking is increased profits. If profits are increasing and this can’t be explained by innovation or productivity growth or any other natural factor, then we have circumstantial evidence that profits are increasing from rent-seeking. Is this the case?
Lindsey and Teles say yes.
First, it seems like profits for US firms are increasing, from a low of 3% in the 1980s to a high of 11% currently. These are average profits, so they can’t be swayed by one company suddenly becoming much more efficient – as something like that should be cancelled out by a decline in profits at somewhere less efficient.
At the same time, however, the majority of these new profits have been going to companies that were already very profitable. If being very profitable makes corrupting the political process easier, this is exactly what we’d expect to see.
In addition, formation of new companies has slowed, concentration has increased, the ratio of intangible assets to tangible assets has increased, and yet spending on intangible assets (like R&D) has dropped. The only intangibles you get without investing in R&D are better human capital (but then why should profits increase if this is happening everywhere?) and tailor-made regulation.
Lindsey and Teles go on to cite research by Dr. James Bessen showing that most of the increase in profits since the start of the 21st century is heavily correlated with increasing regulation, a result that remained robust even when accounting for reverse causation (i.e. the possibility that profits cause regulation rather than the other way around).
This circumstantial evidence is about all we can get for something as messy as real-world economics, but it’s both highly suggestive and fits in well with what keen observers have noted in individual industries, like the pharmaceutical industry.
An increase in rent-seeking would explain a whole bunch of the malaise of the current economy.
Economists have been surprised by the slow productivity growth since the last recession. If there was significantly more rent-seeking now than in the past, then we would expect productivity growth to slow.
In a properly functioning economy, productivity growth is largely buoyed up by new entrants to a field. The most productive new entrants thrive, while less productive new entrants (and some of the least productive existing players) fail. Over time, this gradually improves the overall productivity of an industry. This is the creative destruction you might hear economists talking glowingly about.
Productivity can also be raised by the slow diffusion of innovations across an industry. When best practices are copied, everyone ends up producing more with fewer inputs.
Rent-seeking changes the nature of this competition. Instead of competing on productivity and innovation, companies compete to see who can most effectively buy the government. Everyone who fails to buy off the government will eventually fail, leaving an increasingly moribund economy behind.
Lindsey and Teles believe that we’re more likely to see the negative effects of rent-seeking today than in the past because the underlying economy has less favourable conditions. In the 1950s, women started to enter the workforce. In the 60s, Boomers began to enter it. In addition, many returning soldiers got university educations after World War II, making college graduates much more common. Those one-time boosts to the labour force are now exhausted, which leaves productivity growth – exactly the thing rent-seeking suppresses – as the main remaining source of economic growth.
Therefore, rent-seeking, as a force holding down productivity growth, would be a serious problem in political economy even if it didn’t lead to increased inequality and all of the problems that can cause.
But that’s where the other half of this book comes in; the authors suggest that our current spate of rent-seeking policies is fueling income inequality as well as economic malaise. Rent-seeking inflates stock prices (which only helps people who are well-off enough that they own stocks) or wages at the top of corporations. Rents from rent-seeking also tend to accrue to skilled workers, to people who own homes, and to people in regulated professions. All of these people are wealthier than average and increasing their wealth increases inequality.
That’s the theory. To show it in practice, Lindsey and Teles introduce four case studies: finance, intellectual property, zoning, and occupational licensing.
Whenever I think about finance, I am presented with a curious double image. There are the old-timey banks of yore, that I see in movies, the ones that provided smiling service to their local customers. And then there are the large financial entities that exist today, with their predatory sales tactics and “too big to fail” designations. Long gone are the days when banks mostly made money by collecting interest on loans, loans made possible by paying interest on deposits.
Today’s banks also have an excellent racket going on. They decry taxes and regulation on one hand, while extracting huge rents from governments on the other.
To understand why, we first need to talk about leverage. Bank profits can be increased many times over via the magic of leverage – basically borrowing money to buy assets. If you believe, for example, that the price of silver is going to skyrocket tomorrow, you could buy $100 of silver. If silver goes up by 20%, you’ll pocket a cool $20 for 20% profit. If you borrow an extra $900 from friends and family at 1% interest and buy silver with that too, you’ll pocket a cool $191 once it goes up (20% of $1000 less 1% of $900), for 191% profit.
Leverage becomes a problem when prices fall. If the price goes down by 10% instead of going up, you’ll be left with $90 if you didn’t leverage yourself – and nothing at all if you did (in fact, once you repay the $9 of interest, you’ll be $9 in the hole). Because it leads to the potential of outsized losses, leverage presents problems with downside risks, the things that happen when your bet is wrong.
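The arithmetic above can be sketched in a few lines of Python (a toy model of the silver example, assuming the 1% interest on the borrowed $900 is owed whether or not the bet pays off, and ignoring real-world complications like margin calls and fees):

```python
def leveraged_equity(own, borrowed, rate, price_change):
    """Equity left after a price move on (own + borrowed) dollars of assets,
    once the borrowed principal and its interest are repaid."""
    assets = (own + borrowed) * (1 + price_change)
    return assets - borrowed * (1 + rate)

# Unleveraged, silver up 20%: $100 becomes $120
print(round(leveraged_equity(100, 0, 0.01, 0.20), 2))    # 120.0
# Borrow $900 at 1%, silver up 20%: the $191 profit from the example
print(round(leveraged_equity(100, 900, 0.01, 0.20), 2))  # 291.0
# Same borrowing, silver down 10%: equity is wiped out and then some
print(round(leveraged_equity(100, 900, 0.01, -0.10), 2)) # -9.0
```

The asymmetry is the whole story: a 10:1 leveraged bank multiplies its upside nearly tenfold, but a mere 10% drop in asset prices leaves it insolvent – unless someone else is holding the downside risk.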
One of the major ways banks extract rents is by forcing the government to hold onto their downside risks. In America, this is accomplished several ways. First, deposits are insured by the government. This is good, in that it prevents bank runs, which were a significant problem in the 19th and 20th centuries, but bad because it removes most of the incentive for consumers to care about the lending practices of their bank. Insurance removes the risk associated with picking a bank with risky lending practices, so most people don’t bother to check whether their bank is responsible or not. Banks know this, so they feel no pressure to be responsible, especially because shareholders love the profits irresponsibility brings in good times.
Second, the government (especially in America, but also recently in Ireland) seems unable to resist insulating bondholders from the consequences of backing a bank with bad standards. The bailouts after the financial crisis mean that few bondholders were punished for their failure to do due diligence when providing the credit banks used to make leveraged bets. As long as no one is punished for lending to the banks that make risky bets, things won’t get better.
(Interestingly, there is theoretical work that shows banks can accomplish everything they currently do with debt using equity at the same cost. This isn’t what we see in real life. Lindsey and Teles suggest this is because debt is kept artificially cheap for banks by repeated bailouts. Creditors don’t demand extra to lend to an indebted bank, because they know they won’t have to pay if things go south.)
Third, there’s mortgage debt, which is often insured or bought by the Federal Government in America. This makes risky lending much more palatable for many banks (and much more profitable as well). This whole process is really opaque and largely hidden from the US population. When times are good, it’s a relatively cheap way to make housing more affordable (although somewhat regressive; it favours the already wealthy). When times are bad it can cost the government almost $200 billion.
The authors suggest that this sort of “public program by kluge” is the perfect vehicle for rent-seeking. The need to do the program in a klugey way so that taxpayers don’t complain is anathema to accountability and often requires the support of businesses – which are happy to help as long as they get to skim off the top. Lindsey and Teles suggest that it would be much better for the US just to provide straight up housing subsidies in a means-tested way.
Being able to extract all these rents has probably increased the size of the US financial sector. Lindsey and Teles argue that this is a very bad thing. They cite data that show decreased economic growth once the financial sector grows beyond a certain size, possibly because an outsized financial sector leads to misallocation of resources.
Beyond a certain point, the financial sector is just moving money around to no productive aim (this is different than e.g. loans to businesses; I’m talking about highly speculative bets on foreign currencies or credit default swaps here). The financial sector also aggressively recruits very bright people using very high salaries. If the financial sector were smaller and couldn’t compensate as highly, then these people would be out doing something productive, like building self-driving cars or curing malaria. Lindsey and Teles suggest that we should happily make a trade-off whereby these people can’t get quite as high salaries but do actually produce things of value.
(Remember: one of the pair here is a libertarian! Like “worked for Cato Institute for years” libertarian. If your caricature of libertarians is that “they hate poor people”, I suggest you consider the alternative: “they think the free market is the best way to help disadvantaged people find better circumstances”. Here, Lindsey is trying to correct market failures and misallocations caused by big banks getting too cozy with the government.)
Intellectual Property Law
If you don’t follow the Open Source or Creative Commons movements, you probably had mostly positive things to say about copyright until a few years ago when the protests against SOPA and PIPA – two bills designed to strengthen copyright enforcement – painted the internet black in opposition.
SOPA and PIPA weren’t some new overreach. They are the natural outgrowth of a US copyright regime that has changed radically since its inception. In the early days of the American Republic, copyright required registration. Registering gave you a fourteen-year term of exclusivity, with the option to extend it once for another fourteen years. Today all works, even unpublished ones, are automatically granted copyright for the life of the author… plus 70 years.
Penalties have increased as well; previously, copyright infringement was only a civil matter. Now it carries criminal penalties of up to $250,000 in fines and 1-5 years of jail time per infringement.
Patent protections have also become onerous, although here the fault is judicial action, not statute. Appeals for patent cases are solely handled by the United States Court of Appeals for the Federal Circuit. This court is made up of judges who are normally former patent lawyers and who attend all the same conferences as patent lawyers – and eat the food paid for by the sponsors. I don’t want to claim judicial corruption, but it is perhaps unsurprising that these judges have come to see the goals of patent holders as right and noble.
Certainly, they’ve broken with past tradition and greatly expanded the scope of patentability while reducing the requirements for new patents. Genes, business methods, and most odiously, software, have been made patentable. Consequently, patents filed have increased from approximately 60,000 yearly in 1983 to 300,000 per year by 2013. If this represented a genuine increase in invention, then it would be a cause for celebration. But we already know that R&D spending isn’t increasing. It would be very surprising – and the exact opposite of what diminishing returns would normally suggest – if companies managed to come up with an additional 240,000 patents per year with no additional real spending.
What if these patents just came from increased incentives for rent-seeking via the intellectual property system?
“Intellectual property” conjures a happy image. Who doesn’t like property? Many (most?) people support paying authors, artists, and inventors for their creations, at least in the abstract. Lindsey and Teles argue that we should instead take a dim view of intellectual property; to them, it’s almost entirely rent-seeking.
They point out that many of the supposed benefits of intellectual property never manifest. It’s unclear if it spurs invention (evidence from World Fairs suggests that it just moves invention towards whatever types of inventions are patentable, where the payoff is more certain). It’s unclear if it incentivizes artists and writers (although we’ve seen music revenue fall, more people than ever are producing music). My personal opinion is that copyright doesn’t encourage writers; most of us couldn’t stop if we wanted to.
When it comes to software patents, the benefits are even less clear and the harms even greater. The OECD finds that software patents are associated with a decrease in R&D spending, while Vox reports that costs associated with software patent lawsuits have now reached almost $70 billion annually. The majority of software patent litigation isn’t even launched by the inventors. Instead, it’s done by so-called “patent trolls”, who buy portfolios of patents and then threaten to sue any company that doesn’t settle with them over “infringement”.
When even a successfully-defended lawsuit can cost millions of dollars (not to mention several ulcers), software patents (often for obvious ideas and assuredly improper) held by trolls represent a grave threat to innovation.
All of this adds up to a serious drag on the economy, not to mention our culture. While “protecting property” is seen as a noble goal by many, Lindsey and Teles argue that IP protections go well beyond that. They acknowledge that it makes sense to protect a published work in its entirety. But protecting the setting? The characters? The right to make sequels? That’s surely too much. How is George Lucas hurt if someone can sell their Star Wars fanfiction? How is that “infringing” on what he has created?
They have less sympathy for patents, which grant a somewhat ridiculous monopoly. If you patent something three days before I independently invent it, then any use or sale by me is still considered infringement even though I am assuredly not ripping you off.
Lindsey and Teles suggest that IP laws need to be rolled back to a more reasonable state, when copyright lasted 14 years and abstract ideas, software implementations, and business methods couldn’t be patented. About the only patents they really approve of are pharmaceutical patents, which they view as necessary to protect the large upfront costs of drug development (see also Scott Alexander’s argument for why this is the case); I’d like to add that these upfront costs would be lower if pharmaceutical companies’ own rent-seeking hadn’t helped create regulation that makes the FDA an almost impenetrable tar-pit.
Occupational Licensing
Occupational licensing has definitely become more common. It’s gone from affecting 10% of the workforce (1970) to 30% of the workforce today. It no longer just affects doctors, teachers, lawyers, and engineers. Now it covers make-up artists, auctioneers, athletic trainers, and barbers.
Now, there are sometimes good reasons to license professionals. No one wants to drive across a bridge built by someone who hasn’t learned anything about physics. But there’s good reason to suspect that much of the growth of occupational licensing isn’t about consumer protection, despite what proponents say.
First of all, there’s often quite a bit of variability in how many days of study these newly licensed professions require. Engineering requirements tend to be similar from country to country because they’re governed by international treaty. On the other hand, manicurist requirements vary wildly by state; Alaska requires three days of education, while Alabama requires 163. There are no national standards at all. If this were about consumer protection, then presumably some states are well below what’s required and others are well above it.
Second, there’s no allowance for equivalencies. Engineers can take their engineering degrees anywhere and can transfer professional status with limited hassles. Lawyers can take the bar exam wherever they want. But if you get licensed as a manicurist in Alabama, Alaska won’t respect the license. And vice versa.
(Non-transferability is a serious economic threat in its own right, because it makes people less likely to move in search of better conditions. The section on zoning further explains why this is bad.)
Several studies have shown that occupational licenses do nothing to improve services to customers. Randomly sampled floral arrangements from licensed and unlicensed states (yes, some states won’t let you arrange flowers without a license) are judged the same by judges who don’t know which is which. Roofing quality hasn’t fallen after hurricanes, when licensing restrictions are lifted (and if there’s ever a time you’d expect quality to fall, it’s then!).
Despite the lack of benefits, there are very real costs to occupational licensing. Occupational licensing is associated with consumers paying prices between 5% and 33% above unlicensed areas, which translates to an average 18% increase in wages for licensed professionals. The total yearly cost to consumers for this price gouging? North of $200 billion. Unfortunately, employment growth is also affected. Licensed professions see 20% slower employment growth compared to neighbouring unlicensed jurisdictions. Licensing helps some people make more money, but they make this money by, in essence, pulling up the ladder to prosperity behind them.
Occupational licensing especially hurts minorities in the United States. Many occupational licenses require a college degree (black and Latino Americans are less likely to have college degrees) and they often exclude anyone with a criminal record of any sort (disproportionately likely to be black or Latino). It may make sense to exclude people with criminal records from certain jobs. But from manicuring? I don’t see how someone could do worse damage manicuring than they could preparing fast food, and that isn’t regulated at all.
Licensing boards often protect their members against complaints from the public. Since the board is composed only of members of the profession, it’s common for them to close ranks around anyone accused of bad conduct. The only profession I’ve seen that doesn’t do this is engineers. Compare the responses of professional boards to medical and engineering malpractice in Canada.
Probably the most interesting case of rent-seeking Lindsey and Teles identify is lawyers in the United States. While they accuse lawyers of engaging in the traditional rent-seeking behaviour of limiting entry to their field (and point out that bar exam difficulty is proportional to the number of people seeking admittance, which suggests that its main purpose is to keep supply from rising), they also claim that lawyers in the United States artificially raise demand for their services.
Did you know that lawyers made up 41% of the 113th Congress, despite representing only 0.6% of the US population? I knew the US had a lot of lawyers in politics, but I hadn’t realized it was that high. Lindsey and Teles charge these lawyers with writing the kind of laws that make sense to lawyers: abstruse, full of minutiae, and fond of adversarial proceedings. Even if this isn’t a sinister plot, it certainly is a nice perk.
I do wish this chapter better separated what I think are two distinct messages on occupational licensing. One strand of argument goes: “occupational licensing for jobs like barbers, manicurists, etc. is keeping disadvantaged people, especially minorities, out of fields with slightly better than average wages and making everyone pay a tiny bit more”. The other is: “professionals are robbing everyone else blind because of occupational licensing; lawyers and doctors make a huge premium in the United States, are disproportionately wealthy compared to their counterparts in other countries, and make up a large chunk of the 1%”.
I’d like them separated because they seem to call for separate solutions. We might decide that if we could fix the equality issues (for example, by scrapping criminal record checks and college degree requirements where they aren’t needed), it might make sense to keep occupational licensing to prevent a race to the bottom among occupations that have never represented a significant fraction of individual spending. One thing I noticed is that the decline in union membership is exactly mirrored by the increase in occupational licensing. In a very real way, occupational licensing, with some tweaks, could become the new unions.
On the other hand, we have doctors and lawyers (and maybe even engineers, although my understanding is that they do far less to restrict supply, especially foreign supply) who are making huge salaries that (in the case of lawyers) might be up to 50% rents from artificially low supply. If we undid some of the artificial barriers to entry they’ve thrown up, we could lower their wages and improve income equality while at the same time improving competition and opening up these fields (which should still pay reasonably well) to more people. Many of us probably know people who’d make perfectly fine doctors that have been kept out of medical school by the overly restrictive quotas. Where’s the harm in having two doctors making $90,000/year instead of one doctor making $180,000/year? It’s not like we couldn’t find a use for twice as many doctors!
Zoning
The weirdest thing about the recent rise in housing prices is that building houses hasn’t really gotten any more expensive. Between 1950 and 1970, housing prices increased 35% above inflation (when normalized to size) and construction costs increased 28% above inflation. Between 1970 and 2000, construction prices rose 6% slower than inflation – becoming cheaper in real terms – and overall housing costs increased 72% above inflation.
Maybe house prices have gone up because house quality has improved? Not so, say data from repeat house sales. When analyzing these data, economists have determined that increased house quality can account for at most 25% of the increase in prices.
Maybe land is just genuinely running out in major cities? Well, if that were the case, we’d see a strong relationship between density and price. After all, density would surely emerge if land were running out, right? When analyzing these data, economists have found no relationship between city density and average home price.
The final clue comes from comparing the value of land houses can be built on with the value of land houses cannot be built on. When you look at how much the size of a lot affects the sale price of very similar homes and compare that with the cost of the land that goes under a house (by subtracting construction costs from the sale prices of new homes), you’ll find that the land under a house is worth ten times the land that simply extends a yard.
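As a quick sanity check of the decomposition described above, here is a sketch with made-up numbers (every figure below is a hypothetical chosen for illustration, not data from the book):

```python
# Decomposing land value two ways (all numbers are illustrative).

# Value of land *under* a house: subtract construction costs from the
# sale price of a new home.
sale_price_new_home = 800_000   # hypothetical sale price
construction_cost = 300_000     # hypothetical cost to build the structure
land_under_house = sale_price_new_home - construction_cost

# Value of land that merely *extends a yard*: the price gap between two
# otherwise-similar homes whose lots differ in size.
extra_yard_value = 50_000       # hypothetical premium for the larger lot

# The book's claim is that buildable land is worth roughly ten times
# yard land; with these made-up numbers the ratio comes out to exactly 10.
ratio = land_under_house / extra_yard_value
print(ratio)  # 10.0
```

The gap between the two estimates is the point: if the same square footage is worth ten times more when you're allowed to build on it, the premium is in the permission, not the dirt.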
This suggests that a major component of rising house prices is the cost of getting permission to build a house on land – basically, finding some of the limited supply of land zoned for actually building anything. This is not land value per se, but instead a rent imposed by onerous zoning requirements. In San Francisco, San Jose, and Manhattan, this zoning cost is responsible for approximately half of house worth.
The purpose of zoning has always been to protect the value of existing homes, by keeping “undesirable” land usage out of a neighbourhood. Traditionally, “undesirable” has been both racist and classist. No one in a well-off neighbourhood wanted any of “those people” to move there, lest prospective future buyers (who shared their racial and social prejudices) not want to move to the neighbourhood. Today, zoning is less explicitly racist (even if it still prices minorities out of many neighbourhoods) and more nakedly about preserving house value by preventing any increase in density. After all, if you live in a desirable neighbourhood, the last thing you want is a large tower bringing in hundreds of new residents at affordable prices. How will you be able to get a premium on your house then? The market will be saturated!
Now if there were no real benefits to living in a city, Lindsey and Teles probably wouldn’t care about zoning. But there definitely are very good reasons why we want more people to be able to live in cities. First: transportation. Transportation is easier when people are densely packed, which makes supplies cheaper and reduces negative externalities from carbon intensive travel. Second: choice. Cities have enough people to allow people to make profits off of weird things, to allow people to carefully choose their jobs, and to allow employers choice in employees. All of these are helpful to the economy. Third: ineffable increases in human capital. There’s just something about cities (theorized to be “information spillover” between people in unrelated jobs) that make them much more productive per capita than anywhere else.
This productivity is rewarded in the form of higher wages. Lindsey and Teles claim that the average income of a high school graduate in Boston is 40% higher than the average income of a college graduate in Flint, Michigan. I’ll buy these data, but I’m a bit skeptical that this results in any more take-home pay for the Bostonian, because wages in Boston have to be higher if people are to live there. Would this hold true if you looked at real wages, accounting for differences in cost of living?
If wages are genuinely higher in places like Boston in real terms, then this spatial inequality should be theoretically self-correcting. People from places like Flint should all move to places like Boston, and we’ll see a sudden drop in income inequality and a sudden jump in standard of living for people who only have high school degrees. Lindsey and Teles believe this isn’t happening because the scarcity of housing drives up the initial price of moving far beyond what people without substantial savings can pay – the same people who most need to be able to move.
Remember, many apartments require first and last month’s rent, plus a security deposit. I looked up San Francisco on PadMapper and the median rent looks to be something like $3300, a number that agrees with a cursory Google. Paying first and last on that, plus a damage deposit would cost you over $7,000. Add to that moving expenses, and you can see how it could be impossible for someone without savings to move to San Francisco, even if they could expect a relatively well-paid job.
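A back-of-the-envelope version of that up-front cost, using the post's $3,300 median rent figure (the deposit and moving-expense numbers are my own assumptions):

```python
# Up-front cost of renting a median San Francisco apartment.
median_rent = 3300                 # approximate median SF rent, per the post
first_and_last = 2 * median_rent   # many landlords want both up front
damage_deposit = 1500              # assumed deposit; varies by landlord
moving_expenses = 1000             # assumed cost of the move itself

upfront = first_and_last + damage_deposit + moving_expenses
print(upfront)  # 9100
```

Even before the first paycheque arrives, that's over $9,000 out of pocket under these assumptions, which is exactly the kind of barrier that keeps people without savings from moving toward higher wages.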
(Lack of movement hurts people who stay behind as well. When people move away in search of higher wages, businesses must eventually raise wages in places seeing a net drain of people, lest the whole workforce disappear. This effect probably led to some of the convergence in average income between states that occurred from 1880 to 1980, an effect that has now markedly slowed.)
Out of all of these examples of rent-seeking, the one I feel least optimistic about is zoning. The problem with zoning is that people have bought houses at the prices that zoning guaranteed. If we were to significantly loosen it, we’d be ruining many people’s principal investment. Even if increasing home wealth represents one of the single greatest sources of inequality in our society and even if it is exacting a terrifying toll on our economy, it will be extremely hard to build the sort of coalition necessary to break the backs of municipalities and local landowners.
Until we figure out how to do that, I’m going to continue to fight back tears every time I see a sign like this one:
How do we fight rent-seeking?
Surprisingly, most of the suggestions Lindsey and Teles put forth are minor, pro-democratic, and pro-government. There isn’t a single call in here to restrict democracy, shrink the size of the government, or completely overhaul anything major. They’re incrementalist, pragmatic, and give me a tiny bit of hope we might one day even be able to conquer zoning.
Rent-seeking is easiest when democracy is opaque, when it is speedy, when it is polarized, and when it is difficult for independent organizations to supply high-quality information to politicians.
One of the right-wing policies that Lindsey and Teles are harshest on is the effort to slash and burn the civil service. They claim that this has left the civil service unable to come up with policies or data of its own. Civil servants are stuck trusting the very people they seek to regulate for any data about the effects of their regulations.
Obviously, there are problems with this, even if it doesn’t extend to outright horse-trading or data manipulation. It’s relatively easy to nudge people’s decision-making by choosing how data are presented. Just slightly overstate the risks and play down the benefits. Or anchor someone with a plan you know they’re primed to like and don’t present them any alternatives that would hurt your bottom line. No briefcases of money change hands, but government is corrupted nonetheless.
To combat this, Lindsey and Teles suggest that all committees in the US House and Senate should have a staffing budget sufficient to hire numerous staffers, some of whom would work for the committee as a whole and others who would work for individual members. Everything would get reshuffled every two years, with a rank-match system used to assign preferences. Employee quality would be ensured by paying market-competitive salaries and letting go anyone who was too-consistently ranked low.
(Better salaries would also end the practice of staffers leaving to work for lobbyists after several years, a revolving door that makes rent-seeking easier.)
Having staff assigned to committees, rather than permanently to individual representatives, prevents representatives from diverting these resources to their re-election campaigns. It also might build bridges across partisan divides, because staff would be free from an us-vs.-them mentality.
The current partisan grip on politics can actually help rent-seeking. Lindsey and Teles claim that when partisanship is high, party discipline follows. Leaders focus on what the party agrees on. Unfortunately, neither party is in any sort of agreement with itself about combatting rent-seekers, even though fighting rent-seeking offers a compelling way to spur economic growth (ostensibly a core Republican priority) and decrease economic inequality (ostensibly a core Democratic priority).
If partisanship were less severe and the coalitions less uniform, leaders would have less power over their caucuses and representatives would search for ways to cooperate across the aisle whenever doing so could create wins for their constituents. This would mark a return to the “strange-bedfellows” temporary coalitions of bygone times. Perhaps one of these coalitions could be against rent-seeking?
Lindsey and Teles also call for more issues to be decided in general jurisdictions where public interest and opportunity for engagement are high. They point to studies that show teachers can extract rents when budgets are controlled by school boards (which are obscure and easily dominated by unions). When schools are controlled by mayors, it becomes much harder for rents to be extracted, because the venue is much broader. More people care about and vote for municipal representatives and mayors than attend school board meetings.
Similarly, they suggest that we should very rarely allow occupational licensing to be handled by the profession itself. When a professional licensing body stacked with members of the profession decides standards, it almost always does so in its own interest, not in the interest of the broader public. State governments, on the other hand, are better at considering what everyone wants.
Finally, politics cannot be too quick. If it’s possible to go from drafting a bill to passing it in less time than it takes to read it, then it’s obviously impossible to build up a public pressure campaign to stop any nastiness in it. If bills required one day of debate for every hundred pages in them and this requirement (or a similar one) was inviolable, then if someone buried something nasty in it (say, a repeal of a nation’s prevailing currency standards), people would know, would be able to organize, and would be able to make the electoral consequences of voting for it clear to their representatives.
To get to a point where any of this is possible, Lindsey and Teles suggest building up a set of policies at the local, state, and national levels and working to build public support for them. With these policies waiting in the wings, it will be possible to grab any political opportunity – the right scandal or outrage, perhaps – and pressure representatives to stand up against entrenched interests. Only in these moments when everyone is paying attention can we make it clear to politicians that their careers depend more on satisfying our desires than on satisfying the desires of the people who fund their campaigns. Since these moments are rare, preparation for them is key. It isn’t enough to start looking for a solution when an opportunity presents itself. If we don’t move quickly, the rent-seekers will.
This book is, I think, the opening salvo in this war. It’s slim, and its purpose is to introduce people from across the political spectrum to the problem of rent-seeking and galvanize them to prepare for when the time is right. Its authors are high-profile economists with major backing. Perhaps this is also a signal that similar backing might be available for anyone willing to innovate around anti-rent-seeking policy?
For my part, I had opposed rent-seeking because I knew it hurt economic growth. I hadn’t understood just how much it contributed to income inequality. Rent-seeking increases corporate profits, making capitalists far wealthier than labourers can ever hope to be. It inflates the salaries of already wealthy professionals at the cost of everyone else and locks people without college degrees out of all but the most moribund or dangerous parts of the job market. It leads bankers to speculate wildly, in a way that occasionally brings down the economy. And it makes the humble home-owners of last generation the millionaires of this one, while pricing millions out of what was once a rite of passage.
Lindsey and Teles convinced me that fighting rent-seeking is entirely consistent with my political commitments. Municipal elections are coming up and I’m committed to finding and volunteering for any candidate who is consistently anti-zoning. If none exists, then I’ll register myself. Winning almost isn’t the point. I want to be one of those people getting the word out, showing that alternatives to the current broken system are possible.
And when the time is right, I want to be there when those alternatives supplant the rent-seekers.
 Rent-seeking doesn’t necessarily have to lead to increased inequality. Strict immigration controls, monopolies, strong unions, and strict tariffs all extract rents. These rents, however, tend to distribute down or sideways, so don’t really increase inequality. ^
 Banks don’t keep enough money on hand to cover deposits entirely, because they need to lend out money to make money. If banks didn’t lend money, you’d have to pay them for the privilege of parking your money there. This means that banks run into a problem when everyone tries to withdraw their money at once. Eventually, there will be no more money and the bank will fail. This used to happen all the time.
Before deposits were insured, it was only rational to withdraw your money if you thought there was even a small chance of a bank run. If you didn’t withdraw your money from a bank without deposit insurance and a bank run happened, you would lose your whole deposit.
Bank architecture reflects this risk. Everything about the imposing facades of old banks is supposed to make you think they’re as stable as possible and so feel comfortable keeping your money there. ^
 I wonder if this generalizes? Would a parliament full of engineers be obsessed with optimization and fond of very clear laws? Would a parliament full of doctors spend a lot of time running a differential diagnosis on the nation? Certainly military dictators excel at seeing everyone as an enemy on whom force can be justifiably used. ^
 College graduates in the wealthiest cities make 61% more money than college graduates in the least wealthy cities, while people with only high school degrees make 137% more in the richest cities compared to the poorest cities. This suggests that it’s possible high school graduates are much better off in wealthy cities, but it could also be true that college graduates fall prey to money illusions or are willing to pay a premium to live in a place that provides them with many more opportunities for new experiences. ^
 I think there will also always be social factors preventing people from moving, but perhaps these factors would weigh less heavily if real wage differences between thriving cities and declining areas weren’t driven down by inflated real estate prices in cities. ^
 This is perhaps the most insidious – and unintended – consequence of Stephen Harper’s agenda for Canada. Cutting the long-form census made it harder for the Canadian government to enact social policies (Harper’s goal), but if these sorts of actions aren’t checked, reversed, and guarded against, they also make rent-seeking much more likely. ^
 In Canadian politics, I have hope that some sort of housing affordability coalition could form between some members from left-leaning parties and some principled free-marketers. Michael Chong already has a plan to lower housing prices by getting the government out of the loan securitization business. No doubt banks wouldn’t enjoy this, but I for one would appreciate it if my taxes couldn’t be used to bail out failing banks. ^
There is perhaps no temptation greater to the amateur (or professional) historian than to take a set of historical facts and draw from them a grand narrative. This tradition has existed at least since Gibbon wrote The History of the Decline and Fall of the Roman Empire, with its focus on declining civic virtue and the rise of Christianity.
Obviously, it is true that things in history happen for a reason. But I think the case is much less clear that these reasons can be marshalled like soldiers and made to march in neat lines across the centuries. What is true in one time and place may not necessarily be true in another. When you fall under the sway of a grand narrative, when you believe that everything happens for a reason, you may become tempted to ignore all of the evidence to the contrary.
Instead of praying at the altar of grand narratives, I’d like to suggest that you embrace the ambiguity of history, an ambiguity that exists because…
Context Is Tricky
Here are six sentences someone could tell you about their interaction with the sharing economy:
I stayed at an Uber last night
I took an AirBnB to the mall
I deliberately took an Uber
I deliberately took a Lyft
I deliberately took a taxi
I can’t remember which ride-hailing app I used
Each of these sentences has an overt meaning. They describe how someone spent a night or got from place A to place B. They also have a deeper meaning, a meaning that only makes sense in the current context. Imagine your friend told you that they deliberately took an Uber. What does it say about them that they deliberately took a ride in the most embattled and controversial ridesharing platform? How would you expect their political views to differ from someone who told you they deliberately took a taxi?
Even simple statements carry a lot of hidden context, context that is necessary for full understanding.
Do you know what the equivalent statements to the six I listed would be in China? How about in Saudi Arabia? I can tell you that I don’t know either. Of course, it isn’t particularly hard to find these out for China (or Saudi Arabia). You may not find a key written down anywhere (especially if you can only read English), but all you have to do is ask someone from either country and they could quickly give you a set of contextual equivalents.
Luckily historians can do the same… oh. Oh damn.
When you’re dealing with the history of a civilization that “ended” hundreds or thousands of years ago, you’re going to be dealing with cultural context that you don’t fully understand. Sometimes people are helpful enough to write down “Uber=kind of evil” and “supporting taxis = very left wing, probably vegan & goes to protests”. A lot of the time they don’t though, because that’s all obvious cultural context that anyone they’re writing to would obviously have.
And sometimes they do write down even the obvious stuff, only for it all to get burned when barbarians sack their city, leaving us with no real way to understand if a sentence like “the opposing orator wore red” has any sort of meaning beyond a statement of sartorial critique or not.
All of this is to say that context can make or break narratives. Look at the play “Hamilton”. It’s a play aimed at urban progressives. The titular character’s strong anti-slavery views are supposed to code to a modern audience that he’s on the same political team as them. But if you look at American history, it turns out that support for abolishing slavery (and later, abolishing segregation) and support for big corporations over the “little guy” were correlated until very recently. From the 1960s through the 1990s, there was a shift such that the Democrats came to stand for both civil rights and supporting poorer Americans, instead of just the latter. Before this shift, Democrats were the party of segregation, not that you’d know it to see them today.
Trying to tie Hamilton into a grander narrative of (eventual) progressive triumph erases the fact that most of the modern audience would strenuously disagree with his economic views (aside from urban neo-liberals, who are very much in Hamilton’s mold). Audiences end up leaving the play with a story about their own intellectual lineage that is far from correct, a story that may cause them to feel smugly superior to people of other political stripes.
History optimized for this sort of team or political effect turns many modern historians or history writers into…
Gaps in context – modern readers missing the true significance of gestures, words, and acts steeped in a particular extinct culture – combined with the fact that it is often impossible to really know why someone in the past did something, mean that some of history is always going to be filled in with our best guesses.
Professor Mary Beard really drove this point home for me in her book SPQR. She showed me how history that I thought was solid was often made up of myths, exaggerations, and wishful thinking on the parts of modern authors. We know much less about Rome than many historians had led me to believe, probably because any nuance or alternative explanation would ruin their grand theories.
When it comes to so much of the past, we genuinely don’t know why things happened.
I recently heard two colleagues arguing about The Great Divergence – the unexplained difference in growth rates between Europe and the rest of the world that became apparent in the 1700s and 1800s. One was very confident that it could be explained by access to coal. The other was just as confident that it could be explained by differences in property rights.
I waded in and pointed out that Wikipedia lists fifteen possible explanations, all of which or none of which could be true. Confidence about the cause of The Great Divergence seems to me a very silly thing. We cannot reproduce it, so every theory about it is effectively unfalsifiable.
But both of my colleagues had read narrative accounts of history. And these narrative accounts had agendas. One wished to show that all peoples had the same inherent abilities and so cast The Great Divergence as chance. The other wanted to show how important property rights are and so made those the central factor in it. Neither gave much time to the other explanation, or any of the thirteen others that a well-trafficked and heavily edited Wikipedia article finds equally credible.
Neither agenda was bad here. I am in fact broadly in favour of both. Yet their effect was to give two otherwise intelligent and well-read people a myopic view of history.
So much of narrative history is like this! Authors take the possibilities they like best, or that support their political beliefs the best, or think will sell the best, and write them down as if they are the only possibilities. Anyone who is unlucky enough to read such an account will be left with a false sense of certainty – and in ignorance of all the other options.
Of course, I have an agenda too. We all do. It’s just that my agenda is literally “the truth resists simplicity”. I like the messiness of history. It fits my aesthetic sense well. It’s because of this sense that I’d like to encourage everyone to make their next foray into history free of narratives. Use Wikipedia or a textbook instead of a bestselling book. Read something by Mary Beard, who writes as much about historiography as she writes about history. Whatever you do, avoid books with blurbs praising the author for their “controversial” or “insightful” new theory.
Just once, leave behind famous narrative works like “Guns, Germs, and Steel” or “The History of the Decline and Fall of the Roman Empire” and pick up something that embraces ambiguity and doesn’t bury messiness behind a simple agenda.
The Righteous Mind follows an argument structure I learned in high school debate club. It tells you what it’s going to tell you, it tells you it, then it reminds you what it told you. This made it a really easy read and a welcome break from The Origins of Totalitarianism, the other book I’ve been reading. Practically the very first part of The Righteous Mind proper (after the foreword) is an introduction to its first metaphor.
Imagine an elephant and a rider. They have travelled together since their birth and move as one. The elephant doesn’t say much (it’s an elephant), but the rider is very vocal – for example, she’s quick to apologize and explain away any damage the elephant might do. A casual observer might think the rider is in charge, because she is so much cleverer and more talkative, but that casual observer would be wrong. The rider is the press secretary for the elephant. She explains its actions, but it is much bigger and stronger than her. It’s the one who is ultimately calling the shots. Sometimes she might convince it one way or the other, but in general, she’s buffeted along by it, stuck riding wherever it goes.
She wouldn’t agree with that last part though. She doesn’t want to admit that she’s not in charge, so she hides the fact that she’s mainly a press secretary even from herself. As soon as the elephant begins to move, she is already inventing a reason why it was her idea all along.
This is how Haidt views human cognition and decision making. In common terms, the elephant is our unconscious mind and the rider our consciousness. In Kahneman’s terms, the elephant is our System 1 and the rider our System 2. We may make some decisions consciously, but many of them are made below the level of our thinking.
Haidt illustrates this with an amusing anecdote. His wife asks him why he didn’t finish some dishes he’d been doing and he immediately weaves a story of their crying baby and barking incontinent dog preventing him. Only because he had his book draft open on his computer did he realize that these were lies… or rather, a creative and overly flattering version of the truth.
The baby did indeed cry and the dog indeed bark, but neither of these prevented him from doing the dishes. The cacophony happened well before that. He’d been distracted by something else, something less sympathetic. But his rider, his “internal press secretary”, immediately came up with an excuse and told it, without any conscious input or intent to deceive.
We all tell these sorts of flattering lies reflexively. They take the form of slight, harmless embellishments to make our stories more flattering or interesting, or our apologies more sympathetic.
The key insight here isn’t that we’re all compulsive liars. It’s that the “I” that we like to think exists to run our life doesn’t, really. Sometimes we make decisions, especially ones the elephant doesn’t think it can handle (high stakes apologies anyone?), but normally decisions happen before we even think about them. From Haidt’s perspective, “I” is really “we”: the elephant and its rider. And we need to be careful to give the elephant its due, even though it’s quiet.
Haidt devotes a lot of pages to an impassioned criticism of moral rationalism, the belief that morality is best understood and attained by thinking very hard about it. He explicitly mentions that to make this more engaging, he wraps it up in his own story of entering the field of moral psychology.
He starts his journey with Kohlberg, who published a famous account of the stages of moral reasoning, stages that culminate in rationally building a model of justice. This paradigm took the world of moral psychology by storm and reinforced the view (dating in Western civilization to the times of the Greeks) that right thought had to precede right action.
Haidt was initially enamoured with Kohlberg’s taxonomy. But reading ethnographies and doing research in other countries began to make him suspect things weren’t as simple as Kohlberg thought. Haidt and others found that moral intuitions and responses to dilemmas differed by country. In particular, WEIRD people (people from countries that were Western, Educated, Industrialized, Rich, and Democratic, and most especially the most educated people in those countries) were very much able to tamp down feelings of disgust in moral problems, in a way that seemed far from universal.
For example, if asked if it was wrong for a family to eat their dog if it was killed by a car (and the alternative was burying it), students would say something along the lines of “well, I wouldn’t, but it’s gross, not wrong”. Participants recruited at a nearby McDonalds gave a rather different answer: “of course it’s wrong, why are you even asking”. WEIRD students at prestigious universities may have been working towards a rational, justice-focused explanation for morality, but Haidt found no evidence that this process (or even a focus on “justice”) was as universal as Kohlberg claimed.
That’s not to say that WEIRD students had no disgust response. In fact, trying to activate it gave even more interesting results. When asked to justify answers where disgust overpowered students’ sense of “well, as long as no one was hurt” (e.g. consensual adult sibling incest with no chance of children), Haidt observed that people would throw up a variety of weak excuses, often before they had a chance to think the problem through. When confronted by the weakness of their arguments, they’d go speechless.
This made Haidt suspect that two entirely separate processes were going on. There was a fast one for deciding and a slower one for explanation. Furthermore, the slower process was often left holding the bag for the faster one. Intuitions would provide an answer, then the subject would have to explain it, no matter how logically indefensible it was.
Haidt began to believe that Kohlberg had only keyed in on the second, slower process, “the talking of the rider” in metaphor-speak. From this point of view, Kohlberg wasn’t measuring moral sophistication. He was instead measuring how fluidly people could explain their often less than logical moral intuitions.
There were two final nails in the coffin of ethical rationalism for Haidt. First, he learned of a type of brain injury that separated people from their moral intuitions (or as the rationalists might call them, “passions”). Contrary to the rationalist expectation, these people’s lives went to hell: they alienated everyone they knew, got fired from their jobs, and in general proved the unsuitability of pure reason for making many types of decisions.
Second, he saw research that suggested that in practical measures (like missing library books), moral philosophers were no more moral than other philosophy professors.
Abandoning rationalism brought Haidt to a sentimentalist approach to ethics. In this view, ethics stemmed from feelings about how the world ought to be. These feelings are innate, but not immutable. Haidt describes people as “prewired”, not “hardwired”. You might be “prewired” to have a strong loyalty foundation, but a series of betrayals and let downs early in life might convince you that loyalty is just a lie, told to control idealists.
Haidt also believes that our elephants are uniquely susceptible to being convinced by other people in face to face discussion. He views the mechanism here as empathy at least as much as logic. People that we trust and respect can point out our weak arguments, with our respect for them and positive feelings towards them being the main motive force for us listening to these criticisms. The metaphor with elephants kind of breaks down here, but this does seem to better describe the world as it is, so I’ll allow it.
Because of this, Haidt would admit that rationalism does have some purpose in moral reasoning, but he thinks it is ancillary and mainly used to convince other people. I’m not sure how testable making evolutionary conclusions about this is, but it does seem plausible for there to be selection pressure to make us really good at explaining ourselves and convincing others of our point of view.
As Haidt took this into account and began to survey peoples’ moral instincts, he saw that the ways in which responses differed by country and class were actually highly repeatable and seemed to gesture at underlying categories of people. After analyzing many, many survey responses, he and his collaborators came up with five (later six) moral “modules” that people have. Each moral module looks for violations of a specific class of ethical rules.
Haidt likens these modules to our taste-buds. The six moral tastes are the central metaphor of the second section of the book.
Not everyone has these taste-buds/modules in equal proportion. Looking at commonalities among respondents, Haidt found that the WEIRDer someone was, the less likely they were to have certain modules. Conservatives tended to have all modules in a fairly equal proportion, liberals tended to be lacking three. Libertarians were lacking a whopping four, which might explain why everyone tends to believe they’re the worst.
The six moral foundations are:
Care/Harm
This is the moral foundation that makes us care about suffering and pain in others. Haidt speculates that it originally evolved to ensure that children (which are an enormous investment of resources for mammals and doubly so for us) got properly cared for. It was originally triggered only by the suffering or distress of our own children, but can now be triggered by anyone being hurt, as well as by cute cat videos or baby seals.
An expanding set of triggers seems to be a common theme for these. I’ve personally speculated that this would perhaps be observed if the brain was wired for minimizing negative predictive error (i.e. not mistaking a scene in which there is a lion for a scene without a lion), rather than positive predictive error (i.e. not mistaking a scene without a lion for a scene with a lion). If you minimize positive predictive error, you’ll never be frightened by a shadow, but you might get eaten by a lion.
Fairness/Cheating
This is the moral foundation that makes us want everyone to do their fair share and makes us want to punish tax evaders or welfare cheats (depending on our political orientation). The evolutionary story given for this one is that it evolved to allow us to reap the benefits of two-way partnerships; it was an incentive against defecting.
Loyalty/Betrayal
This is the foundation that makes us rally around our politicians, community leaders, and sports teams, as well as the foundation that makes some people care more about people from their country than people in general. Haidt’s evolutionary explanation for this one is that it evolved to ensure group cohesion.
Authority/Subversion
This is the moral foundation that makes people obey their boss without talking back or avoid calling their parents by their first names. It supposedly evolved to allow us to forge beneficial relationships within hierarchies. Basically, it may have once been very useful to have people believe and obey their elders without question (e.g. when the elders say “don’t drink that water, it’s poisoned”, everyone abstains, and the story can be passed down and keep people safe without someone having to die every few years to prove that the water is indeed poisoned).
Sanctity/Degradation
This is the moral foundation that makes people on the right leery of pre-marital sex and people on the left leery of “chemicals”. It shows up whenever we view our bodies as more than just our bodies and the world as more than just a collection of things, as well as whenever we feel that something makes us “spiritually” dirty.
The very plausible explanation for this one is that it evolved in response to the omnivore’s dilemma: how do we balance the desire for novel food sources with the risk they might poison us? We do it by avoiding anything that looks diseased or rotted. This became a moral foundation as we slowly began applying it to stuff beyond food – like other people. Historically, the sanctity moral framework was probably responsible for the despised status of lepers.
Liberty/Oppression
This moral foundation is always in tension with Authority/Subversion. It’s the foundation that makes us want to band together against and cast down anyone who is aggrandizing themselves or using their power to mistreat another.
Haidt suggests that this evolved to allow us to band together against “alpha males” and check their power. In his original surveys, it was part of Fairness/Cheating, but he found that separating it gave him much more resolving power between liberals and conservatives.
Of these six foundations, Haidt found that libertarians only had an appreciable amount of Liberty/Oppression and Fairness/Cheating and of these two, Liberty/Oppression was by far the stronger. While the other foundations did exist, they were mostly inactive and only showed up under extreme duress. For liberals, he found that they had Care/Harm, Liberty/Oppression, and Fairness/Cheating (in that order).
Conservatives in Haidt’s survey had all six moral foundations, like I said above. Care/Harm was their strongest foundation, but by having appreciable amounts of Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation, they would occasionally overrule Care/Harm in favour of one or another of these foundations.
Haidt uses these moral foundations to give an account of the “improbable” coalition between libertarians and social conservatives that closely matches the best ones to come out of political science. Basically, liberals and libertarians are descended (ideologically, if not filially) from those who embraced the enlightenment and the liberty it brought. About a hundred years ago (depending on the chronology and the country), the descendants of the enlightenment had a great schism, with some continuing to view the government as the most important threat to liberty (libertarians) and others viewing corporations as the more pressing threat (liberals). Liberals took over many institutions of government and have been trying to use them to guarantee their version of liberty (with mixed results and many reversals) ever since.
Conservatives do not support this project of remaking society from the top down via the government. They believe that liberals want to change too many things, too quickly. Conservatives aren’t opposed to the government qua government. In fact, they’d be very congenial to a government that shared their values. But they are very hostile to a liberal, activist government (which is rightly or wrongly how conservatives view the governments of most western nations) and so team up with libertarians in the hopes of dismantling it.
It is an attraction to murder and treason which hides behind such perverted tolerance, for in a moment it can switch to a decision to liquidate not only all actual criminals but all who are “racially” predestined to commit certain crimes. Such changes take place whenever the legal and political machine is not separated from society so that social standards can penetrate into it and become political and legal rules. The seeming broad-mindedness that equates crime and vice, if allowed to establish its own code of law, will invariably prove more cruel and inhuman than laws, no matter how severe, which respect and recognize man’s independent responsibility for his behavior.
That said, it is possible for inconvenient or dangerous things to be true and their inconvenience or danger has no bearing on their truth. If Haidt saw his writings being used to justify or promote violence, he’d have a moral responsibility to decry the perpetrators. Accepting that sort of moral responsibility is, I believe, part of the responsibility that scientists who deal with sensitive topics must accept. I do not believe that this responsibility precludes publishing. I firmly believe that only right information can lead to right action, so I am on the whole grateful for Haidt’s taxonomy.
The similarities between liberals and libertarians extend beyond ethics. Both have more openness to experience and less of a threat response than conservatives. This explains why socially, liberals and libertarians have much more in common than liberals and conservatives.
The third and final section of The Righteous Mind further focuses on political tribes. Its central metaphor is that humans are “90% chimp, 10% bee”. Its central purpose is an attempt to show how humans might have been subject to group selection and how our groupishness is important to our morality.
Haidt claims that group selection is heresy in evolutionary biology (beyond hive insects). I don’t have the evolutionary biology background to say if this is true or not, although this does match how I’ve seen it talked about online among scientifically literate authors, so I’m inclined to believe him.
Haidt walks through the arguments against group selection and shows how they are largely sensible. It is indeed ridiculous to believe that genes for altruism could be preserved in most cases. Imagine a gene that would make a deer more likely to sacrifice itself for the good of the herd if it seemed that was the only way to protect the herd’s young. This gene might help more deer in the herd attain adulthood, but it would also lead to any deer who had it having fewer children. There’s certainly an advantage to the herd if some members have this gene, but there’s no advantage to the carriers and a lot of advantage to every deer in the herd who doesn’t carry it. Free-riders will outcompete sacrificers and the selfless gene will get culled from the herd.
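The dynamics of that argument are easy to see in a few lines of code. The following toy simulation is my own illustration, not anything from the book, and the numbers (herd size, litter sizes) are arbitrary assumptions: altruists average fewer offspring, the herd-level benefit is shared by everyone, and the herd stays a fixed size.

```python
import random

# Toy sketch of the free-rider argument: altruists pay a personal
# reproductive cost, while any herd-level benefit is shared equally,
# so within a single mixed herd the altruist gene loses ground.
random.seed(0)
HERD_SIZE = 100
herd = ["altruist"] * 50 + ["free_rider"] * 50

for generation in range(100):
    next_gen = []
    for deer in herd:
        # Free riders skip the sacrifice and so average more offspring.
        litter = 2 if deer == "free_rider" else 1
        next_gen.extend([deer] * litter)
    # The environment supports a fixed herd size: cull at random.
    herd = random.sample(next_gen, HERD_SIZE)

print(herd.count("altruist"))  # overwhelmingly likely to print 0
```

Because free riders enjoy the benefit without paying the cost, the altruist share shrinks every generation until random drift finishes it off, which is exactly why naive group selection arguments fail.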
But humans aren’t deer. We can be selfish, yes, but we often aren’t and the ways we aren’t can’t be simply explained by greedy reciprocal altruism. If you’ve ever taken some time out of your day to help a lost tourist, congratulations, you’ve been altruistic without expecting anything in return. That people regularly do take time out of their days to help lost tourists suggests there might be something going on beyond reciprocal altruism.
Humans, unlike deer, have the resources and ability to punish free riders. We expect everyone to pitch in and might exile anyone who doesn’t. When humans began to form larger and larger societies, it makes sense that the societies who could better coordinate selfless behaviour would do better than those that couldn’t. And this isn’t just in terms of military cohesion (as the evolutionary biologist Lesley Newson had to point out to Haidt). A whole bunch of little selfless acts – sharing food, babysitting, teaching – can make a society more efficient than its neighbours at “turning resources into offspring”.
A human within the framework of society is much more capable than a human outside of it. I am only able to write this and share it widely because a whole bunch of people did the grunt work of making the laptop I’m typing it on, growing the food I eat, maintaining our communication lines, etc. If I was stuck with only my own resources, I’d be carving this into the sand (or more likely, already eaten by wolves).
Therefore, it isn’t unreasonable to expect that the more successful and interdependent a society could become, the more it would be able to outcompete, whether directly or indirectly, its nearby rivals and so increase the proportion of its conditionally selfless genes in the human gene pool.
Conditional selflessness is a better description of the sorts of altruism we see in humans. It’s not purely reciprocal as Dawkins might claim, but it isn’t boundless either. It’s mostly reserved for people we view as similar to us. This doesn’t need to mean racially or religiously. In my experience, a bond as simple as doing the same sport is enough to get people to readily volunteer their time for projects like digging out and repairing a cracked foundation.
The switch from selfishness to selflessly helping out our teams is called “the hive switch” by Haidt. He devotes a lot of time to exploring how we can flip it and the benefits of flipping it. I agree with him that many of the happiest and most profound moments of anyone’s life come when the switch has been activated and they’re working as part of a team.
The last few chapters are an exploration of how individualism can undermine the hive switch and several mistakes liberals make in their zeal to overturn all hierarchies. Haidt believes that societies have both social capital (the bonds of trust between people) and moral capital (the society’s ability to bind people to collective values) and worries that liberal individualism can undermine these to the point where people will be overall worse off. I’ll talk more about moral capital later in the review.
II – On Shaky Foundations
Anyone who reads The Righteous Mind might quickly realize that I left a lot of the book out of my review. There was a whole bunch of supporting evidence about how liberals and conservatives “really are” or how they differ that I have deliberately omitted.
You may have heard that psychology is currently in the midst of a “replication crisis”. Much (I’d crudely estimate somewhere between 25% and 50%) of the supporting evidence in this book has been a victim of this crisis.
Here’s what the summary of Chapter 3 looks like with the offending evidence removed:
Here’s an incomplete list of claims that didn’t replicate:
IAT tests show that we can have unconscious prejudices that affect how we make social and political judgements (1, 2, 3 critiques/failed replications). Used to buttress the elephant/rider theory of moral decisions.
Disgusting smells can make us more judgemental (failed replication source). Used as evidence that moral reasoning can sometimes be explained by external factors and is much less rational than we’d like to believe.
Babies prefer a nice puppet over a mean one, even when pre-verbal and probably lacking the context to understand what is going on (failed replication source). Used as further proof for how we are “prewired” for certain moral instincts.
People from Asian societies are better able to do relative geometry and less able to do absolute geometry than westerners (failed replication source). This was used to make the individualistic morality of westerners seem inherent.
The “Lady Macbeth Effect” showed a strong relationship between physical and moral feelings of “cleanliness” (failed replication source). Used to further strengthen the elephant/rider analogy.
The proper attitude with which to view psychology studies these days is extreme scepticism. A series of bad incentives (it’s harder and less prestigious to publish negative findings; publishing is necessary to advance in your career) has led scientists in psychology (and other fields) to publish false results, both inadvertently and deliberately. In any field in which you expect true discoveries to be rare (and I think “interesting and counter-intuitive things about the human brain” fits that bill), you shouldn’t allow any individual study to influence you very much. For a full breakdown of how this can happen even when scientists check for statistical significance, I recommend reading “Why Most Published Research Findings Are False” (Ioannidis 2005).
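The core of the Ioannidis argument fits in a few lines of arithmetic. The numbers below (base rate of true hypotheses, statistical power, significance threshold) are my own illustrative assumptions, not figures from his paper or from Haidt’s book:

```python
# Suppose only 1 in 10 hypotheses tested in a field is actually true,
# studies have 80% power, and use the usual significance level of 0.05.
prior = 0.10   # fraction of tested hypotheses that are true (assumption)
power = 0.80   # chance a true effect yields a significant result (assumption)
alpha = 0.05   # chance a null effect yields a significant result anyway

true_positives = prior * power          # 0.08 of all studies
false_positives = (1 - prior) * alpha   # 0.045 of all studies

# Probability that a given "significant" finding reflects a real effect:
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 2))  # 0.64
```

Even under these fairly generous assumptions, more than a third of “significant” findings are false, and that’s before accounting for p-hacking or publication bias.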
Moral foundations theory appears to have escaped the replication crisis mostly unscathed (as has Tversky and Kahneman’s work on heuristics, something that made me more comfortable including the elephant/rider analogy). I think this is because moral foundations theory is primarily a descriptive theory. It grew out of a large volume of survey responses and represents clusters in those responses. It makes little in the way of concrete predictions about the world. It’s possible to quibble with the way Haidt and his collaborators drew the category boundaries. But given the sheer volume of responses they received – and the fact that they based their results not just on WEIRD individuals – it’s hard to believe that they haven’t come up with a reasonable clustering of the possibility space of human values.
I will say that stripped of much of its ancillary evidence, Haidt’s attack on rationalism lost a lot of its lustre. It’s one thing to believe morality is mostly unconscious when you think that washing your hands or smelling trash can change how moral you act. It’s quite another when you know those studies were fatally flawed. The replication crisis fueled my inability to truly believe Haidt’s critique of rationality. This disbelief in turn became one of the two driving forces in my reaction to this book.
Haidt’s moral relativism around patriarchal cultures was the other.
III – Less and Less WEIRD
It’s good that Haidt looked at a variety of cultures. This is a thing few psychologists do. There’s historically been an alarming tendency to run studies on western undergraduate students, then declare “this is how people are”. This would be fine if western undergraduates were representative of people more generally, but I think that assumption was on shaky foundations even before moral foundations theory showed that morally, at least, it was entirely false.
Haidt even did some of this field work himself. He visited South America and India to run studies. In fact, he mentioned that this field work was one of the key things that made him question the validity of western individualistic morality and wary of morality that didn’t include the sanctity, loyalty, and authority foundations.
His willingness to get outside of his bubble and to learn from others is laudable.
There is one key way in which Haidt never left his bubble, a way which makes me inherently suspicious of all of his defences of the sanctity, authority, and loyalty moral foundations. Here’s him recounting his trip to India. Can you spot the fatal omission?
I was told to be stricter with my servants, and to stop thanking them for serving me. I watched people bathe in and cook with visibly polluted water that was held to be sacred. In short, I was immersed in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine.
It only took a few weeks for my dissonance to disappear, not because I was a natural anthropologist but because the normal human capacity for empathy kicked in. I liked these people who were hosting me, helping me, and teaching me. Wherever I went, people were kind to me. And when you’re grateful to people, it’s easier to adopt their perspective. My elephant leaned toward them, which made my rider search for moral arguments in their defense. Rather than automatically rejecting the men as sexist oppressors and pitying the women, children, and servants as helpless victims, I began to see a moral world in which families, not individuals, are the basic unit of society, and the members of each extended family (including its servants) are intensely interdependent. In this world, equality and personal autonomy were not sacred values. Honoring elders, gods, and guests, protecting subordinates, and fulfilling one’s role-based duties were more important.
Haidt tried out other moral systems, sure, but he tried them out from the top. Lois McMaster Bujold once had a character quip: “egalitarians adjust to aristocracies just fine, as long as they get to be the aristocrats”. I would suggest that liberals likewise find the authority framework all fine and dandy, as long as they have the authority.
Would Haidt have been able to find anything worth salvaging in the authority framework if he’d instead been a female researcher, who found herself ignored, denigrated, and sexually harassed on her research trip abroad?
It’s frustrating when Haidt lectures liberals on their “deficient” moral framework while simultaneously failing to grapple with the fact that he is remarkably privileged. “Can’t you see how this other society knows some moral truths [like men holding authority over women] that we’ve lost” is much less convincing when the author of the sentence stands to lose absolutely nothing in the bargain. It’s easy to lecture others on the hard sacrifices society “must” make – and far harder to look for sacrifices that will mainly affect you personally.
It is in this regard that I found myself wondering if this might have been a more interesting book if it had been written by a woman. If the hypothetical female author were to defend the authority framework, she’d actually have to defend it, instead of hand-waving the defence with a request that we respect and understand all ethical frameworks. And if this hypothetical author found it indefensible, we would have been treated to an exploration of what to do if one of our fundamental ethical frameworks was flawed and had to be discarded. That would be an interesting conversation!
Not only that, but perhaps a female author would have fully explored the observation that women and children’s role in societal altruism is just as important as men’s (as child-rearing is a more reliable way to demonstrate and cash in on groupishness than battle), instead of relegating it to a brief note at the end of the chapter on group selection. This perspective is genuinely new to me and I wanted to see it developed further.
Ultimately, Haidt’s defences of Authority/Subversion, Loyalty/Betrayal, and Sanctity/Degradation fell flat in the face of my Care/Harm and Liberty/Oppression focused moral compass. Scott Alexander once wrote about the need for “a solution to the time-limitedness of enlightenment that works from within the temporal perspective”. By the same token, I think Haidt fails to deliver a defence of conservatism or anything it stands for that works from within the liberal Care/Harm perspective. Insofar as his book was meant to bridge inferential gaps and political divides, this makes it a failure.
That’s a shame, because arguments that bridge this divide do exist. I’ve read some of them.
IV – What if Liberals are Wrong?
There is a principle called “Chesterton’s Fence”, which comes from the famed Catholic conservative and author G.K. Chesterton. It goes like this: if you see a fence blocking the road and cannot see the reason for it to be there, should you remove it? Chesterton said “no!”, resoundingly. He suggested you should first understand the purpose of the fence. Only then may you safely remove it.
There is a strain of careful conservatism that holds Chesterton’s fence as its dearest parable. Haidt makes brief mention of this strain of thought, but doesn’t expound on it successfully. I think it is this thought and this thought only that can offer Care/Harm focused liberals like myself a window into the redeeming features of the conservative moral frameworks.
Here’s what the argument looks like:
Many years ago, western nations had a unified moral framework. This framework supported people in making long-term decisions and acting in a pro-social manner. Many people want to act differently than they would if left to their own devices, and this framework helped them do so.
Liberals began to dismantle this system in the sixties. They saw hierarchies, and people unable to do the things they wanted to do, so they tried to take down the whole edifice without first checking if any of it was doing anything important.
Here’s the thing. All of these trends affect well-educated and well-off liberals the least. We’re safe from crime in good neighbourhoods. We overwhelmingly wait until stable partnerships to have children. We can afford therapists and pills to help us with any mental health issues we might have; rehab to help us kick any drug habits we pick up.
Throwing off the old moral matrix has been an unalloyed good for privileged white liberals. We get to have our cake and eat it too – we have fun, take risks, but know that we have a safety net waiting to catch us should we fall.
The conservative appeal to tradition points out that our good time might be at the expense of the poor. It asks us if our hedonistic pleasures are worth a complete breakdown in stability for people with fewer advantages than us. It asks us to consider sacrificing some of these pleasures so that they might be better off. I know many liberals who might find the sacrifice of some of their freedom to be a moral necessity, if framed this way.
But even here, social conservatism has the seeds of its own undoing. I can agree that children do best when brought up by loving and committed parents who give them a lot of stability (moving around in childhood is inarguably bad for many kids). Given this, the social conservative opposition to gay marriage (despite all evidence that it doesn’t mess kids up) is baffling. The sensible position would have been “how can we use this to make marriage cool again?”, not “how long can we delay this?”.
This is a running pattern with social conservatism. It conserves blindly, without giving thought to what is even worth preserving. If liberals have some things wrong, that doesn’t automatically mean that the opposite is correct. It’s disturbingly easy for people on both sides of an issue to be wrong.
I’m sure Haidt would point out that this is why we have the other frameworks. But because of who I am, I’m personally much more inclined to do things in the other direction – throw out most of the past, then re-implement whatever we find to be useful but now lacking.
V – What if Liberals Listened?
In Berkeley, California, its environs, and assorted corners of the Internet, there exists a community that calls themselves “Rationalists” – a moniker they keep despite agreeing with Haidt about the futility of rationalism. Epistemically, they tend to be empiricists. Ethically, non-cognitivist utilitarians. Because they are largely Americans, they tend to be politically disengaged, but if you held them at gunpoint and demanded a political affiliation, they would probably say either “liberal” or “libertarian”.
The rationalist community has semi-public events that mimic many of the best parts of religious events, normally based around the solstices (although I also attended a secular Seder when I visited last year).
The rationalist community has managed to do the sort of thing Haidt despaired of: create a strong community with communal morality in a secular, non-authoritarian framework. There are communal norms (although they aren’t very normal; polyamory and vegetarianism or veganism are very common). People tend to think very hard before having children and take care ensuring that any children they have will have a good extended support structure. People live in group houses, which combats atomisation.
This is also a community that is very generous. Many of the early adherents of Effective Altruism were drawn from the rationalist community. It’s likely that rationalists donate to charity in amounts more similar to Mormons than atheists (with the added benefit of almost all of this money going to saving lives, rather than proselytizing).
No community is perfect. This is a community made up of people. It has its fair share of foibles and megalomanias, bad actors and jerks. But it represents something of a counterpoint to Haidt’s arguments about the “deficiency” of a limited framework morality.
Furthermore, its altruism isn’t limited in scope, the way Haidt believes all communal altruism must necessarily be. Rationalists encourage each other to give to causes like malaria eradication (which mainly helps people in Africa), or AI risk (which mainly helps future people). Because there are few cost effective local opportunities to do good (for North Americans), this global focus allows for more lives to be saved or improved per dollar spent.
This is all of it, I think, the natural result of thoughtful people throwing away most cultural traditions and vestiges of traditionalist morality, then seeing what breaks and fixing those things in particular. It’s an example of what I wished for at the end of the last section applied to the real world.
VI – Is or Ought?
I hate to bring up the Hegelian dialectic, but I feel like this book fits neatly into it. We had the thesis: “morality stems from rationality” that was so popular in western political thought. Now we have the antithesis: “morality and rationality are separate horses, with rationality subordinate – and this is right and proper”.
I can’t wait for someone other than Haidt to write a synthesis; a view that rejects rationalism as the basis of human morality but grapples with the fact that we yearn for perfection.
Haidt, in the words of Joseph Heath, thinks that moral discourse is “essentially confabulatory”, consisting only of made up stories that justify our moral impulses. There may be many ways in which this is true, but it doesn’t account for the fact that some people read Peter Singer’s essay “Famine, Affluence, and Morality” and go donate much of their money to the global poor. It doesn’t account for all those who have listened to the Sermon on the Mount and then abandoned their possessions to live a monastic life.
I don’t care whether you believe in The Absolute, or God, or Allah, or The Cycle of Rebirth, or the World Soul, or The Truth, or nothing at all. You probably have felt that very human yearning to be better. To do better. You’ve probably believed that there is a Good and it can perhaps be comprehended and reached. Maybe this is the last vestiges of my atrophied sanctity foundation talking, but there’s something base about believing that morality is solely a happy accident of how we evolved.
The is/ought fallacy occurs when we take what “is” and decide it is what “ought” to be. If you observe that murder is part of the natural order and conclude that it is therefore moral, you have committed this fallacy.
Haidt has observed the instincts that build towards human morality. His contributions to this field have helped make many things clear and make many conflicts more understandable. But in deciding that these natural tastes are the be-all and end-all of human morality, by putting them ahead of reason, religion, and every philosophical tradition, he has committed this fundamental error.
At the start of The Righteous Mind, Haidt approvingly mentions those scientists who once thought that ethics could be taken away from philosophers and studied instead by scientists alone.
But science can only ever tell us what is, never what ought to be. As a book about science, The Righteous Mind is a success. But as a work on ethics, as an expression of how we ought to behave, it is an abysmal failure.
In this area, the philosophers deserve to keep their monopoly a little longer.
Fittingly enough, The Second Shift is the second book I’ve read by the famed sociologist Professor Arlie Russell Hochschild. It’s a book about the second working shift – the one that starts when people, especially parents, come home from work and find themselves confronted with a mound of chores.
I really liked this book. It’s one of the most interesting things I’ve read this year and I’ve regaled everyone who will listen with facts from it for the past few weeks. Now I’m taking that regaling online. I’m not going to do a full summary of it because I think a lot of its ideas have entered the cultural consciousness; it’s well known that women continue to do the majority of work at home and have less time for leisure than men, and this popular comic about mental load summarizes that section of the book better than I ever could.
But even still, there’s lots of interesting anecdotes and figures to share.
(A quick note: This book focused on heterosexual couples because gay couples are much better at sharing the second shift. I’m going to use gendered language for partners that assumes heterosexual relationships throughout this post because this book talked about a problem of heterosexual relationships; specifically, it talked about a problem with how men act in heterosexual relationships.)
“Transitional” men are worse than traditional men
Professor Hochschild identifies three types of men. There are the traditional men, who believe in traditional gender roles and separate spheres for the sexes. These are the men who’d prefer to earn the money while their wives keep the house. Then there are the egalitarian men – the men who believe that men and women are equal and try (with varying amounts of success) to transfer this political principle to their personal relationships.
Then there are the transitional men. These men aren’t against women having careers, per se (like traditional men might be). Transitional men accept that women can be part of the workforce and often welcome the extra paycheque. Unfortunately, transitional men haven’t bought all the way into equality. They still believe that women should be in charge of the home and do most of the work there.
Transitional men were the worst sharers of chores. Seventy percent of egalitarian men shared the chores entirely. The rest did between 30% and 45% of them, an amount Professor Hochschild labelled “moderate” (none did less than 30% of the chores, labelled as “little”). Of traditional men, 22% shared entirely and 33% did little (with the balance doing a moderate amount). The transitionals? 3% shared, 10% did a moderate amount and a full 87% did little.
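To make the gap concrete, here is that breakdown as data – a minimal sketch using the percentages reported above, with Hochschild’s category labels:

```python
# Share of husbands falling into each of Hochschild's chore-sharing
# categories, by husband type. The values are percentages of men, not
# percentages of chores: "shared" = did roughly half the chores,
# "moderate" = did 30-45% of them, "little" = did under 30%.
sharing = {
    "egalitarian":  {"shared": 70, "moderate": 30, "little": 0},
    "traditional":  {"shared": 22, "moderate": 45, "little": 33},
    "transitional": {"shared": 3,  "moderate": 10, "little": 87},
}

for husband_type, dist in sharing.items():
    assert sum(dist.values()) == 100  # each row accounts for the whole group
    print(f"{husband_type:>12}: {dist['shared']}% shared the chores entirely")
```

Laid out this way, the pattern is stark: transitional men lag well behind even traditional men on every measure of sharing.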
This seems to be because transitional men expect women to deal with a lack of time by cutting back at work. The transitional men profiled in the book tended to be emotionally supportive of the women in their lives who were caught between work and home, but most refused to support their partners by actually helping out more.
People talk about women wanting to “have it all”, with a career and motherhood. But if anyone should be accused of wanting to “have it all”, it’s these men. They wanted the extra spending money their wife brought in with her job, but weren’t prepared to support her in the chores at home. To these men, their wife being able to work was contingent on her first completing her “more important” duties in the home.
Working more can be a way to escape chores
Some couples try and have the same amount of leisure time, rather than do the same amount of chores. This allows them to balance things out if one partner works more. It also can set up bad incentives. Some of the men in this book used their long hours (and high salaries) as an excuse not to do chores at home.
When Prof. Hochschild looked at these men more closely, she discovered that they enjoyed their jobs much more than they enjoyed doing chores. It wasn’t that the jobs didn’t leave them drained – they certainly weren’t faking their need to flop down in front of the TV at the end of a day – but despite that, these men wouldn’t have chosen helping out with chores over being drained. They found work fulfilling, while chores were just a boring obligation.
The negative impacts of overtime work seem to pop up in a few studies. There’s no good reason (beyond signalling your dedication to your job) to work more than forty hours a week long-term. You simply can’t get anything more done. It’s better for your relationship (and your health!) to take some of the extra overtime you might do and spend it at home helping with chores.
Not everyone has the freedom to bring this up at work and not everyone enjoys their work. You might be stuck in a job you don’t like, one that demands a lot of overtime to prove that you’re serious, and this overtime might take a toll on you (as studies suggest it does). If leaving that job isn’t feasible, you don’t like your job, and your partner works fewer hours or enjoys their job more, then it probably is fair for your partner to take on more of the housework. In all other cases, you probably shouldn’t use working longer hours as an excuse to do less of the housework, at least not if equality is important in your partnership.
This also applies to personal projects, even if they might increase your employability, bring in a bit of extra cash, or bring value to your community. If you’re an aspiring author and spend an hour writing each night, this shouldn’t entitle you to any lesser share of the chores. If you’re studying a subject you enjoy, you shouldn’t use night class as an excuse to shirk housework. And volunteering, while laudable, is an activity that you do. It shouldn’t entitle you to a pass on chores.
The most distressing tale (to me) in the whole book was the story of Nancy and Evan Holt. Nancy was an ardent feminist and egalitarian, while Evan was a transitional. Evan was happy that Nancy liked her job, but thought that the home should be primarily her responsibility. Nancy wanted Evan to share the second shift.
They clashed over this mismatch for years. Here’s what happened when Nancy tried to get Evan to share the cooking:
Nancy said the first week of the new plan went as follows. On Monday, she cooked. For Tuesday, Evan planned a meal that required shopping for a few ingredients, but on his way home he forgot to shop for them. He came home, saw nothing he could use in the refrigerator or in the cupboard, and suggested to Nancy that they go out for Chinese food. On Wednesday, Nancy cooked. On Thursday morning, Nancy reminded Evan, “Tonight it’s your turn.” That night Evan fixed hamburgers and french fries and Nancy was quick to praise him. On Friday, Nancy cooked. On Saturday, Evan forgot again.
As this pattern continued, Nancy’s reminders became sharper. The sharper they became, the more actively Evan forgot—perhaps anticipating even sharper reprimands if he resisted more directly. This cycle of passive refusal followed by disappointment and anger gradually tightened, and before long the struggle had spread to the task of doing the laundry.
Evan kept up his passive resistance for years and eventually Nancy cut back her hours at work in order to have more time for the second shift. But this was never framed as a capitulation. Instead, it coincided with the family myth that they were sharing the chores.
How? Well, they’d ‘split the house in half’. Nancy took the upstairs (cooking, cleaning, the majority of childcare) and Evan took the downstairs (fixing the car, dealing with the yard, and maintaining the house). For all that this apparently represented an even split, it wasn’t. Not only did Evan spend less time doing chores than Nancy, the chores he did gave him more freedom. It’s much easier to put off mowing the yard or some bit of home maintenance than it is to put off picking up your kid from daycare or cooking a meal.
The myth of the work being split in half allowed Nancy to feel like she hadn’t capitulated on her feminist principles, even though she had. From a certain point of view, the family myth was a useful fiction – it probably saved Nancy and Evan’s marriage. But it opened my eyes to the very real danger of allowing a convenient myth to become an unquestioned truth. It reminded me to be careful of any convenient myths and to favour data (e.g. directly comparing how much time my partner and I spend doing chores) over stories when deciding if things are fair.
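Here is what favouring data over stories could look like in practice – a minimal sketch with invented chore entries (the names echo the Holts, but every number is hypothetical):

```python
from collections import defaultdict

# Hypothetical week of logged chores: (person, chore, minutes spent).
chore_log = [
    ("nancy", "cooking", 270),
    ("nancy", "laundry", 90),
    ("nancy", "daycare pickup", 150),
    ("evan", "yard work", 60),
    ("evan", "car maintenance", 45),
]

# Total up minutes per person, then show each person's share.
totals = defaultdict(int)
for person, _chore, minutes in chore_log:
    totals[person] += minutes

grand_total = sum(totals.values())
for person, minutes in sorted(totals.items()):
    print(f"{person}: {minutes} min ({100 * minutes / grand_total:.0f}% of chore time)")
```

A log like this won’t settle what’s fair on its own, but it makes a “we split the house in half” myth much harder to sustain.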
Passive avoidance and making do with less
Another tactic favoured by men like Evan Holt who have little interest in helping with the second shift requires a combination of passive avoidance and making do with less. We saw the first half of this above. It was the strategy Evan used to get out of cooking. By forgetting the ingredients, he got out of the chore.
Passive avoidance allows lazy partners to avoid chores they don’t want to do without having to have a conversation about why they’re avoiding them or whether it’s fair for them to do so. It was much easier for Evan to be berated for forgetting (a common human frailty) than for not wanting to split chores fairly, which Nancy might have taken (correctly?) to imply something about how much Evan cared about her.
On its own, this was a moderately effective way of getting out of work. To be truly effective, it had to be paired with making do with less. In the book, men who wanted their wives to do more of the cleaning claimed that their wife wanted things too clean; if it was just them, they’d clean much less often. Men who wanted to get out of cooking claimed that takeout was good enough for them. Men who were too lazy to help their wives shop for furniture claimed that they were perfectly happy in a bare house. Men who wished to get out of childcare said they were coddling the child too much and that their children should learn to be more independent.
By passively avoiding chores and then loudly claiming that the whole chore was unnecessary, men made their wives feel like asking for their help was an unreasonable imposition.
In The Second Shift, this was a highly gendered interaction. There were no women claiming that their husbands’ standards of cleanliness were too exacting. And while there’s no reason that this has to always be gendered, I suspect that as long as women are raised with more knowledge of chores (and expectations that they will be the ones to do them), this trend will continue.
The thing I find particularly unfortunate about this tactic is that it sets up a race to the bottom. Having the chores go to whomever cares the most sets up a terrible system of competitive insouciance.
While I acknowledge that it certainly is possible for partners to have very real differences in their desired level of cleanliness or in their desired calibre of meal preparation, I think it makes sense to have a strong habit of discounting those, so as to ensure a good incentive structure. As long as each partner has even one thing they care about more than the other, it should be possible for them to cultivate empathy and avoid the insidious temptation to put off chores by making do with less.
Not all chores are created equal
Even when men were splitting the chores evenly, this didn’t always translate to less work for their partners. The illustrative example here was Greg and Carol Alston. Both spent about the same amount of time working on tasks around the house, but this was driven in part by Greg taking on a variety of home improvement tasks.
Had Greg not done those, the family’s daily situation would have been the exact same. That’s not to say that this work at home wasn’t benefiting the family. It was increasing the resale value of their house and making their long-held dream of a move to the mountains and part-time work that much closer to fruition.
The Second Shift opened my eyes to the reality that some chores must get done in a household and it’s these chores on which I now want to judge sharing the second shift. It’s only these disruptive daily chores that can’t be set aside for something more important.
If Greg was exhausted, or sick, he could easily work less on the kitchen cabinets and make it up when he felt better. Carol had no such luck with her chores. Their daughter had to get fed and bathed regardless of how Carol felt.
Greg somewhat redeemed this imbalance by being entirely willing to help out with the daily chores when Carol needed him. If she was sick, he undoubtedly would have stepped in to help. This still left Carol with the burden of managing those daily chores and making sure they got done, but it offered her some buffer.
What chores are daily necessities will probably vary from couple to couple. If you and your partner are habitually neat but bad at cooking, you might decide that it is important that the house is tidied up daily, but you won’t mind if meals come from takeout.
In discussions with your partner about the second shift, it seems especially worthwhile to determine which chores you and your partner consider absolutely mandatory and ensure that in addition to balancing chores in general, you are approximately balanced here. Otherwise, the chores you do might not be lightening the load on your partner at all.
Despite the fact that Greg’s carpentry projects didn’t really reduce the burden on her, Carol was happy that he was doing them. For one, Greg treated her as someone with important opinions. He may have planned the projects, but he actively sought out and valued her input. In addition, by doing this work, Greg was helping make one of Carol’s lifelong dreams a reality. Carol was grateful for the work that Greg was doing around the house.
Reading The Second Shift, it struck me how gratitude was the most important factor in how couples felt about how they split the chores. When one partner expected gratitude, but didn’t receive it, they felt a lot of resentment towards the other. Conversely, relationships were strengthened when one of the partners felt grateful for the things the other did by default.
This showed up in surprising places. When Nina Tanagawa started making more money than her husband Peter, he expected her to be grateful that he was willing to accept it. On the other hand, when Ann Myerson started earning more than her husband Robert, he was ecstatic. He’s quoted as saying “[w]hen my wife started to earn more than I did, I thought I’d struck gold.” When furniture arrived, he was the one who waited for it, because it just made sense to him that the person making less money should take the time off work. His wife was reciprocally grateful that he wanted her to have a career and didn’t care if she made more than him. The existence of men like Peter made Ann grateful for Robert.
The worst situation was when one partner expected gratitude for something the other took for granted. When Jessica Stein cut back on work after the birth of her children, her husband Seth treated it like the natural order of the world. To Jessica, it stung. It wasn’t how she’d seen her life going. She’d thought that their careers would be treated as equally important. She expected gratitude (and perhaps equal sacrifices from Seth) in response to her sacrifice.
Seth’s “sacrifice” was working long hours for a large salary. But this wasn’t the sacrifice Jessica wanted of him. She wanted him to be present and helpful. Because of this mismatch, Jessica ended up withdrawing from her marriage and children. She spent her weekends in Seattle (she lived in the San Francisco Bay Area), with her old college friends. Professor Hochschild described the couple as “divorced in spirit”.
It’s all in the culture
So much of what drove gratitude was cultural. Nina felt grateful that her husband “tolerated” her higher salary because when she looked around at the other women she knew, she saw many of them married to men who wouldn’t have “tolerated” their wife making more than them.
Many of the men in Professor Hochschild’s study almost shared the second shift. They did something like 40% of the tasks around the home and with the kids. Interestingly, the wives of these men often felt like they shared (even though the men were likely to say that their wives did more). This became a sort of family myth of its own, that these men entirely shared, instead of almost entirely shared. Professor Hochschild suggests that this myth arose because when compared to other husbands, these men did so much more.
Who won conflicts about the second shift was often determined by the broader patterns of culture as well. If a husband did much more housework than the average (or was more willing to “tolerate” his wife working), then his wife was much less likely to be successful in getting him to contribute more. When compared against the reference class of “society”, many men did quite well, even though they were objectively lazy when compared to their wives.
This is a pattern I’ve observed in many relationship negotiations (both in my own life and in stories told by friends). It’s really hard to get the partner who is more willing to leave the relationship to do something they don’t want to do. In relationships that aren’t abusive or manipulative, people only do the things they freely choose. They obviously won’t freely choose to do anything that they like less than breaking up. But the very fact that breaking up will hurt them less than their partner makes it very hard for their partner to feel like they can push for changes.
In one of the two profiled couples who actually shared the second shift equally (Adrienne and Michael Sherman), their equality was brought about because Adrienne actually left Michael after his refusal to share the second shift and his insistence that his career come first. After two months, Michael called Adrienne and told her that he’d share. He loved her and didn’t feel like he could love anyone else as deeply as he loved her. She came back and they shared the housework and raising the kids. Michael surprised himself by how much he enjoyed it. He became the best father he knew and he took pride in this. But none of this would have been possible if Adrienne hadn’t been willing to leave.
While the division of the second shift is ostensibly an agreement among individuals, I don’t think the overarching problem is best addressed individually. As long as women feel like they’re getting a good deal when men almost do their fair share, many men won’t do any more. Policies – like extended, non-transferable parental leave after the birth of a child – that encourage men to spend time at home sharing the second shift are a necessary component of ending this gendered divide.
I recently read The Singularity is Near as part of a book club and figured a few other people might benefit from hearing what I got out of it.
First – it was a useful book. I shed a lot of my skepticism of the singularity as I read it. My mindset has shifted from “a lot of this seems impossible” to “some of this seems impossible, but a lot of it is just incredibly hard engineering”. But that’s because I stuck with it – something that probably wouldn’t have happened without the structure of a book club.
I’m not sure Kurzweil is actually the right author for this message. Accelerando (by Charles Stross) covered much of the same material as Singularity, while being incredibly engaging. Kurzweil’s writing is technically fine – he can string a sentence together and he’s clear – but incredibly repetitious. If you read the introduction, the introduction of each chapter, all of Chapter 4 (in my opinion, the only consistently good part of the book proper), and his included responses to critics (the only other interesting part of the whole tome) you’ll get all the worthwhile content, while saving yourself a good ten hours of hearing the same thing over and over and over again. Control-C/Control-V may have been a cheap way for Kurzweil to pad his word count, but it’s expensive to the reader.
I have three other worries about Kurzweil as a futurist. One deals with his understanding of some of the more technical aspects of what he’s talking about, especially physics. Here’s a verbatim quote from Singularity about nuclear weapons:
Alfred Nobel discovered dynamite by probing chemical interactions of molecules. The atomic bomb, which is tens of thousands of times more powerful than dynamite, is based on nuclear interactions involving large atoms, which are much smaller scales of matter than large molecules. The hydrogen bomb, which is thousands of times more powerful than an atomic bomb, is based on interactions involving an even smaller scale: small atoms. Although this insight does not necessarily imply the existence of yet more powerful destructive chain reactions by manipulating subatomic particles, it does make the conjecture [that we can make more powerful weapons using sub-atomic physics] plausible.
This is false on several levels. First, uranium and plutonium (the fissile isotopes used in atomic bombs) are both more massive (in the sense that they contain more matter) than the nitroglycerine in dynamite. Even if fissile isotopes are smaller in one dimension, they are on the same scale as the molecules that make up high explosives. Second, the larger energy output from hydrogen bombs has nothing to do with the relative size of hydrogen vs. uranium. Long-time readers will know that the majority of the destructive output of a hydrogen bomb actually comes from fission of the uranium outer shell. Hydrogen bombs (more accurately thermonuclear weapons) derive their immense power from a complicated multi-step process that liberates a lot of energy from the nuclei of atoms.
Kurzweil falling for this plausible (but entirely incorrect) explanation doesn’t speak well of his ability to correctly pick apart the plausible and true from the plausible and false in fields he is unfamiliar with. But it’s this very picking apart that is so critical for someone who wants to undertake such a general survey of science.
My second qualm emerges when Kurzweil talks about AI safety. Or rather, it arises from the lack of any substantive discussion of AI safety in a book about the singularity. As near as I can tell, Kurzweil believes that AI will emerge naturally from attempts to functionally reverse engineer the human brain. Kurzweil believes that because this AI will be essentially human, there will be no problems with value alignment.
This seems very different from the Bostromian paradigm of dangerously misaligned AI: AI with ostensibly benign goals that turn out to be inimical to human life when taken to their logical conclusion. The most common example I’ve heard for this paradigm is an industrial AI tasked with maximizing paper clip production that tiles the entire solar system with paper clips.
Kurzweil is so convinced that the first AI will be based on reverse engineering the brain that he doesn’t adequately grapple with the orthogonality thesis: the observation that intelligence and comprehensible (to humans) goals don’t need to be correlated. I see no reason to believe Kurzweil’s claim that the first super-intelligence will be based on a human. To believe that, you’d have to assume that various university research projects will beat Google and Facebook (who aren’t trying to recreate functional human brains in silico) in the race to develop a general AI. I think that is somewhat unrealistic, especially if there are paths to general intelligence that look quite different from our brains.
Finally, I’m unhappy with how Kurzweil’s predictions are sprinkled throughout the book, vague, and don’t include confidence intervals. The only clear prediction I was able to find was Kurzweil’s infamously false assertion that by ~2010, our computers would be split up and worn with our clothing.
It would be much easier to assess Kurzweil’s accuracy as a predictor if he listed all of his predictions together in a single section, attached clear target dates to them (less vague than “in the late 2020s”), and gave his credence for each (as it stands, it is hard to distinguish between things Kurzweil believes are very likely and things he views as only somewhat likely). Currently, any attempt to assess Kurzweil’s accuracy is very sensitive to what you choose to count as “a prediction” and how you interpret his timing. More clarity would remove that ambiguity.
Furthermore, we’ve already begun to bump up against the limit on clock speed in silicon; we can’t really run silicon chips at higher clock rates without melting them. This is unfortunate, because speed-ups in clock time are much nicer than increased parallelism. Almost all programs benefit from quicker processing, while only certain programs benefit from increased parallelism. This isn’t an insurmountable obstacle when it comes to things like artificial intelligence (the human brain has a very slow clock speed and massive parallelism and it’s obviously good enough to get lots done), but it does mean that some of the things Kurzweil was counting on to get quicker and quicker have stalled out (the book was written just as Dennard scaling began to break down).
All this means that the exponential growth that is supposed to drive the singularity is about to fizzle out… maybe. Kurzweil is convinced that the slowdown in silicon will necessarily lead to a paradigm shift to something else. But I’m not sure what it will be. He talks a bit about graphene, but when I was doing my degree in nanotechnology engineering, the joke among the professors was that graphene could do anything… except make it out of the lab.
Kurzweil has an almost religious faith that there will be another paradigm shift, keeping his exponential trend going strong. And I want to be really clear that I’m not saying there won’t be. I’m just saying there might not be. There is no law of the universe that says that we have to have convenient paradigm shifts. We could get stuck with linear (or even logarithmic) incremental improvements for years, decades, or even centuries before we resume exponential growth in computing power.
It does seem like ardent belief in the singularity might attract more religiously minded atheists. Kurzweil himself believes that it is our natural destiny to turn the whole universe into computational substrate. Identifying god with the most holy and perfect (in fine medieval tradition; there’s something reminiscent of Anselm in Kurzweil’s arguments), Kurzweil believes that once every atom in the universe sings with computation, we will have created god.
I don’t believe that humanity has any grand destiny, or that the arc of history bends towards anything at all in particular. And I by no means believe that the singularity is assured, technologically or socially. But it is a beautiful vision. Human flourishing, out to the very edges of the cosmos…
Yeah, I want that too. I’m a religiously minded atheist, after all.
In both disposition and beliefs, I’m far closer to Kurzweil than his many detractors. I think “degrowth” is an insane policy that, if followed, would create scores of populist demagogues. I think that the Chinese room argument is good only for identifying people who don’t think systemically. I’m also more or less in agreement that government regulations won’t be able to stop a singularity (if one is going to occur because of continuing smooth acceleration in the price performance of information technology; regulation could catch up if a slowdown between paradigm shifts gives it enough time).
I think the singularity very well might happen. And at the end of the day, the only real difference between me and Kurzweil is that “might”.
Foreword: November 8th was one of the worst nights of my life, in a way that might have bled through – just a bit, mind you – into this review. My position will probably mellow as the memories of my fear and disappointment fade.
My latest non-fiction read was Shattered: Inside Hillary Clinton’s Doomed Campaign. In addition to making me consider a career in political consultancy, it gave me a welcome insight into some of the fascinating choices the Clinton campaign made during the election.
I really do believe this book was going to rip on the campaign no matter the outcome. Had Clinton won, the thesis would have been “the race was closer than it needed to be”, not “Clinton’s campaign was brilliant”.
Despite that, I should give the classic disclaimer: I could be wrong about the authors; it’s entirely possible that they’d have extolled the brilliance of Clinton had she won. It’s also true that Clinton almost won and if she had, she would have captured the presidency in an extremely cost-effective way.
But almost only counts in horseshoes and hand grenades, and an election is neither. Clinton lost. The 11th-hour letter from Comey to Congress and Russian hacking may have tipped her over, but ultimately it was the decisions of her campaign that allowed Donald Trump to be within spitting distance of her at all.
Shattered lays a lot of blame for those bad decisions in the lap of Robby Mook, Clinton’s campaign manager. Throughout the book, he’s portrayed as dogmatically obsessed with data, refusing to do anything that doesn’t come up as optimal in his models. It was Mook who refused to do polling (because he thought his analytics provided almost the same information at a fraction of the cost), Mook who refused to condone any attempts at persuading undecided or weak Trump voters to back Clinton, Mook who consistently denied resources to swing state team leads, and Mook who responded to Bill Clinton’s worries about anti-establishment sentiment and white anger with “the data run counter to your anecdotes”.
We now have a bit more context in which to view Mook’s “data” and Bill’s “anecdotes”.
I’m a committed empiricist, but Mook’s “data driven” approach made me repeatedly wince. Anything that couldn’t be measured was discounted as unimportant. Anything that wasn’t optimal was forbidden. And any external validation of models – say via polls – was vetoed because Mook didn’t want to “waste” money validating models he was so confident in.
Mook treated the election as a simple optimization problem – he thought he knew how many votes or how much turnout was associated with every decision he could make, and he assumed that if he fed all this into computers, he’d get the definitive solution to the election.
The problem here is that elections remain unsolved. There doesn’t exist an equation that lets you win an election. There are too many factors and too many unknowns, and you aren’t acting in a vacuum. You have an opponent who is actively countering you. And it should go almost without saying that an optimal solution to an election is only possible if the solution can be kept secret. If your opponent knows your solution, they will find a way to counter it.
Given that elections are intractable as simple optimization problems, a smart campaign will rely on experienced humans to make major decisions. Certainly, these humans should be armed with the best algorithms, projections, data, and cost-benefit analyses that a campaign can supply. But to my (outsider) eyes, it seems absolutely unconscionable to cut out the human element and ignore all of the accumulated experience a campaign brain trust can bring to bear on an election. Clinton didn’t lack for a brain trust, but her brain trust certainly lacked for opportunities to make decisions.
Not all the blame can rest on Mook though. The campaign ultimately comes down to a candidate and quite frankly, there were myriad ways in which Clinton wasn’t that great of a candidate.
First: vision. She didn’t have one. Clinton felt at home in policy, so her campaign had a lot of it. She treated the election like a contest to create policy that would appeal to the rational self-interest of a winning coalition of voters. Trump tried to create a story that would appeal to the self-conception of a winning coalition of voters.
I don’t think one is necessarily superior to the other, but I’ve noticed that charismatic and generally liked leaders (Trudeau, Macron, Obama if we count his relatively high approval ratings at the end of his presidency) manage to combine both. Clinton was the “establishment” candidate, the candidate that was supposed to be good at elections. She had every opportunity to learn to use both tools. But she only ever used one, depriving her of a critical weapon against her opponent. In this way, she was a lot like Romney.
(Can you imagine Clinton vs. Romney? That would have been high comedy right there.)
After vision comes baggage. Clinton had a whole mule train of it. Her emails, her speeches, her work for the Clinton foundation – there were plenty of time bombs there. I know the standard progressive talking point is that Clinton had baggage because a woman had to be in politics as long as she did before she would be allowed to run for the presidency. And if her baggage was back room deals with foreign despots or senate subcommittees (the two generally differ only in the lavishness of the receptions they throw, not their moral character) that explanation would be all well and good.
But Clinton used a private email server because she didn’t want the laws on communication disclosures to apply to her. She gave paid speeches and hid the transcripts because she felt entitled to hundreds of thousands of dollars and (apparently) thought she could take the money and then remain impartial.
Both of these unforced errors showed poor judgement and entitlement. They weren’t banal expressions of the compromises people need to make to govern. They showed real contempt for the electorate, in that they sought to deny voters a chance to hold Clinton accountable for what she said, both as the nation’s top diplomat and as (perhaps only briefly) its most exorbitantly compensated public speaker.
As she was hiding things, I doubt Clinton explicitly thought “fuck the voters, I don’t care what they think”; it was probably more like “damned if I’m giving everyone more ammunition to get really angry about”. Unfortunately, the second isn’t benign in a democracy, where responsible government first and foremost requires politicians to be responsible to voters for all of their beliefs and actions, even the ones they’d rather keep out of the public eye. To allow any excuse at all to be used to escape from responsible government undermines the very idea of it.
As a personal note, I think it was stupid of Clinton to be so contemptuous because it made her long-term goals more difficult, but I also think her contempt was understandable in light of the fact that she’s waded through more bullshit in the service of her country than any five other politicians combined. Politicians are humans and make mistakes and it’s possible to understand and sympathize with the ways those mistakes come from human frailty while also condemning the near-term effects (lost elections) and long-term effects (decreased trust in democratic institutions) of bad decisions.
The final factor that Clinton deserves blame for is her terrible management style. When talking about management, Peter Thiel opined that only a sociopath would give two people the same job. If this is true – I’m inclined to trust him under the principle that it takes one to know one – Clinton is a sociopath. There was no clear chain of command for the campaign. At every turn, people could see their work undone by well-connected “Clinton World” insiders. The biggest miracle is that the members of the campaign managed to largely keep this on the down-low.
Clinton made much of Obama’s 2008 “drama free” campaign. She wanted her 2016 campaign to run the same way. But instead of adopting the management habits that Obama used to engender loyalty, she decided that the differences lay everywhere but in the candidates; if only she had better, more loyal people working for her, she’d have the drama free campaign she desired. And so, she cleaned house, started fresh, and demanded that there would be no drama. As far as the media was concerned, there wasn’t. But under the surface, things were brutal.
Mook hid information from pretty much everyone because his position felt precarious. No one told Abedin anything because they knew she’d tell it right to Clinton, especially if it wasn’t complimentary. Everyone was scared that their colleagues would stab them in the back to prove their loyalty to Clinton. Employees who failed were stripped of almost all responsibilities, but never fired. In 2008, fired employees ‘took the axes they had to grind, sharpened them, and jammed them in Clinton’s back during media interviews’. Clinton learned lessons from that, but I’m not sure if they were the right ones.
I’m not sure how much of this was text and how much was subtext, but I emerged from Shattered feeling that the blame for losing the election can’t stop with the Clinton camp. There’s also Bernie Sanders. I don’t think anyone can blame him for talking about emails and speeches, but I’ve come to believe that the chip on his shoulder about the unfairness of the primary was way out of line; if anyone in the Democratic Party beat Clinton on a sense of entitlement, it was Sanders.
Politics is a team sport. You can’t accomplish anything alone, so you have to rely on other people. Clinton (whatever her flaws) was reliable. She fought and she bled and she suffered for the Democratic Party. Insofar as anyone has ever been owed a nomination, Clinton was owed this one.
Sanders hadn’t even fundraised for the party. And he expected them not to do whatever they could for Clinton? Why? He was an outsider trying to hijack their institution. His complaints would have been fair from a Democrat, but from an independent socialist?
On the Republican side, Trump had the same thing going on (and presumably would have been equally damaging to another nominee had he lost). In both cases, the party owed them nothing. It was childish of Bernie to go on as if the party was supposed to be impartial.
(Also, in what meaningful ways vis a vis ability to hire staff and coordinate policy would you expect a Sanders White House to be different from the Trump White House? If you didn’t answer “none”, then you have some serious thinking to do.)
You’d think the effect of all of this would be for me to feel contempt for the Democratic Party in general and Clinton in particular. But aside from Sanders, I came out of it feeling really sorry for everyone involved.
I felt sorry for Debbie Wasserman Schultz. Sanders’ inflammatory rhetoric necessitated throwing her under the bus right before the convention. She didn’t take it gracefully, but then, how could she? She’d flown her whole family from Florida to Philadelphia to see her moment of triumph as Chairwoman of the DNC speaking at the Democratic National Convention and had it all taken away from her so that Sanders’ supporters wouldn’t riot (and apparently it was still a near thing). She spent the better part of the day negotiating her exit with the Clinton campaign’s COO, instead of appearing on the stage like she’d hoped to. The DNC ended up footing the bill for flying her family home.
I felt sorry for Mook. He had a hard job and less power and budget than were necessary to do it well. He trusted his models too much, but this is partially because he was really good with them. Mook’s math made it almost impossible for Sanders to win. Clinton had been terrible at delegate math in 2008. Mook redeemed that. To give just one example of his brilliance, he prioritized media spending in districts with an odd number of delegates, which meant that Clinton won an outsized number of delegates from her wins and losses.
I felt sorry for the whole Clinton campaign. Things went so wrong, so often that they had a saying: “we don’t get to have nice things”. Media ignores four Clinton victories to focus on one of Sanders’? “We don’t get to have nice things”. Trump goes off the rails, but it gets overshadowed by the ancient story about emails? “We don’t get to have nice things.”
Several members of the campaign had their emails hacked (probably by the Russians). Instead of reporting on the Russian interference and Russian ties to the Trump campaign, the media talked about those emails over and over again in the last month of the election. That must have been maddening for the candidate and her team.
Even despite that, I felt sorry for the press, who by and large didn’t want Trump to win, but were forced by a string of terrible incentives to consistently cover Clinton in an exceedingly damning way. If you want to see Moloch’s hand at work, look no further than reporting on the 2016 election.
But most of all, I felt sorry for Clinton. Here was a woman who had spent her whole adult life in politics, largely motivated by a desire to help women and children (causes she’d been largely successful at). As Secretary of State, she flew 956,733 miles (equivalent to two round trips to the moon) and visited 112 countries. She lost two races for the presidency. And it must have been so crushing to have bled and fought and given so much, to think she’d finally succeeded, then to have it all taken away from her by Donald Trump.
Yet, she conceded anyway. She was crushed, but she ensured that America’s legacy of peaceful transfers of power would continue.
November 8th may have been one of the worst nights of my life. But I’m not self-absorbed enough to think my night was even remotely as bad as Clinton’s. Clinton survived the worst the world could do to her and is still breathing and still trying to figure out what to do next. If her campaign gave me little to admire, that resilience makes up a good bit of the gap.
I really recommend Shattered for anyone who wants to see just how off the rails a political campaign can go when it’s buffeted by a combination of candidate ineptitude, unclear chains of command, and persistent attacks from a foreign adversary. It’s a bit repetitious at times, which was sometimes annoying and sometimes helpful (especially when I’d forgotten who was who), but otherwise grippingly and accessibly written. The fascinating subject matter more than makes up for any small burrs in the delivery.
 In a district that has an odd number of delegates, winning by a single vote meant an extra delegate. In a district with 6 delegates, you’d get 3 delegates if you won between 50% and 67% of the votes. In a district with 7, you’d get 4 if you won by even a single vote, and 5 once you surpassed 71%. If a state has ten districts, four with seven delegates and six with six delegates, you can win the state by four delegates if you squeak to a win in the four seven-delegate districts and win at least 34% of the vote in each of the others. In practice, statewide delegates prevent such wonky scenarios except when the vote is really close, but this sort of math remains vital to winning a close race. ^
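The odd-delegate effect described in this footnote can be sketched with a few lines of code. This is a simplified model on my part – it assumes plain round-to-nearest proportional apportionment, whereas the real DNC rules also involve viability thresholds and statewide delegates:

```python
# Sketch of proportional delegate allocation, assuming simple
# round-to-nearest apportionment (the actual DNC rules are more
# complicated: 15% viability thresholds, statewide delegates, etc.).

def delegates_won(vote_share: float, total_delegates: int) -> int:
    """Delegates awarded to a candidate with the given vote share."""
    return round(vote_share * total_delegates)

# In a 6-delegate district, a narrow win (50.1%) yields a 3-3 tie:
even_district = delegates_won(0.501, 6)  # 3 of 6 -- no net gain

# In a 7-delegate district, the same narrow win yields a 4-3 edge:
odd_district = delegates_won(0.501, 7)   # 4 of 7 -- a one-delegate gain

print(even_district, odd_district)  # 3 4
```

Under this simplified scheme, a narrow win only moves the delegate count in odd-sized districts, which is why concentrating media spending there was an efficient use of a limited budget.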
 WikiLeaks released the hacked emails a few hundred a day for the last month of the election, starting right after the release of Trump’s “grab her by the pussy” video. This steady drip-drip-drip of bad press was very damaging for the Clinton campaign, especially because many people didn’t differentiate this from the other Clinton-email story.
The author is one Sir Bernard Williams. According to his Wikipedia page, he was a particularly humanistic philosopher in the old Greek mode. He was skeptical of attempts to build an analytical foundation for moral philosophy and of his own prowess in arguments. It seems that he had something pithy or cutting to say about everything, which made him notably cautious of pithy or clever answers. He’s also described as a proto-feminist, although you wouldn’t know it from his writing.
Williams didn’t write his essay out of a rationalist desire to disprove utilitarianism with pure reason (a concept he seemed every bit as skeptical of as Smart was). Instead, Williams wrote this essay because he agrees with Smart that utilitarianism is a “distinctive way of looking at human action and morality”. It’s just that unlike Smart, Williams finds the specific distinctive perspective of utilitarianism often horrible.
Smart anticipated this sort of reaction to his essay. He himself despaired of finding a single ethical system that could please anyone, or even please a single person in all their varied moods.
One of the very first things I noticed in Williams’ essay was the challenge of attacking utilitarianism on its own terms. To convince a principled utilitarian that utilitarianism is a poor choice of ethical system, it is almost always necessary to appeal to the consequences of utilitarianism. This forces any critic to frame their arguments a certain way, a way which might feel unnatural. Or repugnant.
Williams begins his essay proper with (appropriately) a discussion of consequences. He points out that it is difficult to hold actions as valuable purely by their consequences because this forces us to draw arbitrary lines in time and declare the state of the world at that time the “consequences”. After all, consequences continue to unfold forever (or at least, until the heat death of the universe). To have anything to talk about at all, Williams decides that it is not quite consequences that consequentialism cares about, but states of affairs.
Utilitarianism is the form of consequentialism that has happiness as its sole important value and seeks to bring about the state of affairs with the most happiness. I like how Williams undoes the question-begging that utilitarianism commonly engages in. He essentially asks: why should happiness be the only thing we treat as intrinsically valuable? Williams mercifully didn’t drive this home, but I was still left with uncomfortable questions for myself.
Instead he moves on to his first deep observation. You see, if consequentialism was just about valuing certain states of affairs more than others, you could call deontology a form of consequentialism that held that duty was the only intrinsically valuable thing. But that can’t be right, because deontology is clearly different from consequentialism. The distinction, Williams suggests, is that consequentialists discount the possibility of actions holding any inherent moral weight. For a consequentialist, an action is right because it brings about a better state of affairs. For non-consequentialists, a state of affairs can be better – even if it contains less total happiness or integrity or whatever they care about than a counterfactual state of affairs given a different action – because the right action was taken.
A deontologist would say that it is right for someone to do their duty in a way that ends up publically and spectacularly tragic, such that it turns a thousand people off of doing their own duty. A consequentialist who viewed duty as important for the general moral health of society – who, in Smart’s terminology, viewed acting from duty as good – would disagree.
Williams points out that this very emphasis on comparing states of affairs (so natural to me) is particularly consequentialist and utilitarian. That is to say, it is not particularly meaningful for a deontologist or a virtue ethicist to compare states of affairs. Deontologists have no duty to maximize the doing of duty; if you ask a deontologist to choose between a state of affairs that has one hundred people doing their duty and another that has a thousand, it’s not clear that either state is preferable from their point of view. Sure, deontologists think people should do their duty. But duty embodied in actions is the point, not some cosmic tally of duty.
Put as a moral statement, non-consequentialists lack any obligation to bring about more of what they see as morally desirable. A consequentialist may feel both fondness for and a moral imperative to bring about a universe where more people are happy. Non-consequentialists only have the fondness.
One deontologist of my acquaintance said that trying to maximize utility felt pointless – they viewed it as about as morally important as having a high score on a Tetris game. We ended up staring at each other in blank incomprehension.
In Williams’ view, rejection of consequentialism doesn’t necessarily lead to deontology, though. He sums it up simply as: “all that is involved… in the denial of consequentialism, is that with respect to some type of action, there are some situations in which that would be the right thing to do, even though the state of affairs produced by one’s doing that would be worse than some other state of affairs accessible to one.”
A deontologist will claim right actions must be taken no matter the consequences, but to be non-consequentialist, an ethical system merely has to claim that some actions are right despite a variety of more or less bad consequences that might arise from them.
Or, as I wrote angrily in the margins: “ok, so not necessarily deontology, just accepting sub-maximal global utility”. It is hard to explain to a non-utilitarian just how much this bugs me, but I’m not going to go all rationalist and claim that I have a good reason for this belief.
Williams then turns his attention to the ways in which he thinks utilitarianism’s insistence on quantifying and comparing everything is terrible. Williams believes that by refusing to categorically rule any action out (or worse, specifically trying to come up with situations in which we might do horrific things), utilitarianism encourages people – even non-utilitarians who bump into utilitarian thought experiments – to think of things in utilitarian (that is to say, explicitly comparative) terms. It seems like Williams would prefer there to be actions that are clearly ruled out, not just less likely to be justified.
I get the impression of a man almost tearing out his hair because for him, there exist actions that are wrong under all circumstances and here we are, talking about circumstances in which we’d do them. There’s a kernel of truth here too. I think there can be a sort of bravado in accepting utilitarian conclusions. Yeah, I’m tough enough that I’d kill one to save one thousand? You wouldn’t? I guess you’re just soft and old-fashioned. For someone who cares as much about virtue as I think Williams does, this must be abhorrent.
I loved how Williams summed this up.
The demand… to think the unthinkable is not an unquestionable demand of rationality, set against a cowardly or inert refusal to follow out one’s moral thoughts. Rationality he sees as a demand not merely on him, but on the situations in and about which he has to think; unless the environment reveals minimum sanity, it is insanity to carry the decorum of sanity into it.
For all that I enjoyed the phrasing, I don’t see how this changes anything; there is nothing at all sane about the current world. A life is worth something like $7 million to $9 million and yet can be saved for less than $5000. This planet contains some of the most wrenching poverty and lavish luxury imaginable, often in the very same cities. Where is the sanity? If Williams thinks sane situations are a reasonable precondition to sane action, then he should see no one on earth with a duty to act sanely.
The next topic Williams covers is responsibility. He starts with a discussion of agent interchangeability in utilitarianism. Williams believes that utilitarianism merely requires that someone do the right thing. This implies that to the utilitarian, there is no meaningful difference between me doing the utilitarian right action and you doing it, unless something about me doing it instead of you leads to a different outcome.
This utter lack of concern for who does what, as long as the right thing gets done, doesn’t actually absolve utilitarians of responsibility. Instead, it tends to increase it. Williams says that unlike adherents of many ethical systems, utilitarians have negative responsibilities; they are just as much responsible for the things they don’t do as they are for the things they do. If something has to be done and no one else will do it, then you have to.
This doesn’t strike me as that unique to utilitarianism. I was raised Catholic and can attest that Catholics (who are supposed to follow a form of virtue ethics) have a notion of negative responsibility too. At every mass, before receiving the Eucharist, Catholics ask God for forgiveness for their sins: in thoughts and words, in what they have done and in what they have failed to do.
Leaving aside whether the concept of negative responsibility is uniquely utilitarian or not, Williams does see problems with it. Negative responsibility makes so much of what we do dependent on the people around us. You may wish to spend your time quietly growing vegetables, but be unable to do so because you have a particular skill – perhaps even one that you don’t really enjoy doing – that the world desperately needs. Or you may wish never to take a life, yet be confronted with a run-away trolley that can only be diverted from hitting five people by pulling the lever that makes it hit one.
This didn’t really make sense to me as a criticism until I learned that Williams deeply cares about people living authentic lives. In both the cases above, authenticity played no role in the utilitarian calculus. You must do things, perhaps things you find abhorrent, because other people have set up the world such that terrible outcomes would happen if you didn’t.
It seems that Williams might consider it a tragedy for someone to feel compelled by their ethical system to do something that is inauthentic. I imagine he views this as about as much of a crying waste of human potential as I view the yearly deaths of 429,000 people due to malaria. For all my personal sympathy for him, I am less than sympathetic to a view that gives these the same weight (or treats inauthenticity as the greater tragedy).
Radical authenticity requires us to ignore society. Yes, utilitarianism plops us in the middle of a web of dependencies and a buffeting sea of choices that were not ours, while demanding we make the best out of it all. But our moral philosophies surely are among the things that push us towards an authentic life. Would Williams view it as any worse that someone was pulled from her authentic way of living because she would starve otherwise?
To me, there is a certain authenticity in following your ethical system wherever it leads. I find this authenticity beautiful, but not worthy of moral consideration, except insofar as it affects happiness. Williams finds this authenticity deeply important. But by rejecting consequentialism, he has no real way to argue for more of the qualities he desires, except perhaps as a matter of aesthetics.
It seems incredibly counter-productive to me to say to people – people in the midst of a society that relentlessly pulls them away from authenticity with impersonal market forces – that they should turn away from the one ethical system that seems to have a happier society as its desired outcome. A Kantian has her duty to duty, but as long as she does that, she cares not for the system. A virtue ethicist wishes to be virtuous and authentic, but outside of her little bubble of virtue, the terrors go on unabated. It’s only the utilitarian who holds a better society as an end in itself.
Maybe this is just me failing to grasp non-utilitarian epistemologies. It baffles me to hear “this thing is good and morally important, but it’s not like we think it’s morally important for there to be more of it; that goes too far!”. Is this a strawman? If someone could explain what Williams is getting at here in terms I can understand, I’d be most grateful.
I do think Williams misses one key thing when discussing the utilitarian response to negative responsibility. Actions should be assessed on the margin, not in isolation. That is to say, the marginal effect of someone becoming a doctor, or undertaking some other career generally considered benevolent, is quite low if there are others also willing to do the job. A doctor might personally save hundreds, or even thousands of lives over her career, but her marginal impact will be saving something like 25 lives.
The reasons for this are manifold. First, when there are few doctors, they tend to concentrate on the most immediately life-threatening problems. As you add more and more doctors, they can help, but after a certain point, the supply of doctors will outstrip the demand for urgent life-saving attention. They can certainly help with other tasks, but they will each save fewer lives than the first few doctors.
Second, there is a somewhat fixed supply of doctors. Despite many, many people wishing they could be doctors, only so many can get spots in medical school. Even assuming that medical school admissions departments are perfectly competent at assessing future skill at being a doctor (and no one really believes they are), your decision to attend medical school (and your successful admission) doesn’t result in one extra doctor. It simply means that you were slightly better than the next best person (who would have been admitted if you weren’t).
Finally, when you become a doctor you don’t replace one of the worst already practising doctors. Instead, you replace a retiring doctor who is (for statistical purposes) about average for her cohort.
All of this is to say that utilitarians should judge actions on the margin, not in absolute terms. It isn’t that bad (from a utilitarian perspective) not to devote all your attention to the most effective direct work, because unless a certain project is very constrained by the number of people working on it, you shouldn’t expect to make much marginal difference. On the other hand, earning a lot of money and giving it to highly effective charities (or even a more modest commitment, like donating 10% of your income) is likely to do a huge amount of good, because most people don’t do this, so you’re replacing a person at a high paying job who was doing (from a utilitarian perspective) very little good.
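The arithmetic behind this marginal argument can be put into a toy model. Every number below is invented purely for illustration – only the shape of the curve (diminishing marginal returns as more doctors enter) is the point:

```python
# Toy model of marginal vs. average impact for doctors.
# All numbers are invented for illustration; only the diminishing-returns
# shape matters for the argument.

def lives_saved(n_doctors):
    """Total lives saved per year by n_doctors, assuming the first
    doctors take the most urgent, highest-impact cases."""
    return sum(1000 / (i + 1) for i in range(n_doctors))

average = lives_saved(1000) / 1000               # what "a doctor saves"
marginal = lives_saved(1000) - lives_saved(999)  # what one MORE doctor adds

# The average doctor "saves" several times what the marginal doctor
# adds -- and the marginal figure is the one relevant to career choice.
print(round(average, 1), round(marginal, 1))
```

Under these made-up numbers the average doctor looks several times more impactful than the thousandth doctor actually is, which is exactly the gap between a doctor’s personal save count and her marginal impact.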
Williams either isn’t familiar with this concept, or omitted it in the interest of time or space.
Williams’ next topic is remoter effects. A remoter effect is any effect that your actions have on the decision making of other people. For example, if you’re a politician and you lie horribly, are caught, and get re-elected by a large margin, a possible remoter effect is other politicians lying more often. With the concept of remoter effects, Williams is pointing at what I call second order utilitarianism.
Williams makes a valid point that many of the justifications from remoter effects that utilitarians make are very weak. For example, despite what some utilitarians claim, telling a white lie (or even telling any lie that is unpublicized) doesn’t meaningfully reduce the propensity of everyone in the world to tell the truth.
Williams thinks that many utilitarians get away with claiming remoter effects as justification because they tend to be used as a way to make utilitarianism give the common, respectable answers to ethical dilemmas. He thinks people would be much more skeptical of remoter effects if they were ever used to argue for positions that are uncommonly held.
This point about remoter effects was, I think, a necessary precursor to Williams’ next thought experiment. He asks us to imagine a society with two groups, A and B. There are many more members of A than B. Furthermore, members of A are disgusted by the presence (or even the thought of the presence) of members of group B. In this scenario, there has to exist some level of disgust and some ratio between A and B that makes the clear utilitarian best option relocating all members of group B to a different country.
With Williams’ recent reminder that most remoter effects are weaker than we like to think still ringing in my ears, I felt fairly trapped by this dilemma. There are clear remoter effects here: you may lose the ability to advocate against this sort of ethnic cleansing in other countries. Successful, minimally condemned ethnic cleansing could even encourage copy-cats. In the real world, these might both be valid rejoinders, but for the purposes of this thought experiment, it’s clear these could be nullified (e.g. if we assume few other societies like this one and a large direct utility gain).
The only way out that Williams sees fit to offer us is an obvious trap. What if we claimed that the feelings of group A were entirely irrational and that they should just learn to live with them? Then we wouldn’t be stuck advocating for what is essentially ethnic cleansing. But humans are not rational actors. If we were to ignore all such irrational feelings, then utilitarianism would no longer be a pragmatic ethical system that interacts with the world as it is. Instead, it would involve us interacting with the world as we wish it to be.
Furthermore, it is always a dangerous game to discount other people’s feelings as irrational. The problem with the word irrational (in the vernacular, not utilitarian sense) is that no one really agrees on what is irrational. I have an intuitive sense of what is obviously irrational. But so, alas, do you. These senses may align in some regions (e.g. we both may view it as irrational to be angry because of a belief that the government is controlled by alien lizard-people), but not necessarily in others. For example, you may view my atheism as deeply irrational. I obviously do not.
Williams continues this critique to point out that much of the discomfort that comes from considering – or actually doing – things the utilitarian way comes from our moral intuitions. While Smart and I are content to discount these feelings, Williams is horrified at the thought. To view discomfort from moral intuitions as something outside yourself, as an unpleasant and irrational emotion to be avoided, is – to Williams – akin to losing all sense of moral identity.
This strikes me as more of a problem for rationalist philosophers. If you believe that morality can be rationally determined via the correct application of pure reason, then moral intuitions must be key to that task. From a materialist point of view though, moral intuitions are evolutionary baggage, not signifiers of something deeper.
Still, Williams made me realize that this left me vulnerable to the question “what is the purpose of having morality at all if you discount the feelings that engender morality in most people?”, a question I’m at a loss to answer well. All I can say (tautologically) is “it would be bad if there was no morality”; I like morality and want it to keep existing, but I can’t ground it in pure reason or empiricism; no stone tablets have come from the world. Religions are replete with stone tablets and justifications for morality, but they come with metaphysical baggage that I don’t particularly want to carry. Besides, if there was a hell, utilitarians would have to destroy it.
I honestly feel like a lot of my disagreement with Williams comes from our differing positions on the intuitive/systematizing axis. Williams has an intuitive, fluid, and difficult to articulate sense of ethics that isn’t necessarily transferable or even explainable. I have a system that seems workable and like it will lead to better outcomes. But it’s a system and it does have weird, unintuitive corner cases.
Williams talks about how integrity is a key moral stance (I think motivated by his insistence on authenticity). I agree with him as to the instrumental utility of integrity (people won’t want to work with you or help you if you’re an ass or unreliable). But I can’t ascribe integrity some sort of quasi-metaphysical importance or treat it as a terminal value in itself.
In the section on integrity, Williams comes back to negative responsibility. I do really respect Williams’ ability to pepper his work with interesting philosophical observations. When talking about negative responsibility, he mentions that most moral systems acknowledge some difference between allowing an action to happen and causing it yourself.
Williams believes the moral difference between action and inaction is conceptually important, “but it is unclear, both in itself and in its moral applications, and the unclarities are of a kind which precisely cause it to give way when, in very difficult cases, weight has to be put on it”. I am jealous three times over at this line, first at the crystal-clear metaphor, second at the broadly applicable thought underlying the metaphor, and third at the precision of language with which Williams pulls it off.
(I found Williams a less consistent writer than Smart. Smart wrote his entire essay in a tone of affable explanation and managed to inject a shocking amount of simplicity into a complicated subject. Williams frequently confused me – which I feel comfortable blaming at least in part on our vastly different axioms – but he was capable of shockingly resonant turns of phrase.)
I doubt Williams would be comfortable coming down either way on inaction’s equivalence to action. To the great humanist, it must ultimately (I assume) come down to the individual humans and what they authentically believed. Williams here is scoffing at the very idea of trying to systematize this most slippery of distinctions.
For utilitarians, the absence or presence of a distinction is key to figuring out what they must do. Utilitarianism can imply “a boundless obligation… to improve the world”. How a utilitarian undertakes this general project (of utility maximization) will be a function of how she can affect the world, but it cannot, to Williams, ever be the only project anyone undertakes. If it were the only project, underlain by no other projects, then it would, in Williams’ words, be “vacuous”.
The utilitarian can argue that her general project will not be the only project, because most people aren’t utilitarian and therefore have their own projects going on. Of course, this only gets us so far. Does this imply that the utilitarian should not seek to convince too many others of her philosophy?
What does it even mean for the general utilitarian project to be vacuous? As best I can tell, what Williams means is that if everyone were utilitarian, we’d all care about maximally increasing the utility of the world, but either be clueless where to start or else constantly tripping over each other (imagine, if you can, millions of people going to sub-Saharan Africa to distribute bed nets, all at the same time). The first order projects that Williams believes must underlie a more general project are things like spending time with friends, or making your family happy. Williams also believes that it might be very difficult for anyone to be happy without some of these more personal projects.
I would suggest that what each utilitarian should do is what they are best suited for. But I’m not sure if this is coherent without some coordinating body (i.e. a god) ensuring that people are well distributed for all of the projects that need doing. I can also suppose that most people can’t go that far on willpower. That is to say, there are few people who are actually psychologically capable of working to improve the world in a way they don’t enjoy. I’m not sure I have the best answer here, but my current internal justification leans much more on the second answer than the first.
Which is another way of saying that I agree with Williams; I think utilitarianism would be self-defeating if it suggested that the only project anyone should undertake is improving the world generally. I think a salient difference between us is that he seems to think utilitarianism might imply that people should only work on improving the world generally, whereas I do not.
This discussion of projects leads to Williams talking about the hedonic paradox (the observation that you cannot become happy by seeking out pleasures), although Williams doesn’t reference it by name. Here Williams comes dangerously close to a very toxic interpretation of the hedonic paradox.
Williams believes that happiness comes from a variety of projects, not all of which are undertaken for the good of others or even because they’re particularly fun. He points out that few of these projects, if any, are the direct pursuit of happiness and that happiness seems to involve something beyond seeking it. This is all conceptually well and good, but I think it makes happiness seem too mysterious.
I wasted years of my life believing that the hedonic paradox meant that I couldn’t find happiness directly. I thought if I did the things I was supposed to do, even if they made me miserable, I’d find happiness eventually. Whenever I thought of rearranging my life to put my happiness first, I was reminded of the hedonic paradox and desisted. That was all bullshit. You can figure out what activities make you happy and do more of those and be happier.
There is a wide gulf between the hedonic paradox as originally framed (which is purely an observation about pleasures of the flesh) and the hedonic paradox as sometimes used by philosophers (which treats happiness as inherently fleeting and mysterious). I’ve seen plenty of evidence for the first, but absolutely none for the second. With his critique here, I think Williams is arguably shading into the second definition.
This has important implications for the utilitarian. We can agree that for many people, the way to most increase their happiness isn’t to get them blissed out on food, sex, and drugs, without this implying that we will have no opportunities to improve the general happiness. First, we can increase happiness by attacking the sources of misery. Second, we can set up robust institutions that are conducive to happiness. A utilitarian urban planner would perhaps give just as much thought to ensuring there are places where communities can meet and form as she would to ensuring that no one would be forced to live in squalor.
Here’s where Williams gets twisty though. He wanted us to come to the conclusion that a variety of personal projects are necessary for happiness so that he could remind us that utilitarianism’s concept of negative responsibility puts great pressure on an agent not to have her own personal projects beyond the maximization of global happiness. The argument here seems to be (not for the first time) that utilitarianism is self-defeating because it will make everyone miserable if everyone is a utilitarian.
Smart tried to short-circuit arguments like this by pointing out that he wasn’t attempting to “prove” anything about the superiority of utilitarianism, simply presenting it as an ethical system that might be more attractive if it was better understood. Faced with Williams’ point here, I believe that Smart would say that he doesn’t expect everyone to become utilitarian and that those who do become utilitarian (and stay utilitarian) are those most likely to have important personal projects that are generally beneficent.
I have the pleasure of reading the blogs and Facebook posts of many prominent (for certain unusual values of prominent) utilitarians. They all seem to be enjoying what they do. These are people who enjoy research, or organizing, or presenting, or thought experiments and have found ways to put these vocations to use in the general utilitarian project. Or people who find that they get along well with utilitarians and therefore steer their career to be surrounded by them. This is basically finding ikigai within the context of utilitarian responsibilities.
Saying that utilitarianism will never be popular outside of those suited for it means accepting we don’t have a universal ethical solution. This is, I think, very pragmatic. It also doesn’t rule out utilitarians looking for ways we can encourage people to be more utilitarian. To slightly modify a phrase that utilitarian animal rights activists use: the best utilitarianism is the type you can stick with; it’s better to be utilitarian 95% of the time than it is to be utilitarian 100% of the time – until you get burnt out and give it up forever.
I would also like to add a criticism of Williams’ complaint that utilitarian actions are overly determined by the actions of others. Namely, the status quo certainly isn’t perfect. If we reject actions because they are not among the projects we would most like to be doing, then we are tacitly endorsing the status quo. Moral decisions cannot be made in a vacuum and the terrain in which we must make moral decisions today is one marked by horrendous suffering, inequality, and unfairness.
The next two sections of Williams’ essay were the most difficult to parse, but also the most rewarding. They deal with the interplay between calculating utilities and utilitarianism and question the extent to which utilitarianism is practical outside of appealing to the idea of total utility. That is to say, they ask if the unique utilitarian ethical frame can, under practical conditions, have practical effects.
To get to the meat of Williams’ points, I had to wade through what at times felt like word games. All of the things he builds up to throughout these lengthy sections begin with a premise made up of two points that Williams thinks are implied by Smart’s essay:
1. All utilities should be assessed in terms of acts. If we’re talking about rules, governments, or dispositions, their utility stems from the acts they either engender or prevent.
2. To say that a rule (as an example) has any effect at all, we must say that it results in some change in acts. In Williams’ words: “the total utility effect of a rule’s obtaining must be cashable in terms of the effects of acts”.
Together, (1) and (2) make up what Williams calls the “act-adequacy” premise. If the premise is true, there must be no surplus source of utility outside of acts and, as Smart said, rule utilitarianism should (if it is truly concerned with optimific outcomes) collapse to act utilitarianism. This is all well and good when comparing systems as tools of total assessment (e.g. when we take the universe-wide view that I criticized Smart for hiding in), but Williams is first interested in how this causes rule and act utilitarianism to relate to actions.
If you asked an act utilitarian and a rule utilitarian “what makes that action right?”, they would give different answers. The act utilitarian would say that it is right if it maximizes utility, but the rule utilitarian would say it is right if it is in accordance with rules that tend to maximize utility. Interestingly, if the act-adequacy premise is true, then both act and rule utilitarians would agree as to why certain rules or dispositions are desirable, namely, that the actions that result from those rules or dispositions tend to maximize utility.
(Williams also points out that rules, especially formal rules, may derive utility from sources other than just actions following the rule. Other sources of utility include: explaining the rule, thinking about the rule, avoiding the rule, or even breaking the rule.)
But what do we do when actually faced with the actions that follow from a rule or disposition? Smart has already pointed out that we should praise or blame based on the utility of the praise/blame, not on the rightness or wrongness of the action we might be praising.
In Williams’ view, there are two problems with this. First, it is not a very open system. If you knew someone was praising or blaming you out of a desire to manipulate your future actions and not in direct relation to their actual opinion of your past actions, you might be less likely to accept that praise or blame. Therefore, it could very well be necessary for the utilitarian to hide why acts are being called good or bad (and therefore the reasons why they praise or blame).
The second problem is how this suggests utilitarians should stand with themselves. Williams acknowledges that utilitarians in general try not to cry over spilt milk (“[this] carries the characteristically utilitarian thought that anything you might want to cry over is, like milk, replaceable”), but argues that utilitarianism replaces the question of “did I do the right thing?” with “what is the right thing to do?” in a way that may not be conducive to virtuous thought.
(Would a utilitarian Judas have lived to old age contentedly, happy that he had played a role in humankind’s eternal salvation?)
The answer to “what is the right thing to do?” is of course (to the utilitarian) “that which has the best consequences”. Except “what is the right thing to do?” isn’t actually the right question to ask if you’re truly concerned with the best consequences. In that case, the question is “if asking this question is the right thing to do, what actions have the best consequences?”
Remember, Smart tried to claim that utilitarianism was only to be used for deliberative actions. But it is unclear which actions are the right ones to deliberate over, especially a priori. Sometimes you will waste time deliberating, time that in the optimal case you would have spent on good works. Other times, you will jump into acting and do the wrong thing.
The difference between act (direct) and rule (indirect) utilitarianism therefore comes to a question of motivation vs. justification. Can a direct utilitarian use “the greatest total good” as a motivation if they do not know if even asking the question “what will lead to the greatest total good?” will lead to it? Can it only ever be a justification? The indirect utilitarian can be motivated by following a rule and justify her actions by claiming that generally followed, the rule leads to the greatest good, but it is unclear what recourse (to any direct motivation for a specific action) the direct utilitarian has.
Essentially, adopting act utilitarianism requires you to accept that because you have accepted act utilitarianism you will sometimes do the wrong thing. It might be that you think that you have a fairly good rule of thumb for deliberating, such that this is still the best of your options to take (and that would be my defense), but there is something deeply unsettling and somewhat paradoxical about this consequence.
Williams makes it clear that the bad outcomes here aren’t just the loss of an agent’s time. This is similar in principle to how we calculate the total utility of promulgating a rule. We accept that the total effects of the promulgation must include the utility or disutility that stems from avoiding or breaking the rule, in addition to the utility or disutility of following it. When looking at the costs of deliberation, we should also include the disutility that will sometimes come when we act deliberately in a way that is less optimific than we would have acted had we spontaneously acted in accordance with our dispositions or moral intuitions.
This is all in the case where the act-adequacy premise is true. If it isn’t, the situation is more complex. What if some important utility of actions comes from the mood they’re done in, or in them being done spontaneously? Moods may be engineered, but it is exceedingly hard to engineer spontaneity. If the act-adequacy premise is false, then it may not hold that the (utilitarian) best world is one in which right acts are maximized. In the absence of the act-adequacy premise it is possible (although not necessarily likely) that the maximally happy world is one in which few people are motivated by utilitarian concerns.
Even if the act-adequacy premise holds, we may be unable to know if our actions are at all right or wrong (again complicating the question of motivation).
Williams presents a thought experiment to demonstrate this point. Imagine a utilitarian society that noticed its younger members were liable to stray from the path of utilitarianism. This society might set up a Truman Show-esque “reservation” of non-utilitarians, with the worst consequences of their non-utilitarian morality broadcast for all to see. The youth wouldn’t stray and the utility of the society would be increased (for now, let’s beg the question of utilitarianism as a lived philosophy being optimific).
Here, the actions of the non-utilitarian holdouts would be right; on this both utilitarians (looking from a far enough remove) and the subjects themselves would agree. But this whole thing only works if the viewers think (incorrectly) that the actions they are seeing are wrong.
From the global utilitarian perspective, it might even be wrong for any of the holdouts to become utilitarian (even if utilitarianism was generally the best ethical system). If the number of viewers is large enough and the effect of one fewer irrational holdout is strong enough (this is a thought experiment, so we can fiddle around with the numbers such that this is indeed true), the conversion of a hold-out to utilitarianism would be really bad.
Basically, it seems possible for there to be a large difference between the correct action as chosen by the individual utilitarian with all the knowledge she has and the correct action as chosen from the perspective of an omniscient observer. From the “total assessment” perspective, it is even possible that it would be best that there be no utilitarians.
Williams points out that many of the qualities we value and derive happiness from (stubborn grit, loyalty, bravery, honour) are not well aligned with utilitarianism. When we talked about ethnic cleansing earlier, we acknowledged that utilitarianism cannot distinguish between preferences people have and the preferences people should have; both are equally valid. With all that said, there’s a risk of resolving the tension between non-utilitarian preferences and the joy these preferences can bring people by trying to shape the world not towards maximum happiness, but towards the happiness easiest to measure and most comfortable to utilitarians.
Utilitarianism could also lead to disutility because of the game theoretic consequences. On international projects or projects between large groups of people, sanctioning other actors must always be an option. Without sanctioning, the risk of defection is simply too high in many practical cases. But utilitarians are uniquely compelled to sanction (or else surrender).
If there is another group acting in an uncooperative or anti-utilitarian manner, the utilitarians must apply the least terrible sanction that will still be effective (as the utility of those they’re sanctioning still matters). The other group will of course know this and have every incentive to commit to making any conflict arising from the sanction so terrible as to make any sanctioning wrong from a utilitarian point of view. Utilitarians now must call the bluff (and risk horrible escalating conflict), or else abandon the endeavour.
This is in essence a prisoner’s dilemma. If the non-utilitarians carry on without being sanctioned, or if they change their behaviour in response to sanctions without escalation, everyone will be better off (than in the alternative). But if utilitarians call the bluff and find it was not a bluff, then the results could be catastrophic.
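The brinkmanship here can be made concrete with a little expected-value arithmetic. The payoffs below are invented purely for illustration – none of them come from Williams – but they show how cheaply a committed-seeming opponent can deter a utilitarian sanction:

```python
# Toy expected-value sketch of the sanctioning stand-off.
# All payoffs are invented for illustration, in arbitrary utility units.

SANCTION_VS_BLUFF = 10        # they back down; behaviour improves
SANCTION_VS_COMMITTED = -100  # escalating conflict: catastrophic
BACK_DOWN = -5                # defection goes unsanctioned either way

def should_sanction(p_bluff):
    """Sanction only if its expected utility beats backing down."""
    ev_sanction = (p_bluff * SANCTION_VS_BLUFF
                   + (1 - p_bluff) * SANCTION_VS_COMMITTED)
    return ev_sanction > BACK_DOWN

# Solving 10p - 100(1 - p) > -5 gives p > 95/110 (about 0.86): the other
# group only needs to look modestly committed to deter the sanction.
print(should_sanction(0.9), should_sanction(0.8))  # True False
```

On these made-up numbers, the utilitarians sanction only when they are at least about 86% sure the threat is a bluff, which is precisely the lever the other group can pull by cultivating an appearance of commitment.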
Williams seems to believe that utilitarians will never include an adequate fudge factor for the dangers of mutual defection. He doesn’t suggest pacifism as an alternative, but he does believe that the threshold for violent sanctioning should sit far beyond where he assesses the simple utilitarian one to lie.
This position might be more of a historical one, in reaction to the efficiency-, order-, and domination-obsessed Soviet Communism (and its Western fellow travelers), which tended towards utilitarian justifications. All of the utilitarians I know are committed classical liberals (indeed, it sometimes seems to me that only utilitarians are classical liberals these days). It’s unclear if Williams’ criticism can be meaningfully applied to utilitarians who have internalized the severe detriments of escalating violence.
While it seems possible to produce a thought experiment where even such committed second order utilitarians would use the wrong amount of violence or sanction too early, this seems unlikely to come up in a practical context – especially considering that many of the groups most keen on using violence early and often these days aren’t in fact utilitarian. Instead it’s members of both the extreme left and right, who have independently – in an amusing case of horseshoe theory – adopted a morality based around defending their tribe at all costs. This sort of highly local morality is anathema to utilitarians.
Williams didn’t anticipate this shift. I can’t see why he shouldn’t have. Utilitarians are ever pragmatic and (should) understand that utilitarianism isn’t served by starting horrendous wars willy-nilly.
Then again, perhaps this is another harbinger of what Williams calls “utilitarianism ushering itself from the scene”. He believes that the practical problems of utilitarian ethics (from the perspective of an agent) will move utilitarianism more and more towards a system of total assessment. Here utilitarianism may demand certain things in the way of dispositions or virtues and certainly it will ask that the utility of the world be ever increased, but it will lose its distinctive character as a system that suggests actions be chosen in such a way as to maximize utility.
Williams calls this the transcendental viewpoint and pithily asks “if… utilitarianism has to vanish from making any distinctive mark in the world, being left only with the total assessment from the transcendental standpoint – then I leave it for discussion whether that shows that utilitarianism is unacceptable or merely that no one ought to accept it.”
This, I think, ignores the possibility that it might become easier in the future to calculate the utility of certain actions. The results of actions are inherently chaotic and difficult to judge, but then, so is the weather. Weather prediction has been made tractable by the application of vast computational power. Why not morality? Certainly, this can’t be impossible to envision. Iain M. Banks wrote a whole series of books about it!
Of course, if we wish to be utilitarian on a societal level, we must currently do so without the support of godlike AI. Governing a society is what utilitarianism was invented for in the first place. It was attractive because it is minimally committed – it has no elaborate theological or philosophical commitments buttressing it, unlike contemporaneous systems (like Lockean natural law). There is something intuitive about the suggestion that a government should concern itself only with the welfare of the governed.
Sure, utilitarianism makes no demands on secondary principles, Williams writes, but it is extraordinarily demanding when it comes to empirical information. Utilitarianism requires clear, comprehensible, and non-cyclic preferences. For any glib rejoinders about mere implementation details, Williams has this to say:
[These problems are] seen in the light of a technical or practical difficulty and utilitarianism appeals to a frame of mind in which technical difficulty, even insuperable technical difficulty, is preferable to moral unclarity, no doubt because it is less alarming.
Williams suggests that the simplicity of utilitarianism isn’t a virtue, only indicative of “how little of the world’s luggage it is prepared to pick up”. By being immune to concerns of justice or fairness (except insofar as they are instrumentally useful to utilitarian ends), Williams believes that utilitarianism fails at many of the tasks that people desire from a government.
Personally, I’m not so sure a governmental commitment to fairness or justice tells us much at all. There are currently at least two competing (and mutually exclusive) definitions of both fairness and justice in political discourse.
Should fairness be about giving everyone the same things? Or should it be about giving everyone the tools they need to have the same shot at meaningful outcomes (noting, of course, that “meaningful” is a societal construct)? Should justice mean taking into account mitigating factors and aiming for reconciliation? Or should it mean doing whatever is necessary to make recompense to the victim?
It is too easy to use fairness or justice as a sword without stopping to assess whom it is aimed at and what the consequences of that aim are (says the committed consequentialist). Fairness and justice are meaty topics that deserve better than to be thrown around as platitudinous counterarguments to utilitarianism.
A much better critique of utilitarian government can be made by imagining how such a government would respond to non-utilitarian concerns. Would it ignore them? Or would it seek to direct its citizens to have only utilitarian concerns? The latter seems practically impossible; the former raises important questions.
Imagine a government that is minimally responsive to non-utilitarian concerns. It primarily concerns itself with maximizing utility, but accepts the occasional non-utilitarian decision as the cost it must pay to remain in power (presume that the opposition is not utilitarian and would be very responsive to non-utilitarian concerns in a way that would reduce the global utility). This government must necessarily look very different to the utilitarian elite who understand what is going on and the masses who might be quite upset that the government feels obligated to ignore many of their dearly held concerns.
Could such an arrangement exist with a free media? With free elections? Democracies are notably less corrupt than autocracies, so there are significant advantages to having free elections and free media. But if those exist, how does the utilitarian government propose to keep its secrets hidden from the population? And if it succeeded, how could it respect citizens it had so thoroughly duped?
In addition to all that, there is the problem of calculating how to satisfy people’s preferences. Williams identifies three problems here:
How do you measure individual welfare?
To what extent is welfare comparative?
How do you develop the aggregate social preference given the answers to the preceding two questions?
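The aggregation problem is not merely one of effort. As a small illustration (my own, not Williams’), here is the classic Condorcet paradox: three voters, each with a perfectly sensible individual ranking, whose majority-vote aggregate is a cycle – violating the non-cyclic preferences Williams notes utilitarianism requires:

```python
# Three voters with rotated rankings over options A, B, C
# (the classic Condorcet cycle).
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y, rankings):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for r in rankings if r.index(x) < r.index(y))
    return wins > len(rankings) / 2

# Each pairwise contest is won 2-1, so the aggregate preference
# is cyclic: A beats B, B beats C, and C beats A.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y, rankings)}")
```

No social ranking of the form X > Y > Z can represent this electorate, which is exactly the sort of quietly enormous empirical demand Williams is pointing at.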
Williams seems to suggest that a naïve utilitarian approach involves what I think is best summed up in a sick parody of Marx: from each according to how little they’ll miss it, to each according to how much they desire it. Surely there cannot be a worse incentive structure imaginable than the one naïve utilitarianism suggests?
When dealing with preferences, it is also the case that utilitarianism makes no distinction between fixing inequitable distributions that cause discontent and – as observed in America – convincing those affected by inequitable distributions not to feel discontent.
More problems arise around substitution or compensation. It may be more optimific for a roadway to be built one way than another and it may be more optimific for compensation to be offered to those who are affected, but it is unclear that the compensation will be at all worth it for those affected (to claim it would be, Williams declares, is “simply an extension of the dogma that every man has his price”). This is certainly hard for me to think about, even (or perhaps especially) because the common utilitarian response is a shrug – global utility must be maximized, after all.
Utilitarianism is about trade-offs. And some people have views which they hold to be beyond all trade-off. It is even possible for happiness to be buttressed by or rest entirely upon principles – principles that, when dearly and truly held, cannot be traded off against. Certainly, utilitarians can attempt to work around this – if such people are a minority, they will be happily trammelled by a utilitarian majority. But it is unclear what a utilitarian government could do in a case where the majority of its population is “afflicted” with deeply held non-utilitarian principles.
Williams sums this up as:
Perhaps humanity is not yet domesticated enough to confine itself to preferences which utilitarianism can handle without contradiction. If so, perhaps utilitarianism should lope off from an unprepared mankind to deal with problems it finds more tractable – such as that presented by Smart… of a world which consists only of a solitary deluded sadist.
Finally, there’s the problem of people being terrible judges of what they want, or simply not understanding the effects of their preferences (as the Americans who rely on the ACA but want Obamacare repealed may find out). It is certainly hard to walk the line between respecting the preferences people would have if they were better informed or truly understood the consequences of their desires, and the common (leftist?) fallacy of assuming that everyone who held all of the information you have must necessarily share your beliefs.
All of this combines to make Williams view utilitarianism as dangerously irresponsible as a system of public decision making. It assumes that preferences exist, that the method of collecting them doesn’t fail to capture meaningful preferences, that these preferences would be vindicated if implemented, and that there’s a way to trade-off among all preferences.
To the potential utilitarian rejoinder that half a loaf is better than none, he points out that a partial version of utilitarianism is very vulnerable to the streetlight effect. It might be used where it can be and therefore act to legitimize – as “real” – concerns in the areas where it can be used and delegitimize those where it is unsuitable. This can easily lead to the McNamara fallacy: deliberate ignorance of everything that cannot be quantified:
The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.
— Daniel Yankelovich, “Corporate Priorities: A Continuing Study of the New Demands on Business” (1972)
This isn’t even to mention something that every serious student of economics knows: when an ideal system is unattainable, the feasible system that most closely resembles the ideal is not necessarily the one that captures the most of its benefits. Economists call this the “theory of the second best”. Perhaps ethicists might call it “common sense” when applied to their domain?
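The theory of the second best can be made concrete with a toy congestion model (my own construction, not Williams’ or any particular economist’s). Suppose travellers choose between two roads: road 1 is congestible but cannot be tolled (an unfixable distortion), and road 2 has a fixed cost. Pricing road 2 at its “ideal” undistorted level turns out worse than deliberately distorting it with a subsidy:

```python
def social_cost(subsidy, n=100):
    """Total real travel cost when n travellers split between road 1
    (congestible: per-traveller cost x/50, untollable) and road 2
    (fixed cost 2, reduced by `subsidy` in travellers' eyes).

    Travellers equalise *perceived* costs: x/50 = 2 - subsidy.
    The subsidy is a transfer, so it doesn't enter resource costs.
    """
    x = min(n, max(0, 50 * (2 - subsidy)))  # travellers on road 1
    return x * (x / 50) + (n - x) * 2       # real resource cost

print(social_cost(0))  # road 2 priced "ideally" (no distortion): 200.0
print(social_cost(1))  # a distorted-looking subsidy on road 2:   150.0
```

With the subsidy, half the travellers shift to road 2 and total cost falls from 200 to 150 – the system closest to the ideal (no subsidy anywhere) is not the best feasible one. This is the second-best logic the paragraph above gestures at.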
Williams ultimately doubts that systematic thought is at all capable of dealing with the myriad complexities of political (and moral) life. He describes utilitarianism as “having too few thoughts and feelings to match the world as it really is”.
I disagree. Utilitarianism is hard, certainly. We do not agree on what happiness is, or how to determine which actions will most likely bring it about – fine. Much of this comes from our messy inbuilt intuitions, intuitions that are not suited to the world as it now is. If utilitarianism is simple-minded, surely every other moral system (or lack of system) must be as well.
In many ways, Williams did shake my faith in utilitarianism – making this an effective and worthwhile essay. He taught me to be fearful of eliminating from consideration all joys but those that the utilitarian can track. He drove me to question how one can advocate for any ethical system at all, denied the twin crutches of rationalism and theology. And he further shook my faith in individuals being able to do most aspects of the utilitarian moral calculus. I think I’ll have more to say on that last point in the future.
But by their actions you shall know the righteous. Utilitarians are currently at the forefront of global poverty reduction, disease eradication, animal suffering alleviation, and existential risk mitigation. What complexities of the world has every other ethical system missed to leave these critical tasks largely to utilitarians?
Williams gave me no answer to this. For all his beliefs that utilitarianism will have dire consequences when implemented, he has no proof to hand. And ultimately, consequences are what you need to convince a consequentialist.