Economics, Politics, Quick Fix

Against Degrowth

Degrowth is the political platform that holds our current economic growth as unsustainable and advocates for a radical reduction in our resource consumption. Critically, it rejects that this reduction can occur at the same time as our GDP continues to grow. Degrowth, per its backers, requires an actual contraction of the economy.

The Canadian New Democratic Party came perilously close to being taken over by advocates of degrowth during its last leadership race, which goes to show just how much leftist support the movement has gained since its debut in 2008.

I believe that degrowth is one of the least sensible policies being advocated for by elements of the modern left. This post collects my three main arguments against degrowth in a package that is easy to link to in other online discussions.

To my mind, advocates of degrowth fail to offer a positive vision of the transition to a less environmentally intensive economy. North America is already experiencing a resurgence in forest cover, land devoted to agriculture worldwide has been stable for the past 15 years (and will probably begin to decline by 2050), and arable land use per person continues to decrease. In Canada, CO2 emissions per capita peaked in 1979, forty years ago. Total CO2 emissions peaked in 2008 and CO2 emissions per $ of GDP have been falling continuously since 1990.

All of this is evidence of an economy slowly shifting away from stuff. For an economy to grow as people turn away from stuff, they have to consume something else, which for consumers often means services and experiences. Instead of degrowth, I think we should accelerate this process.

It is very possible to have GDP growth while rapidly decarbonizing an economy. This simply looks like people shifting their consumption from things (e.g. cars, big houses) towards experiences (locally sourced dinners, mountain biking their local trails). We can accelerate this switch by “internalizing the externality” that carbon presents, which is a fancy way of saying “imposing a tax on carbon”. Global warming is bad and when we actually make people pay that cost as part of the price tag for what they consume, they switch their consumption habits. Higher gas prices, for example, tend to push consumers away from SUVs.
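As a rough illustration of the mechanism (all numbers here are assumptions for the sake of example, not estimates from any study), a constant-elasticity demand sketch shows how folding a per-unit tax into the price tag trims consumption of the taxed good:

```python
def quantity_after_tax(q0: float, p0: float, tax: float,
                       elasticity: float = -0.3) -> float:
    """Toy constant-elasticity sketch of a carbon tax shifting demand.

    q0, p0     -- initial quantity consumed and price per unit
    tax        -- per-unit carbon tax folded into the price tag
    elasticity -- assumed price elasticity of demand (negative:
                  higher prices mean less consumption; -0.3 is an
                  illustrative short-run figure, not a sourced one)
    """
    return q0 * ((p0 + tax) / p0) ** elasticity

# With these assumed numbers, a 10% price increase trims consumption
# of the taxed good by roughly 3%.
```

The point isn't the particular numbers; it's that pricing the externality shifts the consumption mix without requiring overall spending to shrink.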

A responsible decarbonisation push emphasises and supports growth in local service industries to make up for the loss of jobs in manufacturing and resource extraction. There’s a lot going for these jobs too; many of them give much more autonomy than manufacturing jobs (a strong determinant of job satisfaction) and they are, by their nature, rooted in local communities and hard to outsource.

(There are, of course, also many new jobs in clean energy that a decarbonizing and de-intensifying economy will create).

If, instead of pushing the economy towards a shift in how money is spent, you are pushing for an overall reduction in GDP, you are advocating for a decrease in industrial production without replacing it with anything. This is code for “decreasing standards of living”, or more succinctly, “a recession”. That is, after all, what we call a period of falling GDP.

This, I think, is the biggest problem with advocating degrowth. Voters are liable to punish governments even for recessions that aren’t their fault. If a government deliberately causes a recession, the backlash will be fierce. It seems likely that there is no way to continue the process of degrowth by democratic means once it has started.

This leaves two bad options: give over the reins of power to a government that will be reflexively committed to opposing environmentalists, or seize power by force. I hope that it is clear that both of these outcomes to a degrowth agenda would be disastrous.

Advocates of degrowth call my suggestions unrealistic, or outside of historical patterns. But this is clearly not the case; I’ve cited extensive historical data that shows an ongoing trend towards decarbonisation and de-intensification, both in North America and around the world. What is more unrealistic: to believe that the government can intensify an existing trend, or to believe that a government could be elected on a platform of triggering a recession? If anyone is guilty of pie-in-the-sky thinking here, it is not me.

Degrowth steals activist energy from sensible, effective policy positions (like a tax on carbon) that are politically attainable and likely to lead to a prosperous economy. Degrowth, as a policy, is especially easy for conservatives to dismiss and unwittingly aids them in their attempts to create a false dichotomy between environmental protection and a thriving economy.

It’s for these three reasons (the possibility of building thriving low carbon economies, the democratic problem, and the false dichotomy degrowth sets up) that I believe reasonable people have a strong responsibility to argue against degrowth, whenever it is advocated.

(For a positive alternative to degrowth, I personally recommend ecomodernism, but there are several good alternatives.)

History, Literature

Book Review: The Horse The Wheel And Language

The modern field of linguistics dates from 1786, when Sir William Jones, a British judge sent to India to learn Sanskrit and serve on the colonial Supreme Court, realized just how similar Sanskrit was to Persian, Latin, Greek, Celtic, Gothic, and English (yes, he really spoke all of those). He concluded that the similarities in grammar were too close to be the result of chance. The only reasonable explanation, he claimed, was the descent of these languages from some ancient progenitor.

This ancestor language is now awkwardly known as Proto-Indo-European (PIE). It and the people who spoke it are the subject of David Anthony’s book The Horse The Wheel And Language [1]. I picked up the book hoping to learn a bit about really ancient history. I ended up learning some of that, but this is more a book about linguistics and archeology than about history.

Proto-Indo-European speakers produced no written works, so almost all of their specific history is lost. The oldest products of their daughter languages – like the Rig Veda – date from well after the last speakers of the original language passed away.

Instead of the history that is largely barred to us, this book is really Professor David Anthony attempting to figure out who these speakers were and what their lives looked like, without the benefit of any written words. He does this via two channels: their language, and the physical remains of their culture.

Unfortunately, there is at least one glaring problem with each approach. Their language is thoroughly dead and there was (at the time of writing) no scholarly consensus on where they originated.

Professor Anthony is undaunted by these problems. It turns out that we can reconstruct their language and from that reconstruction, determine where they most likely lived. If both approaches are done properly, it should be possible to see archeological details reflected in their language and details of their language reflected in their remains.

The first problem to solve then is the reconstruction of PIE. How does one do this?

Well it turns out that all languages change in similar ways. The way we pronounce consonants often shifts, with hard sounds sometimes changing into soft sounds, but very rarely the reverse. How we say words also changes. Assimilation occurs because we tend to omit difficult-to-pronounce or inconvenient middle syllables (this has led to the invention of contractions in English) and addition happens because we add syllables in the middle of difficult tongue movements (compare the “proper” and colloquial ways of pronouncing the word “nuclear”, or the difference between the French athlète and the English athlete).

It would be very odd for an additional syllable to be added in an area where tongue movements aren’t particularly hard, or a syllable to be removed from a word that is typically enunciated. Above all, these changes are regular because they rely on predictable laziness.

Changes tend to happen to many words at once. When people began to hear the Proto-French tsentum (root of cent, the French word for 100) as different from the Latin kentum, they had to make a decision about how exactly it would be pronounced. They chose a soft-c, a sound Latin lacks, but that is easier to say. This change got carried over to every ts-, c-, or k- that had previously made the same sound as kentum/tsentum, except those before a back vowel (like “o”), presumably because a soft sound there is actually harder to say [2].

There’s one final type of change that Anthony mentions: analogy. This is where a grammatical rule used in a single place (e.g. pluralization with -s or -es) is expanded to encompass many more words or cases (most English nouns were originally pluralized with other suffixes, or with stem changes like “geese”; it was only later that people decided -s and -es would be the general markers of plural nouns).

If you have a large sample of languages descended from a historical language (and with Proto-Indo-European, there really is no lack), you can follow a bunch of words backwards through likely changes and see if they all end up in the same place.

If you do this for the modern words for “hundred” from many PIE daughter languages, you’re left with *km’tom (the asterisk marks a reconstructed form, one for which there is no direct written evidence). All words for hundred in modern descendants (as well as dead ancient descendants that we know how to speak) of Proto-Indo-European can be derived from *km’tom using only well-attested, empirically observed rules of language change.
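The forward check can be sketched mechanically: start from the reconstructed form and apply each branch’s sound changes in order, then compare against the attested daughters. Here is a minimal toy in Python; the rule sets are invented for illustration and are nothing like the real, much larger correspondence tables linguists use:

```python
import re

# Invented, drastically simplified rule sets (ordered), one per branch.
# Real reconstructions rest on hundreds of such correspondences.
SOUND_CHANGES = {
    "italic":   [("k", "c"), ("m'", "en")],              # toward Latin centum
    "germanic": [("k", "h"), ("t", "d"), ("m'", "un")],  # Grimm's-law-flavoured
}

def derive(proto_form: str, branch: str) -> str:
    """Apply a branch's sound changes, in order, to a reconstructed form."""
    form = proto_form.lstrip("*")  # the asterisk just marks "reconstructed"
    for pattern, replacement in SOUND_CHANGES[branch]:
        form = re.sub(pattern, replacement, form)
    return form
```

Run forward from *km’tom, these toy rules land near the attested daughters (Latin centum, Germanic hund-); a reconstruction is accepted when every daughter form is reachable by some such regular path.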

(I occasionally got chills reading reconstructed words. It’s amazing how some words that our distant ancestors spoke thousands upon thousands of years ago are fairly well preserved in our modern speech.)

This is pretty cool, because it allows us to start seeing which words were common enough in Proto-Indo-European to be passed down to all daughters and which words were borrowed in.

With a reconstructed vocabulary of about 1,500 words, we can figure out some things that were important to Proto-Indo-Europeans. They seem to have words for relatives on the male side, but not the female side. This suggests that after marriage, the wife moved in with the groom. Less domestically, they seemed to have a word for cattle rustling, suggesting that they weren’t unfamiliar with increasing their wealth at the expense of their neighbours’.

That’s not all we can get from their words. Linguists also believe that Proto-Indo-Europeans had chiefs, who in turn had patrons. They worshipped a male sky deity and sacrificed horses and cattle to him. They formed warrior bands. They avoided speaking the name of the bear. They drove, or knew of, wagons. And they had two words that we could translate as sacred, “that which is forbidden” and “that which is imbued with holiness”.

(There are many more minor cultural touchstones scattered throughout the book. I don’t want to spoil them all.)

We also know the animals and plants they had words for. Reconstructed PIE has words for temperate trees, horses and cows, bees and honey.

These give us clues to where they lived, in the same way that knowing the words “shinney”, “hockey”, “Zamboni” and “creek” are spoken somewhere might help you make a guess as to where that somewhere is.

And while these words help us rule out the Mediterranean and the deserts, they don’t give us much in the way of a specific location without a when, which requires two different methods.

First, we can figure out the approximate death of Proto-Indo-European, the approximate century or millennium when it was entirely splintered into its daughters, by using what linguists have discovered about the rate of language change.

While most vocabulary changes rather quickly, making this a poor tool for dating very old languages, there are a group of words, the core vocabulary, that change much more slowly. The core vocabulary of any language is only a couple hundred words, but they’re some of the most important ones. Normally, core vocabulary includes the words for: body parts, small numbers, close relatives, a few basic needs, a couple of natural features or domesticated animals, some pronouns, and some conjunctions.

English, a prolific borrower, has borrowed 50% of its total vocabulary from the romance languages. Its core vocabulary, however, is largely free of this borrowing, with only 4% of core vocabulary words borrowed from romance languages.

Core vocabulary changes by about 14-19% every thousand years depending on the language. It’s also known that once two dialects differ by more than 10% of their core vocabulary, they are more properly thought of as separate languages.
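Put together, those two figures give the classic glottochronology estimate: from the fraction of core vocabulary two languages still share, work backwards through the known replacement rate. A minimal sketch (the 16.5% rate below is just the midpoint of the 14–19% range above, and the method’s real-world accuracy is much debated):

```python
import math

def divergence_millennia(shared_core: float,
                         loss_per_millennium: float = 0.165) -> float:
    """Estimate how many thousand years ago two languages split.

    shared_core         -- fraction of core vocabulary still shared
    loss_per_millennium -- fraction of core vocabulary replaced per
                           1,000 years (midpoint of the 14-19% range)
    Both branches replace words independently, hence the factor of two.
    """
    retention = 1.0 - loss_per_millennium
    return math.log(shared_core) / (2.0 * math.log(retention))

# At the 10% "separate languages" threshold (90% shared core vocabulary),
# this puts the split only about three centuries back.
```

Under these assumptions, two languages sharing half their core vocabulary split nearly two millennia ago, which is why the method only helps with comparatively recent divergences.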

Here’s where written language comes in handy. By comparing written inscriptions with known creation dates in different daughter languages, we can make a guess as to when the languages diverged.

The oldest inscriptions in a PIE-derived language are in the Anatolian languages (which were spoken in what is now Turkey). However, Anthony chooses not to use these, because they entirely lack many grammatical innovations that are otherwise common in daughter languages. This leads him to believe that they split away much earlier than other daughters. The presence of later shared innovations means that at the time of the Anatolian split, Proto-Indo-European was probably still a living language and still evolving.

Better candidates are archaic Greek and Old-Indic, both of which have inscriptions dated to around 1,450 BCE. By comparing the differences in wording and grammar between these two and using known rates of change, Anthony dates the end of Proto-Indo-European at around 2,500 BCE. This means that after 2,500 BCE, it doesn’t make sense to speak of a single unified Proto-Indo-European language.

Second is the birth date, the other half of the critical window. To find it, Anthony looks for words that have a known date of invention, specifically “wool” and “wagon”. Getting broadly useful amounts of wool from sheep wasn’t possible until a mutation made sheep coats much larger. We know roughly when this mutation occurred, because sheep suddenly became a larger portion of herds around 3,500 BCE, displacing goats (which produce more milk). The only reasonable explanation for this event is the advent of wool producing sheep, which were very valuable as a source of clothes.

Similarly, wagons have left physical evidence (both directly and in preserved images) and that evidence has been carbon dated to 3,500 BCE [3].

Since all Proto-Indo-European languages outside of the Anatolian branch have related words for both “wagon” and “wool” that show no evidence of borrowing from other languages, it seems reasonable to conclude that some form of the language existed when wagons and wool first began to reshape the pre-historic world. That means the language had to exist by 3,500 BCE.

There is, I should note, one competing theory that Anthony outlines, in which PIE and the Indo-Hittite languages split around 7,500 BCE. This theory, however, requires several unlikely things: it requires the word for wagon to evolve from the same verb meaning “to turn” in both branches (five similar verbs existed); it requires the PIE-speaking people to disperse over all of Europe and become the dominant culture then (this would have been very hard pre-horse domestication, when material cultures were small and language territories tended to be much smaller than modern countries); and all of this would have to happen while material cultures were becoming very different but languages (supposedly) weren’t evolving.

Anthony doesn’t give this theory much credence.

With a rough time-range, we can begin looking for our Proto-Indo-Europeans in space. Anthony does this by looking for evidence of very old loan words. He finds a set coming from Uralic, which also has a bevy of very old loanwords from PIE [4].

Uralic (appropriately) probably first emerged somewhere near the Ural Mountains. This corresponds well with our other evidence because the area around the Urals (where borrowing could have taken place) is temperate and home to the flora and fauna words we know exist in PIE.

The PIE word for honey, *médhu (note its similarity with the English word for a fermented honey drink, “mead” [5]), is particularly useful here. We know that bees weren’t common in Siberia during the time when we suspect PIE was being spoken (and where they were common, the people weren’t herders), but that bees were common on the other side of the Urals.

Laying it all out, we see that PIE speakers were herders (there’s an expansive set of words relating to the tasks herders must accomplish), who lived near the Urals but not in Siberia. The best archeological match for these criteria is a set of herder people who lived in what is now modern-day Ukraine and it is these people that Anthony identifies as the Proto-Indo-Europeans.

If this feels at all dry, I want to assure you that it wasn’t when I read it. I felt that the first section of the book was the strongest. Anthony provides an excellent overview of linguistics, archeology, and some of the crazy stuff he’s had to invent to help him in his studies.

For example, he believes that horses were ridden much earlier than was commonly thought, perhaps around or before 3,500 BCE. To prove this, he and his wife embarked on a study of how bits wear teeth in horses’ mouths, which culminated in empirical studies with a variety of bit types (including rope) done on live horses that had never been previously given bits, assessed using electron microscopy. The whole thing is a bit bonkers, but it has resulted in a validated test that allows archeologists to determine if a given horse was ever ridden, as well as vindication for Anthony’s chronology of domestication.

Unfortunately, a lot of the rest of the book was genuinely dry. There was a dizzying array of cultures inhabiting the Eurasian steppes in the period Anthony covers, each with their own house type, pottery type, antecedents, and descendants. Anthony goes through these in excruciating detail. It’s the sort of thing that other archeologists love him for – a lot of these cultures are very poorly described outside of Russian language publications – but it’s hard for a lay-person to follow. I may have pulled it off if I built a giant flow chart, but as it was, I mostly felt overwhelmed.

(Anthony has to go through them all to explain how PIE-derived languages ended up everywhere we know them to have been spoken. People of Europe don’t speak PIE-derived languages just because of Latin. Many people the Romans conquered spoke languages that were distantly related to the invaders’ tongue. Those languages need to be accounted for in any theory about the Proto-Indo-Europeans.)

This is disappointing, because the history started off so engagingly. Anthony outlines how the earliest ancestors of the Proto-Indo-Europeans had persistent cultural frontiers with hunter-gatherers on the Urals on one side and the farmers in the Bug-Dniester valley on the other.

The herding and farming economies required a moral shift from previous hunter-gatherer practices, one that would see agriculturalists harden their hearts to their own children starving, if the only thing that could assuage their hunger was their last few breeding pairs or their seed grain. This is the first time I saw someone lay out the moral transformation necessary to accept agriculture, and having it laid out so starkly made it much easier to understand why not every pre-historic group was willing to adopt it.

(I had always thought the biggest moral change was accepting accumulation of wealth, but this one is, I think, more important.)

This is not to say that the herders and farmers were exactly alike; their different ways of life meant they were culturally distinct. In addition to their dwellings and material culture, they differed in funeral customs and probably in religion. Everything we know about early-PIE speakers suggest that they worshipped a sky god of some sort. The farmers who lived next door decorated their houses with female figurines, figures that never show up in any excavation of herder camps or grave sites.

I was also shocked at the amount of long distance trade and the wealth acquisition that was going on 6,000 years ago. There are kurgans (circular rock topped graves) with grave goods from Mesopotamia dating from that long ago, as well as one kurgan where someone was buried with almost 4 kilograms of gold ornamentation.

The herders and farmers didn’t live next door in harmony forever. Changes to their stable arrangement happened as a result of one of the Earth’s periodic climate fluctuations (which caused a collapse among many of the farmers and may have led to more raiding from the early-PIE speaking herders) and later the adoption of horse-riding (which made raiding easier) and wagons (which allowed herders to bring water with them and opened the inner steppes up to grazing).

Larger herds and changing boundaries led to clashes among the herders (we’ve found kurgans where the bodies bear marks of violent deaths) and to raids on agriculturalists (we’ve found burned villages peppered with arrows), although interestingly, never the farmers directly adjacent to the steppes. It may be that the herders didn’t want to disrupt their trading relationships with their neighbours and so were careful to raid dozens of kilometers away from their own borders (a task made easier with horses).

The farmers were no pushovers; some of their towns held up to 10,000 people by the third millennium BCE. These towns were bigger than the cities of Mesopotamia, but lacked the civic organizational features of the true cities of the Fertile Crescent.

And it was at about this point in the narrative where the number of cultures proliferated beyond my ability to follow and I began writing down interesting facts rather than keeping track of the grand narrative.

Here are a few that I liked the most:

  • About 20% of corpses in warrior graves (those with weapons and other symbols of membership in warrior society) whose gender is known are female. This matches the percentage in much later steppe graves. As Kameron Hurley said, women have always fought.
  • Contrary to popular stereotypes, the cultures of the Eurasian steppes weren’t reliant on cities for manufactured goods. They had their own potters and metalsmiths and they made many mining camps. In fact, by the 2000s BCE, it seems that Mesopotamian cities were dependent on metal mined on the steppes.
  • In the early Bronze Age, tin was worth its weight in silver. When tin wasn’t available, bronze was made with arsenic.
  • Horses were probably domesticated because they winter better than the other animals that were available in Eurasia at the time. Cows will starve to death if grass is hidden by snow, while sheep and goats use their nose to move snow off of grass (which means that they’re helpless once it’s covered in ice). Sheep, cows, and goats are all unable to drink water that is covered in ice. Horses break ice and move snow with their hooves, making winter no real inconvenience to them. Mixing horses with cows can allow cows to eat the grass that horses uncover.
  • Disaffected farmers may have been attracted to the herding economy because wealth was much easier to build up. Farmland is hard to acquire more of without angering your neighbours, but herds given good pasture will naturally grow exponentially. A lot of the spread of the herding economy into Europe probably used some sort of franchise system, where locals joined the PIE culture and were given some animals, in exchange for providing protection and labour to their patron.

I’ve struggled through a lot of books that are clearly meant for people more knowledgeable in the subject than I am. It might just be a function of how interested I am in archeology (that is to say: only tolerably interested) that this is the first of them that I wish had an abridged edition. If you aren’t deeply interested in archaeology or pre-history, there’s a lot of this book that you’ll probably end up skimming.

The rest of it makes up for that. But I think there would be a market for Anthony to write another, leaner volume meant for a more general audience.

If he ever does, I’ll probably give it a read.


[1] David Anthony is very sensitive to the political ends to which the study of Proto-Indo-European has sometimes been turned. He acknowledges that white supremacists appropriated the self-designation of “Aryan” used by some later speakers of PIE-derived languages and used it to refer to some sort of ancient master race. Professor Anthony does not buy into this one bit. He points out that Aryan was always a cultural term, not a racial one (showing the historical ignorance of the racists) and he is careful to avoid assigning any special moral or mythical virtue to the Proto-Indo-Europeans whose culture he studies.

White supremacists will find nothing to like about this book, unless they engage in a deliberate misreading.

[2] This is why the French côte is still similar to the Latin costa.

[3] Anthony identifies improvements in carbon dating, especially improvements in how we calibrate for diets high in fish (which contain older carbon, leading to incorrect ages) as a major factor in his ability to untangle the story of the Proto-Indo-Europeans.

[4] Uralic is the language family that in modern times includes Finnish and some languages spoken in Russia.

[5] While looking up the word *médhu, I found out that it is also likely the root of the Old Chinese word for honey, via an extinct Proto-Indo-European language, Tocharian. The speakers of Tocharian migrated from the Proto-Indo-European homeland to Xinjiang, in what is now China, which is likely where the borrowing took place.

Model, Politics, Quick Fix

The Nixon Problem

Richard Nixon would likely have gone down in history as one of America’s greatest presidents, if not for Watergate.

To my mind, his greatest successes were the opening to China and the end of the convertibility of dollars into gold, but he also deserves kudos for ending the war in Vietnam, continuing the process of desegregation, establishing the EPA, and signing the anti-ballistic missile treaty.

Nixon was willing to try unconventional solutions and shake things up. He wasn’t satisfied with leaving things as they were. This is, in some sense, a violation of political norms.

When talking about political norms, it’s important to separate them into their two constituent parts.

First, there are the norms of policy. These are the standard terms of the debate. In some countries, they may look like a (semi-)durable centrist consensus. In others they may require accepting single-party rule as a given.

Second are the norms that constrain the behaviour of people within the political system. They may forbid bribery, or self-dealing, or assassinating your political opponents.

I believe that the first set of political norms are somewhat less important than the second. The terms of the debate can be wrong, or stuck in a local maximum, such that no simple tinkering can improve the situation. Having someone willing to change the terms of the debate and try out bold new ideas can be good.

On the other hand, it is rarely good to overturn existing norms of political behaviour. Many of them came about only through decades of careful struggle, as heroic activists have sought to place reasonable constraints on the behaviour of the powerful, lest they rule as tyrants or pillage as oligarchs.

The Nixon problem, as I’ve taken to describing it, is that it’s very, very hard to find a politician who can shake up the political debate without at the same time shaking up our much more important political norms.

Nixon didn’t have to cheat his way to re-election. He won the popular vote by the highest absolute margin ever, some 18 million votes. He carried 49 out of 50 states, losing only Massachusetts.

Now it is true that Nixon used dirty tricks to face McGovern instead of Muskie and perhaps his re-election fight would have been harder against Muskie.

Still, given Muskie’s campaign was so easily derailed by the letter Nixon’s “ratfuckers” forged, it’s unclear how well he would have done in the general election.

And if Muskie was the biggest threat to Nixon, there was no need to bug Watergate after his candidacy had been destroyed. Yet Nixon and his team still ordered this done.

I don’t think it’s possible to get the Nixon who was able to negotiate with China without the Nixon who violated political norms for no reason at all. They were part and parcel of an overriding belief that he knew better than everyone else and that all that mattered was power for himself. Regardless, it is clear from Watergate that his ability to think outside of the current consensus was not something he could just turn off. Nixon is not alone in this.

One could imagine a hypothetical Trump (perhaps a Trump that listened to Peter Thiel more) who engaged mostly in well considered but outside-of-the-political-consensus policies. This Trump would have loosened FDA policies that give big pharma an unfair advantage, ended the mortgage tax deduction, and followed up his pressure on North Korea with some sort of lasting peace deal, rather than ineffective admiration of a monster.

The key realization about this hypothetical Trump is that, other than his particular policy positions, he’d be no different. He’d still idolize authoritarian thugs, threaten to lock up his political opponents, ignore important government departments, and surround himself with frauds and grifters.

I believe that it’s important to think how the features of different governments encourage different people to rise to the top. If a system of government requires any leader to first be a general, then it will be cursed with rigid leaders who expect all orders to be followed to the letter. If it instead rewards lying, then it’ll be cursed with politicians who go back on every promise.

There’s an important corollary to this: if you want a specific person to rule because of something specific about their character, you should not expect them to be able to turn it off.

Justin Trudeau cannot stop with the platitudes, even when backed into a corner. Donald Trump cannot stop lying, even when the truth is known to everyone. Richard Nixon couldn’t stop ignoring the normal way things were done in Washington, even when the normal way existed for a damn good reason.

This, I think, is the biggest mistake people like Peter Thiel made when backing Trump. They saw a lot of problems in Washington and correctly concluded that no one who was steeped in the ways of Washington would correct them. They decided that the only way forward was to find someone brash, who wouldn’t care about how things were normally done.

But they didn’t stop and think how far that attitude would extend.

Whenever someone tells you that a bold outsider is just what a system needs, remember that a Nixon who never did Watergate couldn’t have gone to China. If you back a new Nixon, you had better be prepared for a reprise.

Model, Philosophy, Quick Fix

Post-modernism and Political Diversity

I was reading a post-modernist critique of capitalist realism – the resignation to capitalism as the only practical way to organize a society, arising out of the failure of the Soviet Union – and I was struck by something interesting about post-modernism.

Insofar as post-modernism stands for anything, it is a critique of ideology. Post-modernism holds that there is no privileged lens with which to view the world; that even empiricism is suspect, because it too has a tendency to reproduce and reify the power structures in which it exists.

A startling thing then, is the sterility of the post-modernist political landscape. It is difficult to imagine a post-modernist who did not vote for Bernie Sanders or Jill Stein. Post-modernism is solely a creature of the left and specifically that part of the left that rejects the centrist compromise beloved of the incrementalist or market left.

There is a fundamental conflict between post-modernism’s self-proclaimed positioning as an ideology without an ideology – the only ideology conscious of its own construction – and its lack of political diversity.

Most other ideologies are tolerant of political divergence. Empiricists are found in practically every political party (with the exception, normally, being those controlled by populists) because empiricism comes with few built-in moral commitments and politics is as much about what should be as what is. Devout Catholics also find themselves split among political parties, as they balance the social justice and social order messages of their religion. You will even, I would bet, find more evangelicals in the Democratic party than you will find post-modernists in the Republican party (although perhaps this would just be an artifact of their relative population sizes).

Even neoliberals and economists, the favourite target of post-modernists, find their beliefs cash out to a variety of political positions, from anarcho-capitalism or left-libertarianism to main-street republicanism.

It is hard to square the narrowness of post-modernism’s political commitments with its anti-ideological intellectual commitments. Post-modernism positions itself in communion with the Real, that which “any [constructed, as through empiricism] ‘reality’ must suppress”. Yet the political commitments it makes require us to believe that the Real is in harmony with very few political positions.

If this were the actual position of post-modernism, then it would be vulnerable to a post-modernist critique. Why should a narrow group of relatively privileged academics in relatively privileged societies have a monopoly on the correct means of political organization? Certainly, if economics professors banded together to claim they had discovered the only means of political organization and the only allowable set of political beliefs, post-modernists would be quick to deploy exactly that critique. Why, then, should they be exempt?

If post-modernism instead does not believe it has found a deeper Real, then it must grapple with its narrow political attractions. Why should we view it as anything but a justification for a certain set of policy proposals, popular among its members but not necessarily elsewhere?

I believe there is value in understanding that knowledge is socially constructed, but I think post-modernism, by denying any underlying physical reality (in favour of a metaphysical Real) removes itself from any sort of feedback loop that could check its own impulses (contrast: empiricism). And so, things that are merely fashionable among its adherents become de facto part of its ideology. This is troubling, because the very virtue of post-modernism is supposed to be its ability to introspect and examine the construction of ideology.

This paucity of political diversity makes me inherently skeptical of any post-modernist identified Real. Absent significant political diversity within the ideological movement, it’s impossible to separate an intellectually constructed Real from a set of political beliefs popular among liberal college professors.

And “liberal college professors like it” just isn’t a real political argument.

Model, Politics

The Character of Leaders is the Destiny of Nations

The fundamental problem of governance is the misalignment between means and ends. In all practically achievable government systems, the process of acquiring and maintaining power requires different skills than the exercise of power. The core criteria of any good system of government, therefore, must be selecting people by a metric that bears some resemblance to governing, or perhaps more importantly, having a metric that actively filters out people who are not suited to govern.

When the difference between means and ends becomes extreme, achieving power serves only to demonstrate unsuitability for holding it. Such systems are inevitably doomed to collapse.

Many people (I am thinking most notably of neo-reactionaries) put too much stock in the incentives or institutions of government systems. Neo-reactionaries look at the institutions of monarchies and claim they lead to stability, because monarchs have a large personal incentive to improve their kingdom and their lifetime tenure should afford them a long time horizon.

In practice, however, monarchies are rather unstable. This is because monarchs are chosen by accident of birth and may have little affinity for the patient business of building a nation. In addition, to maintain power, monarchs must be responsive to the aristocracy. This encourages the well-documented disdain for the peasantry that was common in monarchical governments.

Monarchy, like many other systems of government, was not doomed so much by its institutions, as by its process for choosing a leader. The character of leaders is the destiny of nations and many forms of government have no way of picking people with a character conducive to governing well.

By observing the pathologies of failed systems of government, it becomes possible to understand why democracy is a uniquely successful form of government, as well as the risks that emergent social technologies pose to democracy.

The USSR

“Lenin’s core of original Bolsheviks… were many of them highly educated people…and they preserved these elements even as they murdered and lied and tortured and terrorised. They were social scientists who thought principle required them to behave like gangsters. But their successors… were not the most selfless people in Soviet society, or the most principled, or the most scrupulous. They were the most ambitious, the most domineering, the most manipulative, the most greedy, the most sycophantic.” – Francis Spufford, Red Plenty

The revolution that created the USSR was one founded on high-minded ideals. The revolutionaries were going to create a new society, one that was fair, equal, and perfect; a utopia on earth. Yet, the bloody business of carving out a new state often stood in stark contrast to these ideals – as is common in revolutions.

It is, as a rule, difficult to tell which revolutions will lead to good rule and which to bloody shambles and repression. Take, as an example, the Eritrean People’s Liberation Front. They started as an egalitarian organization that treated prisoners of war with respect and ended up as one of the most brutal governments in the world.

Seizing power in a revolution requires a grasp of military tactics and organization; the ability to build a parallel state apparatus in occupied areas; the ability to inspire people to fight for your side; and a grasp of propaganda. While there is overlap with the skills necessary for civilian rule here, the perspective of a rebel is particularly poorly suited to governing according to the rule of law.

It is hard to win a revolution without coming to believe on some fundamental level that might makes right. The 20th century is littered with examples of rebels who could not put aside this perspective shift when they transitioned to civilian rule.

(This, incidentally, is why nonviolent resistance leads to more stable governments and why repressive governments are so scared of it. A successful non-violent revolution leaves much less room for the dictator’s eventual return.)

It was so with the Soviets. Might makes right – perhaps more so even than communism – was the founding ideal of the Soviet Union.

Stalin succeeded Lenin as the leader of the Soviet Union via political manoeuvring, backstabbing, and the destruction of his enemies, tactics that would become key in future transfers of power.

To grasp the reins of the Soviet Union, it became necessary to view people as tools; to bribe key constituencies, to control the secret police, and to placate the army.

And this set of tools is not well suited to governing a prosperous nation. Attempts to reform the USSR with shadow prices, perhaps the only thing that could have saved communism, failed because shadow prices represented a loss of central control. If prices were not set politically, it would be impossible to manipulate them to reward compatriots and guarantee stability.

It’s true that the combination of its economic system and its ambitions doomed the Soviet Union right from the start. It could not afford to be a global superpower while constrained by an economic philosophy that sharply limited its growth and guaranteed frequent shortages. But both of these were, in theory, mutable. It was only with such an ossifying process for choosing leaders that the Soviet Union was destined for failure.

In the USSR, legitimacy didn’t come from the people, but from the party apparatus. Bold changes, of the sort necessary to rescue the Soviet economy, were unthinkable because they cut against too many entrenched interests. The army budget could not be decreased because the leader needed to maintain control of the army. The economic system couldn’t be changed because of how tightly the elite were tied to it.

The USSR needed bold, pioneering leaders who were willing to take risks and shake up the system. But the system guaranteed that those leaders would never rule. And so, eventually, the USSR fell.

Military Dictatorships

“The difference between a democracy and a dictatorship is that in a democracy you vote first and take orders later; in a dictatorship you don’t have to waste your time voting.” – Charles Bukowski

Military dictatorships that fall all fall in the same way: with an increasingly isolated junta issuing orders that are ignored by increasingly large swathes of the populace. The act of rising to the top of a military inculcates a belief that victory can always be achieved by finding the right set of orders. This is the mindset that military dictators bring to governing and it always leads to disaster. Whatever virtues of organization or delegation generals learn, it is never enough to overcome this central flaw.

Governing a modern state requires flexibility. There are always many constituencies: business owners, workers, teachers, doctors. There are often many regions, each with different economic needs. To support resource extraction can harm manufacturing – and vice versa. Bureaucrats have their own pet projects, their own red lines, and their own ideas.

This environment is about as different as it’s possible to be from an army. The military tells soldiers to follow orders. Civilians are rather worse at this task.

Expecting a whole society to follow orders, to put their own good aside for someone else’s plan is folly. Enough people will always buck orders to make a mockery of any grand design.

It is for this reason that military governments are so easy to satirize. Watching career soldiers try and herd cats can be darkly amusing, although the humour is quickly lost if one dwells too long on the atrocities military governments turn to when thwarted.

After all, the flip side of discipline is punishment. Failing to obey orders in the military is normally a crime, whereas failing to obey orders in the civil service is often par for the course. When these two mindsets collide, a junta is likely to impose harsh punishments on anyone disobeying. This doesn’t spring naturally from their position as dictators – most juntas start out with stunningly idealistic beliefs about national salvation – but does spring naturally from military regulations. And so again we see a case where it is the background of the leaders, not the structure of the dictatorship, that leads to the worst excesses.

You can replace the leaders as often as you like or tweak the laws, but as long as you keep appointing generals to rule, you will find they expect orders to be obeyed unquestioningly and respond harshly to any perceived disloyalty.

There is one last great vice of military dictatorships: a tendency to paper over domestic discontent with foreign wars. Military dictators know that revanchist wars can create popular support, so foreign adventuring is often their response when their legitimacy begins to crumble.

Off the top of my head, I can think of two wars started by military dictatorships seeking to improve their standing (the Falklands War and the Six-Day War). No doubt a proper survey would turn up many others.

Since the time of Plato, soldier-rulers have been held up as the ideal heads of state. It is perhaps time to abandon this notion.

Democracy

“Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.” – Winston Churchill to the House of Commons

To gain power in a democracy, a politician needs to win election. This normally requires some skill in oratory and debate, the ability to delegate to competent subordinates, the ability to come up with a plan and clearly articulate how it will improve people’s lives, possibly some past experience governing that paints a flattering picture, and above all a good reputation with enough people to win an election. This oft-maligned “popularity contest” is actually democracy’s secret weapon.

Democracy is principally useful as a form of government that is resistant to corruption. Corruption is the act of arrogating state power to take benefits for yourself or give them to your friends. Persistent and widespread corruption is one of the biggest impediments to growth worldwide, so any technology (and government systems are a type of cultural technology) that reduces corruption is a powerful force for human flourishing.

It is the requirement for a good reputation that helps democracy stand against corruption. In any society where corruption is scorned, democracy ensures that no one who is visibly corrupt can grasp power; if corruption is sufficient to ruin a reputation, no one who is corrupt can win a “popularity contest”.

(It is also worth noting that the demand for a sterling reputation rules out people who have tortured dissidents or ordered protestors shot. As long as autocrats are not revered, democracy can protect against many forms of repression.)

There are three main ways that democracy can fail to live up to its promise. First, it can fail because corruption isn’t appropriately sanctioned. If corruption becomes just the way things are done and scandals stop sticking, then democracy becomes much weaker as a check on corruption.

Second, democracy can be hijacked by individuals whose only skill is self-promotion. In a functioning democracy, the electorate demands that political resumes include real achievements. When this breaks down, democracy becomes a contest of who can disseminate their fake or exaggerated resume the furthest.

It is from this perspective that 24/7 news and social media present a threat to democracy. Donald Trump is an excellent example of this failure mode. He made use of viral lies and controversial statements to ensure that he was in front of as many voters as possible. His largely fake reputation for business acumen was enough to win over a few others.

There are many constituencies in all societies. Demonstrably, President Trump is not popular in America, but he appealed to enough people that he was able to build up a solid voting block in the primaries.

Beyond the primaries Trump demonstrated the third vulnerability of democracies: partisanship. Any democracy where partisanship becomes a key factor in elections is in grave danger. Normally, the reputational component of democracy selects for people with a resume of past successes (an excellent predictor of future successes) while elections with significant numbers of undecided voters provide an advantage to people who run tight campaigns – people who are good at nurturing talent and delegating (an excellent skill for governing).

Partisanship short-circuits this process and selects for whoever can whip up partisan crowds most successfully. This is a rather different sort of person! Rabid partisans spurn compromise and ignore everyone outside of their core constituency because those are the tactics that have rewarded them in the past.

Trump was able to win in part because such a large cross-section of the American electorate was willing to look beyond his flaws if it meant that someone from the other party didn’t win.

A large block of swing voters who look critically at politicians’ reputations and refuse to accept iconoclasts is an important safety valve in any democracy.

This model of democracy neatly explains why it isn’t universally successful. In societies with several strong tribal or religious identities, democracy results in cronyism dominated by the largest tribe/denomination, because it selects for whoever can promise the most to this large block. In countries that don’t have adequate cultural safeguards against corruption, corruption does not ruin reputations and democracy does nothing to squash it.

Democracy isn’t a panacea, but in the right cultural circumstances it is superior to any other realistic form of government.

Unfortunately, we can see that democracy is under attack on two fronts in Western nations. First, social media encourages shallow engagement and makes it easy for people to build constituencies around controversial statements. Second, partisanship is deepening in many societies.

I don’t know what specific remedies exist for these trends, but they strike me as two of the most important to reverse if we wish our democratic institutions to continue to provide good government.

If we cannot find a way to fix partisanship and self-promotion within our current system, then the most important political reform we can undertake is to find a system of government that can pick leaders with the right character for governing even under these very difficult circumstances.

[Epistemic status: much more theoretical than most of my writing. To avoid endless digressions, I don’t justify my centrist axioms very often. I’m happy to further discuss anything that strikes anyone as light on evidence in the comments.]

Politics, Quick Fix

A Follow-up on Brexit (or: why tinkering with 200 year old norms can backfire)

Last week I said that I’d been avoiding writing about Brexit because it was neither my monkeys nor my circus. This week, I’ll be eating those words.

I’m a noted enthusiast of the Westminster system of government, yet this week (with Theresa May’s deal failing in parliament and parliament taking control of Brexit proceedings, to uncertain ends) seems to fly in the face of everything good I’ve said about it. That impression is false; the current impasse has been caused entirely by recent ill-conceived British tinkering, not any core problems with the system itself.

As far as I can tell, the current shambles arises from three departures from the core of the Westminster system.

First, we have parliament taking control of the business of parliament in order to hold a set of indicative votes. I don’t have the sort of deep knowledge of British history that is necessary to assess whether this is unprecedented or not, but it is certainly unusual.

The majority in the house that controls the business of the house is, kind of definitionally, the government in a Westminster system. Unlike the American republican system of government, the Brits don’t really have a notion of “the government” that extends beyond whomever can command the confidence of parliament. To have parliament in some sense (although not the formal one) withdraw that confidence, without forcing a new government to be appointed by the Queen or fresh elections, is deeply unusual.

The whole point of the Westminster system is to always have a governing majority for key votes. If that breaks down, then either a new governing majority should arise, or new elections. Otherwise, you can have American-style gridlock.

This odd situation has arisen partially from the Fixed-term Parliaments Act of 2011, which severely limited the circumstances under which a sitting government can fall. Previously, all important legislation doubled as motions of confidence; defeat of any bill as strongly championed by the government as Theresa May’s Brexit bill would have resulted in new elections. Now, a motion of no confidence (which requires either a majority amending a bill to add one, or the government scheduling a motion of no confidence in itself) must pass, or 2/3 of the house must vote for an early election. This bar is considerably higher (as no government wants to go to the polls as a result of a no confidence motion), so it is much easier for a government to limp along, even when it lacks a working majority in the House of Commons.

It’s currently not clear what does have a working majority in parliament, although I suppose today’s indicative votes (where MPs will vote on a variety of Brexit proposals) will give us an idea.

Unfortunately, even if there’s a clear outcome from the indicative votes (and there’s no guarantee of that), there’s not a mechanism for enacting that. Either parliament will have to keep passing amendments every single day to take control of business from the government (which is supposed to be the entity setting business!), or the government has to buy into the outcome. If neither of those happen, the indicative votes will do nothing but encourage intransigence of those who know they have the support of many other MPs. If the rebels went to the Queen and asked to appoint a new government, this would obviously not be an issue, but MPs seem uninterested in taking that (arguably proper) step.

This all stems from the second problem, namely, that parliament is rubbish when constrained by external forces.

The way that parliament normally works is: people come up with a platform and try and get elected on it. If a majority comes from this process, then they implement the platform. They all signed off on it, after all. If there’s no clear majority, then people come up with a coalition agreement, which combines the platforms of multiple parties into some unholy mess that they can all agree to pass. In either case, the government agenda is clear.

The problem here is that there are people in each party on either side of the Brexit referendum. Some of them feel bound by the referendum results and some don’t, but even though its results were incorporated into party platforms, it still feels like a live issue to many MPs in a way that most issues in their platform just don’t.

It’s not even clear that there’s a majority of people in parliament in favour of Brexit. And when you have a government that feels bound by a promise to enact Brexit, but a parliament without a clear majority for any particular deal (or even a majority in favour of Brexit) you’re in for a bad time.

Basically “enact this referendum” and “keep 50% of the house happy” are two different goals and it is very easy to find them mutually incompatible. At this point, it becomes incredibly difficult to govern!

The third problem is Theresa May’s unwillingness to bring a different deal to the house. I get that there might not be any willingness in Europe to negotiate another deal and that she’s bound by a lot of domestic constraints, but there’s a longstanding tradition that MPs can’t vote on the same bill twice in one parliament. Australia is a rare Westminster system government that allows it, but only for bills that the senate rejects and with the caveat that a second rejection can be used to trigger an election.

This tradition exists so that the government can’t deadlock itself trying to get contentious legislation through. By ignoring it, Theresa May is showing contempt for parliament.

If, instead of standing by her bill after it had failed, she sought out some other bill that could get through parliament, she’d obviate the need for parliament to take matters into its own hands. Alternatively, if the Brexit vote had just been a confidence vote in the first place, she’d be able to ask the question of a brand-new parliament, which, if she headed it, presumably would have a popular mandate for her bill.

(And obviously if she didn’t head parliament, we wouldn’t have this particular impasse.)

By ignoring and changing so many parliamentary conventions, the UK has stripped itself of its protections from deadlock, dooming us all to this seemingly endless Brexit Purgatory. At the time of writing, the prediction market PredictIt had the odds of Brexit at less than 2% by Friday and only 50/50 by May 22. May’s own chances are even worse, with only 43% of PredictIt users confident she would still be PM by the start of July.

I hope that parliament comes to its senses and that this is the last thing I’ll feel compelled to write about Brexit. Unfortunately, I doubt that will be the case.

Model, Politics, Quick Fix

The Fifty Percent Problem

Brexit was always destined to be a shambles.

I haven’t written much about Brexit. It’s always been a bit of a case of “not my monkeys, not my circus”. And we’ve had plenty of circuses on this side of the Atlantic for me to write about.

That said, I do think Brexit is useful for illustrating the pitfalls of this sort of referendum, something I’ve taken to calling “The 50% Problem”.

To see where this problem arises from, let’s take a look at the text of several political referendums:

Should the United Kingdom remain a member of the European Union or leave the European Union? – 2016 UK Brexit Referendum

Do you agree that Québec should become sovereign after having made a formal offer to Canada for a new economic and political partnership within the scope of the bill respecting the future of Quebec and of the agreement signed on June 12, 1995? – 1995 Québec Independence Referendum

Should Scotland be an independent country? – 2014 Scottish Independence Referendum

Do you want Catalonia to become an independent state in the form of a republic? – 2017 Catalonia Independence Referendum, declared illegal by Spain.

What do all of these questions have in common?

Simple: the outcome is much vaguer than the status quo.

During the Brexit campaign, the Leave side promised people everything but the moon. During the run-up to Québec’s last independence referendum, there were promises from the sovereignist camp that Québec would be able to retain the Canadian dollar, join NAFTA without a problem, or perhaps even remain in Canada with more autonomy. In Scotland, independence campaigners promised that Scotland would be able to quickly join the EU (which, in a pre-Brexit world, Spain seemed likely to veto). The proponents of the Catalonian referendum pretended that Spain would take it seriously.

The problem with all of these referendums and their vague questions is that everyone ends up with a slightly different idea of what success will entail. While failure leads to the status quo, success could mean anything from (to use Brexit as an example) £350m/week for the NHS to Britain becoming a hermit kingdom with little external trade.

Some of this comes from assorted demagogues promising more than they can deliver. The rest of it comes from general disagreement among members of any coalition about what exactly their best-case outcome is.

Crucially, this means that getting 50% of the population to agree to a referendum does not guarantee that 50% of the population agrees on what happens next. In fact, getting barely 50% of people to agree practically guarantees that no one will agree on what happens next.

Take Brexit, the only one of the referendums I listed above that actually led to anything. While 51.9% of the UK agreed to Brexit, there is not a majority for any single actual Brexit proposal. This means that it is literally impossible to find a Brexit proposal that polls well. Anything that gets proposed is guaranteed to be opposed by all the Remainers, plus whatever percentage of the Brexiteers don’t agree with that specific form of Brexit. With only 52% of the population backing Leave, the defection of even 4% of the Brexit coalition is enough to make a proposal opposed by the majority of the citizenry of the UK.
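
The arithmetic here is worth making explicit. Below is a minimal sketch in Python; the 51.9% Leave share is the actual referendum result quoted above, while the defection rates are purely illustrative:

```python
# The "50% Problem": a referendum passes with a thin majority, but any
# specific follow-up proposal loses some of the winning coalition.

def support_for_proposal(coalition_share, defection_rate):
    """Overall support for a specific proposal, given the share of the
    electorate in the winning coalition and the fraction of that
    coalition that rejects this particular proposal."""
    return coalition_share * (1 - defection_rate)

leave_share = 0.519  # 2016 Brexit referendum result

for defection in (0.02, 0.04, 0.10):
    support = support_for_proposal(leave_share, defection)
    print(f"{defection:.0%} defection -> {support:.1%} overall support")
```

With a 51.9% coalition, losing even 4% of Leave voters (about two points of the electorate) drops a proposal below 50%, which is the source of the circular preferences described below.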

This leads to a classic case of circular preferences. Brexit is preferred to Remain, but Remain is preferred to any specific instance of Brexit.

For governing, this is an utter disaster. You can’t run a country when no one can agree on what needs to be done, and these circular preferences guarantee that anything that is tried is deeply unpopular. This is difficult for politicians, who don’t want to be voted out of office for picking wrong, but also don’t want to go back on the referendum.

There are two ways to avoid this failure mode of referendums.

The first is to finish all negotiations before using a referendum to ratify an agreement. This allows people to choose between two specific states of the world: the status quo and a negotiated agreement. It guarantees that whatever wins the referendum has majority support.

This is the strategy Canada took for the Charlottetown Accord (resulting in it failing at referendum without generating years of uncertainty) and the UK and Ireland took for the Good Friday Agreement (resulting in a successful referendum and an end to the Troubles).

The second means of avoiding the 50% problem is to use a higher threshold for success than 50% + 1. Requiring 60% or 66% of people to approve a referendum makes it far more likely that any specific proposal that follows will retain majority support.
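
A quick back-of-the-envelope calculation shows why the threshold matters. With an approval threshold t, a specific proposal keeps majority support as long as the defecting fraction d of the winning coalition satisfies t·(1 − d) > 0.5; the thresholds below are the ones mentioned above:

```python
# How much defection a winning coalition can absorb before a specific
# proposal falls below 50% overall support, as a function of the
# referendum approval threshold.

def max_tolerable_defection(threshold):
    """Largest fraction of the winning coalition that can reject a
    specific proposal while overall support stays at or above 50%."""
    return 1 - 0.5 / threshold

for t in (0.50, 0.60, 0.66):
    d = max_tolerable_defection(t)
    print(f"{t:.0%} threshold -> {d:.1%} of the coalition can defect")
```

A bare 50% threshold tolerates essentially zero defection, while a 60% threshold leaves room for roughly a sixth of the coalition to dislike any given proposal.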

This is likely how any future referendum on Québec’s independence will be decided, acknowledging the reality that many sovereignists don’t want full independence, but might vote for it as a negotiating tactic. Requiring a supermajority would prevent Québec from falling into the same pit the UK is currently in.

As the first successful major referendum in a developed country in quite some time, Brexit has demonstrated clearly the danger of referendums decided so narrowly. Hopefully other countries sit up and take notice before condemning their own nation to the sort of paralysis that has gripped Britain for the past three years.

Economics, Quick Fix

The First-Time Home Buyer Incentive is a Disaster

The 2019 Budget introduced by the Liberal government includes one of the worst policies I’ve ever seen.

The CMHC First-Time Home Buyer Incentive provides up to 10% of the purchase price of a house (5% for existing homes, 10% for new homes) to any household buying a home for the first time with an annual income up to $120,000. To qualify, the total mortgage must be less than four times the household’s yearly income and the mortgage must be insured, which means that any house costing more than $590,000 [1] is ineligible for this program. The government will recoup its 5-10% stake when the home is sold.

The cap on eligible house price is this program’s only saving grace. Everything else about it is awful.

Now I want to be clear: housing affordability is a problem, especially in urban areas. Housing costs are increasing above inflation in Canada (by about 7.5% since 2002) and many young people are finding that it is much more difficult for them to buy homes than it was for their parents and grandparents. Rising housing costs are swelling the suburbs, encouraging driving, and making the transition to a low carbon economy harder. Something needs to be done about housing affordability.

This plan is not that “something”.

This plan, like many other aspects of our society, is predicated on the idea that housing should be a “good investment”. There’s just one problem with that: for something to be a “good investment”, it must rise in price more quickly than inflation. Therefore, it is impossible for housing to be simultaneously a good investment and affordable, at least in the long term. If housing is a good investment now, it will be unaffordable for the next generation. And so on.
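
The conflict between “good investment” and “affordable” is just compound growth. Here is a minimal illustration; the starting price and real growth rates are hypothetical, chosen only to show the shape of the problem:

```python
# If housing returns beat inflation, real prices compound and each
# generation faces a higher (inflation-adjusted) price of entry.

def real_price(initial, annual_real_growth, years):
    """Inflation-adjusted price after compounding real growth."""
    return initial * (1 + annual_real_growth) ** years

price_now = 400_000  # hypothetical starting price

for growth in (0.02, 0.04):
    later = real_price(price_now, growth, 30)  # roughly one generation
    print(f"{growth:.0%} above inflation -> ${later:,.0f} in 30 years")
```

Even a modest 2% real return leaves the next generation paying nearly double in real terms; at 4%, the price more than triples. That is the long-term contradiction the post describes.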

I’m not even sure this incentive will help anyone in the short term though, because with constrained housing supply (as it is in urban areas, where zoning prevents much new housing from being built), housing costs are determined by what people can afford. As long as there are more people who would like to live in a city than houses for them to live in, people are in competition for the limited supply of housing. If you were willing to spend some amount of your salary on a house before this incentive, you can simply afford to pay more after it. You don’t end up any better off, as the money is passed on to someone else. Really, this benefit is a regressive transfer of money to already-wealthy homeowners, or a subsidy to the construction industry.

The worst part is that buying a house at an inflated valuation isn’t even irrational! As long as everyone knows that governments at all levels are committed to maintaining the status quo – where housing prices cannot be allowed to drop – housing costs will continue to rise. Why shouldn’t anyone who can afford to stick all their savings into a home do so, when they know it’s the only investment they can make that the government will protect from failing [2]?

That’s what’s truly pernicious about this plan: it locks up government money in a speculative bet on housing. Any future decline in housing costs won’t just hurt homeowners. With this incentive, it will hurt the government too [3]. This gives the federal government a strong incentive to keep housing prices high (read: unaffordable), even after some inevitable future round of austerity removes this credit. This is the opposite of what we want the federal government to be doing!

The only path towards broadly affordable housing prices is the removal of all implicit and explicit subsidies, an action that will make it clear that housing prices won’t keep rising (which will have the added benefit of ending speculation on houses, another source of unaffordability). This wouldn’t just mean scaling back policies like this one; it means that we need to get serious about zoning reform and adopt a policy like the one that has kept housing prices in Tokyo stable. Our current style of zoning is broken and accounts for an increasing percentage of housing prices in urban areas.

Zoning began as a way to enforce racial segregation. Today, it enforces not just racial, but financial segregation, forcing immigrants, the young, and everyone else who isn’t well off towards the peripheries of our cities and our societies.

Serious work towards housing affordability would strike back against zoning. This incentive provides a temporary palliative without addressing the root cause, while tying the government’s financial wellbeing to high home prices. Everyone struggling with housing affordability deserves better.


[1] Mortgage insurance is required for any down payment of less than 20%. If you have an income of $120,000 and make the largest insurable down payment (just under 20%), then the maximum mortgage of $480,000 works out to about 81% of the total price. Division tells us the total price in this case would be $592,592.59, although obviously few people will be positioned to max out the benefit. ^
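The footnote’s arithmetic can be sketched in a few lines (illustrative only; the 81% figure comes from the assumption of a just-under-20% down payment):

```python
# Back out the maximum eligible purchase price from the program's caps.
income = 120_000           # maximum qualifying household income
max_mortgage = 4 * income  # mortgage capped at four times income

# With the largest insurable down payment (just under 20%), the
# mortgage covers roughly 81% of the purchase price.
mortgage_share = 0.81
max_price = max_mortgage / mortgage_share

print(round(max_price, 2))  # 592592.59
```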

[2] Currently, the best argument against buying a home is the chance that the government will one day wake up to the crisis it is creating and withdraw some of its subsidies. It is, in general, not wise to make heavily leveraged bets that will only pay off if subsidies are left in place, but a bet on housing has so far been an exception to this rule. ^

[3] Technically, it will hurt the Canada Mortgage and Housing Corporation, but given that this is the crown corporation responsible for mortgage insurance, a decline in home prices could have left it undercapitalized to the point of needing a government rescue even before this policy was enacted. With this policy, a bailout in response to lower home prices seems even more likely. ^

Link Post

Link Post – February 2019

Shinzō Abe has made increasing the participation of women in the workforce one of the key planks in his economic recovery plan. This is complicated by the frankly bonkers amount of work that women have to do as soon as they have kids in Japan – work men often cannot help with because they are expected to be in the office for 16 hours at a time. In addition to the normal tasks parents in North America expect (cooking, cleaning, etc.), parents in Japan have to do things like launder the linens their children use at school, fill out exhaustive diaries documenting everything their children do at home, and sign off on every piece of homework. I sometimes feel like someone needs to hijack the public-address system in Japan and play “work smart not hard” on repeat for as long as it takes for the message to sink in.

A brand new Norwegian Air 737 Max 8 had to make an emergency landing in Iran right before US sanctions were reimposed. It’s been trapped in Iran ever since, because Norwegian Air needs a special State Department waiver to import replacement parts into Iran (aircraft parts are covered by the sanctions) and the State Department, like most of the US government, just spent a month shut down.

Atul Gawande just tweeted out some fascinating information about mortality in Massachusetts. In the graphs, you can see the Spanish Flu and HIV/AIDS (causing above-trend deaths in 1918 and from 1985 to 2002, respectively), as well as the recent upswing in opioid poisoning deaths (classified as injuries). Opioid poisonings seem most common among non-Hispanic whites, which has led African-American and (especially) Hispanic life expectancies to surpass white life expectancies. One troubling fact: the mortality rate for people with more than 13 years of education is a full third of that for people with only high school or less. This is true across all age cohorts.

Even if you use no Google apps, your devices will communicate with Google something like 100,000 times a week, complicating any effort to cut the technology giant out of your life.

The Council of Economic Advisers is effective because it has no official power. This means it ends up staffed by people who are really passionate about economics, instead of people passionate about political power. Economists – even economists who disagree with each other – tend to hold pretty similar positions on major issues (see, for example, the paucity of economists willing to support tariffs or occupational licensing, two popular policies), so they can present a united lobbying front and occasionally persuade presidents to favour policies that make more economic sense.

Rich people don’t always have time to go pick up their yachts. When they don’t, some lucky volunteer crew gets all their expenses paid as they sail the yacht to its owner.

Death rates are mostly going down (except for the aforementioned opioid poisonings) but one other notable exception is car-related fatalities. Experts blame the increase in deaths on SUVs and trucks, which kill a lot of pedestrians. Both have a flat front and are higher off the ground, which results in more of any impact being transferred to the body of pedestrians. SUVs and trucks are where the whole US auto market is going, so we should expect to see deaths continue to rise until self-driving cars are introduced or regulators intervene to force some sort of standards for pedestrian safety (the latter seems unlikely).

Why do trains in the US suck so much? Well, part of it is incredibly onerous safety standards, which are far stricter than those used anywhere else. Now the Federal Railroad Administration is modernizing the rules and bringing them more in line with European regulations, which should result in more economies of scale when purchasing rolling stock (making it cheaper to buy) and lighter rolling stock (which will be cheaper to run). This is a big win for “a small wonky group of urbanist writers and policy experts”.

Ethics, Model, Philosophy

Signing Up For Different Moralities

When it comes to day to day living, many people are in agreement on what is right and what is wrong. Giving change to people who ask for it, shoveling your elderly neighbour’s driveway, and turning off the lights when you’re not in the room: good. Killing, robbing, and drug trafficking: bad. Helping the police to convict mobsters who kill, steal, and traffic drugs: good.

While many moral debates can get complicated, this one rarely does. Even when helping the police involves turning on your compatriots – “snitching” – many people (although notably not the President of the United States of America) think the practice is a net good. But there’s a recent case in Australia where opinion has been rather more split. Why? Well, the informant was a lawyer – specifically, a lawyer who had worked with the accused parties. Here’s a sampling of comments from both sides:

In this case I feel it is for the greater good that human garbage like Mokbel are convicted even if the system has to be bent to do so. [1]
The job requires strict adherence to the ethical rules. If you let your dog run the house, the house gets torn apart.
The brave lady in question went above and beyond to keep Victorians safer. If these thugs are released or sentences reduced there will be uproar.
The right to an open and fair trial is a hallmark of a democratic country even if sometimes a defendant who is in fact guilty gets acquitted.

While I’m normally happy to see violent mobsters go to jail, here I must disagree with everyone who offered support for the lawyer. I think it was wrong of her to inform on her clients and correct for the high court to rebuke the police in the strongest possible terms. I certainly don’t want any of those mobsters back on the street and I hope there’s enough other evidence that none of them have to be released.

But even if some of them do end up winning their appeals, I believe we are better off in a society where lawyers cannot inform on their clients. This, I think, is one of the ethical cases where precedent utilitarianism is particularly useful in analysis and one that demonstrates its strengths as a moral philosophy.

(To briefly recap: precedent utilitarianism is the strain of utilitarian thought that emphasizes the moral weight of precedents. Precedent utilitarians don’t just consider the first-order effects of their actions on global wellbeing. They also consider what precedents their actions create and how those precedents can later be used by others, for good or ill.)

The common law legal system is premised on the belief that the burden of proof of a crime rests upon the state. If the state wishes to take away someone’s liberty, it must prove to a jury that the person committed the crime. The accused is supposed to be vigorously defended by an advocate – a lawyer or barrister – who has a legal and professional duty to defend their client to the best of their abilities.

We place the burden of proof on the government because we acknowledge that the government can be flawed. To give in to every demand it makes leads to tyranny. Only by forcing it to justify all of its actions can we ensure freedom for anyone.

(This sounds very pretty when laid out like this. In practice, we are rather less good at holding the government to account than many, including myself, would like. Especially when the defendant isn’t white. I believe part of why society fails to live up to its duty to hold the government to account is sympathies that commonly lie with police and against defendants, the very sympathies I’m arguing against holding too strongly.)

But it’s not just upon the government that we place a burden to avoid pre-judging. We require advocates to defend their clients to the best of their abilities because we are skeptical of them as well. If we let attorneys decide who deserves defending, then we have just shifted the tyranny. Attorneys can make snap judgements that aren’t borne out by the facts. They can be racist. They can be sexist. They can make mistakes. It’s only by forcing them to defend everyone, regardless of perceived innocence or guilt, that we can truly make the state do its duty.

This doesn’t mean that lawyers always have to go to trial and defend their clients in front of a judge and a jury. It could be that the best thing for a client is a guilty plea (ideally only if they are actually guilty, although that’s not always how things currently work, especially when the accused isn’t white). If a lawyer truly believes in a legal strategy (like a guilty plea) and the client refuses to listen, the attorney can always walk away and leave the trial defense to another lawyer. The important thing is that someone must defend the accused and that that someone will be ethically bound to give it their best damn shot.

Many people don’t like this. It is obviously best if every guilty person is punished in accordance with their crime. Some people trust the government to the point where they view every accused person as essentially guilty. To them, lawyers are scum who defend criminals and prevent them from being justly punished.

I view things differently. I view lawyers as people who have signed up for an alternative morality. While conventional morality holds that we should punish criminals, lawyers have signed up to defend all of their clients, even criminals, and to do their best to prevent that punishment. This is very different from the rest of us!

But it’s complementary to my (our?) morality. It is not only best if we appropriately punish those who break the law; I believe it is also best if we do so without punishing anyone who is innocent.

We cannot ask lawyers to talk to their clients, figure out whether they’re innocent or guilty, and then inform the judge about – or drop as clients – all of the truly guilty. That would only work for a short while. Then everyone would figure out that you have to lie to your attorney (or tell the truth, if you’re innocent) if you want to avoid jail. We’d then be stuck trusting the judgement of attorneys as to who is lying and who is telling the truth – judgement that could be tainted by any number of mistakes or prejudices.

In the Australian case, the attorney made a decision she wasn’t qualified to make. She, not a jury, decided her client was guilty. She doesn’t appear to be wrong here (although really, how can we tell, given that a lot of the information used in the convictions came from her and her erstwhile clients weren’t able to cross-examine her testimony) but if we don’t want a system where a random lawyer gets to decide who is guilty or not, the important thing isn’t that her testimony is true. The important thing is that she arrogated power that wasn’t hers and thereby undermined the justice system. If we let things like this stand, we enable tyranny.

The next lawyer might not be telling the truth. He may just be biased against black clients and want to feel like a hero. Or she might be locked in a payment dispute and angry with her client. We don’t know. And that should scare us away from allowing this precedent to stand. A harsh rebuke here means that the police will be unable to use any future testimony from lawyers and protects everyone in Australia from arbitrary imprisonment based on the decisions of their lawyer.

Focusing on the precedents that actions set is important. If you don’t and instead focus solely on each issue in isolation, you can miss the slow erosion of the rights and freedoms that we all rely on (or desire). Its suitability for this sort of analysis is what makes precedent utilitarianism so appealing to me. It urges us to dig deeper and try to understand why society is set up the way it is.

I think alternative moralities – distinct moral systems that people sign up for as part of their professions – are an important model for precedent utilitarians to hold. Alternative moralities encode good precedents, even when they stand in opposition to commonly held values.

We don’t just see this among lawyers. CEOs sign up for the alternative morality of fiduciary duty, which requires them to put the interests of their investors above everything but the law. Complaints about the downsides of this ignore the fact that we need companies to grow and profit if we ever want to retire [2]. Engineers sign up for an alternative, stricter morality, which holds them personally and professionally responsible for the failure of any device or structure they sign off on.

Having alternative moralities around makes public morality more complicated. It becomes harder to agree on what is right or wrong; it might be right for a lawyer to help a criminal in a way that it would be wrong for anyone else, or wrong for an engineer to make a mistake in a way that would carry no moral blame for anyone outside of the profession. These alternative moralities require us to do a deeper analysis before judging and reward us with a stronger, more resilient society when we do.


[1] Even though I disagree strenuously with this poster, I have a bit of fondness for their comment. My very first serious essay – and my interest in moral philosophy – was inspired by a similar comment. ^

[2] This isn’t just a capitalism thing. Retirement really just means delaying some consumption now in order to be able to consume more later. Consumption, the time value of goods, services, and money, and growth follow the same math whether you have central planning or free markets. Communists have to figure out how to do retirement as well, and they’re faced with the prospect of either providing less for retired people, or using tactics that would make American CEOs blush in order to drive the sort of growth necessary to support an aging retired population. ^