History, Quick Fix

Weather today fine but high waves

The Battle of the Tsushima Straits is the most underrated moment of historical importance in the 20th century.

We’ve all heard lots of different explanations for the start of the First World War. The standard ones are as follows: Europe was a mess of alliances, imperial powers treated war like a game, and one unlucky arch-duke got offed by anarchists.

Less commonly mentioned is Russia’s lack of international prestige, a situation that made it desperate for military victories at the same time it made the Central Powers contemptuous of Russia’s strength.

Russia was the first country to mobilize in 1914 (with its “period preparatory to war”) after Austria issued an ultimatum to Serbia, and it was arguably this mobilization that set the stage for a continent-spanning war.

Why was Russia so desperate and the Central Powers so unworried?

Well, over 24 hours on May 27/28th, 1905, Russia went from the 3rd most powerful naval nation in the world to one that could barely have hoped to defeat the Austro-Hungarian Empire at sea (which doesn’t sound so bad, until you remember that Austria-Hungary had no blue water harbours and never really had any overseas colonies). This wrecked Russian prestige.

What destroyed the Russian fleet so thoroughly?

Admiral Tōgō and the Imperial Japanese fleet.

In the Battle of the Tsushima Straits, Admiral Tōgō defeated the Russian fleet, sinking or capturing eleven battleships and twenty-seven other ships – practically every Russian naval vessel engaged – at the cost of three torpedo boats (the smallest and cheapest ships used in early 20th century naval combat).

This lopsided victory was the first time a European power was conclusively beaten by an Asian one in an even battle since the Mongol general Subutai razed Hungary and smashed the armies of Poland in the 1200s.

Victory galvanized Japan. Barely fifty years before the battle, Japan had been forced open at gunpoint by Commodore Perry’s Black Ships. Shortly after this, western powers forced Japan, like China before it, to sign unequal treaties. Victory at the Battle of Tsushima showed that this era was clearly over. Japan was now a great power.

This is why I could claim that the Battle of the Tsushima Straits is the most underrated moment of historical importance in the 20th century. Not only did Russia’s defeat sow some of the seeds of the First World War; Japan’s victory also set the stage for Japan’s participation in the Second World War.

Admiral Tōgō’s message to Tokyo on the day of the battle, “In response to the warning that enemy ships have been sighted, the Combined Fleet will immediately commence action and attempt to attack and destroy them. Weather today fine but high waves.”, became, especially its final sentence, as important to the Japanese Navy as Nelson’s signal before Trafalgar (“England expects that every man will do his duty”) was to the British.

With such a lopsided victory under their belt, the Imperial Japanese Navy began to believe that they were invincible. They quickly became promoters of militarism and conquest.

As America began to act to check Japanese dominance in the Pacific and prevent Japan from entirely colonizing China, the Japanese Navy decided that America had to be defeated. This led to Japan taking Germany’s side in the Second World War, to Pearl Harbor, and eventually to the American occupation of Japan.

Had the Battle of the Tsushima Strait instead been a bloody stalemate, Japan might have risen less quickly and more cautiously. Russia might not have started the First World War when it did, nor succumbed to a revolution when exhausted by that same war. The Soviet Union might never have risen. Both World Wars might have happened differently, or not at all.

This is not even to mention that British naval observers at the battle used what they learned in the construction of HMS Dreadnought, the battleship that started a new naval arms race.

There’s too much that spilled from all of these events to predict if the world would be better or worse if Tōgō hadn’t won in 1905, but it certainly would have been different.

Today is a good day to reflect on how this single battle, the only truly decisive clash of battleship fleets ever fought, helped to shape so much of the modern world. If this single moment, unknown to so many, shaped so much of what came later, what other key moments are we ignorant of? What other desperate struggles and last second decisions shaped this baffling world of ours?

History doesn’t just belong to the victors. It belongs to those who are remembered. Today, I’d like to remind you that even if events fall from history and aren’t remembered, they can still shape it.

Economics, Politics, Quick Fix

Against Degrowth

Degrowth is the political platform that holds our current economic growth as unsustainable and advocates for a radical reduction in our resource consumption. Critically, it rejects that this reduction can occur at the same time as our GDP continues to grow. Degrowth, per its backers, requires an actual contraction of the economy.

The Canadian New Democratic Party came perilously close to being taken over by advocates of degrowth during its last leadership race, which goes to show just how much leftist support the movement has gained since its debut in 2008.

I believe that degrowth is one of the least sensible policies being advocated for by elements of the modern left. This post collects my three main arguments against degrowth in a package that is easy to link to in other online discussions.

To my mind, advocates of degrowth fail to offer a positive vision of the transition to a less environmentally intensive economy. North America is already experiencing a resurgence in forest cover, and land devoted to agriculture worldwide has been stable for the past 15 years (and will probably begin to decline by 2050), even as arable land use per person continues to decrease. In Canada, CO2 emissions per capita peaked in 1979, forty years ago. Total CO2 emissions peaked in 2008, and CO2 emissions per $ of GDP have been falling continuously since 1990.

All of this is evidence of an economy slowly shifting away from stuff. For an economy to grow as people turn away from stuff, they have to consume something else, which for consumers often means services and experiences. Instead of degrowth, I think we should accelerate this process.

It is very possible to have GDP growth while rapidly decarbonizing an economy. This simply looks like people shifting their consumption from things (e.g. cars, big houses) towards experiences (locally sourced dinners, mountain biking their local trails). We can accelerate this switch by “internalizing the externality” that carbon presents, which is a fancy way of saying “imposing a tax on carbon”. Global warming is bad, and when we actually make people pay its cost as part of the price tag of what they consume, they change their consumption habits. Higher gas prices, for example, tend to push consumers away from SUVs.
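As a minimal sketch of what “internalizing the externality” means mechanically (the carbon price and emissions figures below are hypothetical, chosen only for illustration):

```python
# A carbon tax folds the climate cost of a good into its sticker price,
# in proportion to the emissions embodied in it. All figures hypothetical.
CARBON_PRICE = 50.0  # assumed $ per tonne of CO2

def price_with_carbon_tax(sticker_price, tonnes_co2):
    """Consumer price once the carbon externality is priced in."""
    return sticker_price + CARBON_PRICE * tonnes_co2

# A carbon-intensive good gets relatively more expensive than a clean one,
# nudging consumption toward low-carbon alternatives.
print(price_with_carbon_tax(100.0, 0.5))  # 125.0
print(price_with_carbon_tax(100.0, 0.0))  # 100.0
```

The point is only the relative price shift: the tax changes which of two otherwise similar purchases looks cheaper.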

A responsible decarbonisation push emphasises and supports growth in local service industries to make up for the loss of jobs in manufacturing and resource extraction. There’s a lot going for these jobs too; many of them give much more autonomy than manufacturing jobs (a strong determinant of job satisfaction) and they are, by their nature, rooted in local communities and hard to outsource.

(There are, of course, also many new jobs in clean energy that a decarbonizing and de-intensifying economy will create).

If, instead of pushing the economy towards a shift in how money is spent, you are pushing for an overall reduction in GDP, you are advocating for a decrease in industrial production without replacing it with anything. This is code for “decreasing standards of living”, or more succinctly, “a recession”. That is, after all, what we call a period of falling GDP.

This, I think, is the biggest problem with advocating degrowth. Voters are liable to punish governments even for recessions that aren’t their fault. If a government deliberately causes a recession, the backlash will be fierce. It seems likely that there is no way to continue the process of degrowth by democratic means once it is started.

This leaves two bad options: give over the reins of power to a government that will be reflexively committed to opposing environmentalists, or seize power by force. I hope that it is clear that both of these outcomes to a degrowth agenda would be disastrous.

Advocates of degrowth call my suggestions unrealistic, or outside of historical patterns. But this is clearly not the case; I’ve cited extensive historical data that shows an ongoing trend towards decarbonisation and de-intensification, both in North America and around the world. What is more unrealistic: to believe that the government can intensify an existing trend, or to believe that a government could be elected on a platform of triggering a recession? If anyone is guilty of pie-in-the-sky thinking here, it is not me.

Degrowth steals activist energy from sensible, effective policy positions (like a tax on carbon) that are politically attainable and likely to lead to a prosperous economy. Degrowth, as a policy, is especially easy for conservatives to dismiss and unwittingly aids them in their attempts to create a false dichotomy between environmental protection and a thriving economy.

It’s for these three reasons (the possibility of building thriving low carbon economies, the democratic problem, and the false dichotomy degrowth sets up) that I believe reasonable people have a strong responsibility to argue against degrowth, whenever it is advocated.

(For a positive alternative to degrowth, I personally recommend ecomodernism, but there are several good alternatives.)

Model, Politics, Quick Fix

The Nixon Problem

Richard Nixon would likely have gone down in history as one of America’s greatest presidents, if not for Watergate.

To my mind, his greatest successes were rapprochement with China and the end of the convertibility of dollars into gold, but he also deserves kudos for ending the war in Vietnam, continuing the process of desegregation, establishing the EPA, and signing the anti-ballistic missile treaty.

Nixon was willing to try unconventional solutions and shake things up. He wasn’t satisfied with leaving things as they were. This is, in some sense, a violation of political norms.

When talking about political norms, it’s important to separate them into their two constituent parts.

First, there are the norms of policy. These are the standard terms of the debate. In some countries, they may look like a (semi-)durable centrist consensus. In others they may require accepting single-party rule as a given.

Second are the norms that constrain the behaviour of people within the political system. They may forbid bribery, or self-dealing, or assassinating your political opponents.

I believe that the first set of political norms are somewhat less important than the second. The terms of the debate can be wrong, or stuck in a local maximum, such that no simple tinkering can improve the situation. Having someone willing to change the terms of the debate and try out bold new ideas can be good.

On the other hand, it is rarely good to overturn existing norms of political behaviour. Many of them came about only through decades of careful struggle, as heroic activists have sought to place reasonable constraints on the behaviour of the powerful, lest they rule as tyrants or pillage as oligarchs.

The Nixon problem, as I’ve taken to describing it, is that it’s very, very hard to find a politician who can shake up the political debate without at the same time shaking up our much more important political norms.

Nixon didn’t have to cheat his way to re-election. He won the popular vote by the highest absolute margin ever, some 18 million votes. He carried 49 out of 50 states, losing only Massachusetts (and the District of Columbia).

Now it is true that Nixon used dirty tricks to face McGovern instead of Muskie and perhaps his re-election fight would have been harder against Muskie.

Still, given Muskie’s campaign was so easily derailed by the letter Nixon’s “ratfuckers” forged, it’s unclear how well he would have done in the general election.

And if Muskie was the biggest threat to Nixon, there was no need to bug the DNC headquarters at the Watergate complex after his candidacy had been destroyed. Yet Nixon and his team still ordered this done.

I don’t think it’s possible to get the Nixon who was able to negotiate with China without the Nixon who violated political norms for no reason at all. Both were part and parcel of an overriding belief that he knew better than everyone else and that all that mattered was power for himself. Regardless, it is clear from Watergate that his ability to think outside of the current consensus was not something he could just turn off. Nixon is not alone in this.

One could imagine a hypothetical Trump (perhaps a Trump that listened to Peter Thiel more) who engaged mostly in well-considered but outside-of-the-political-consensus policies. This Trump would have loosened FDA policies that give big pharma an unfair advantage, ended the mortgage tax deduction, and followed up his pressure on North Korea with some sort of lasting peace deal, rather than ineffective admiration of a monster.

The key realization about this hypothetical Trump is that, other than his particular policy positions, he’d be no different. He’d still idolize authoritarian thugs, threaten to lock up his political opponents, ignore important government departments, and surround himself with frauds and grifters.

I believe that it’s important to think about how the features of different governments encourage different people to rise to the top. If a system of government requires any leader to first be a general, then it will be cursed with rigid leaders who expect all orders to be followed to the letter. If it instead rewards lying, then it’ll be cursed with politicians who go back on every promise.

There’s an important corollary to this: if you want a specific person to rule because of something specific about their character, you should not expect them to be able to turn it off.

Justin Trudeau cannot stop with the platitudes, even when backed into a corner. Donald Trump cannot stop lying, even when the truth is known to everyone. Richard Nixon couldn’t stop ignoring the normal way things were done in Washington, even when the normal way existed for a damn good reason.

This, I think, is the biggest mistake people like Peter Thiel made when backing Trump. They saw a lot of problems in Washington and correctly concluded that no one who was steeped in the ways of Washington would correct them. They decided that the only way forward was to find someone brash, who wouldn’t care about how things were normally done.

But they didn’t stop and think how far that attitude would extend.

Whenever someone tells you that a bold outsider is just what a system needs, remember that a Nixon who never did Watergate couldn’t have gone to China. If you back a new Nixon, you had better be ready for a reprise.

Model, Philosophy, Quick Fix

Post-modernism and Political Diversity

I was reading a post-modernist critique of capitalist realism – the resignation to capitalism as the only practical way to organize a society, arising out of the failure of the Soviet Union – and I was struck by something interesting about post-modernism.

Insofar as post-modernism stands for anything, it is a critique of ideology. Post-modernism holds that there is no privileged lens with which to view the world; that even empiricism is suspect, because it too has a tendency to reproduce and reify the power structures in which it exists.

A startling thing then, is the sterility of the post-modernist political landscape. It is difficult to imagine a post-modernist who did not vote for Bernie Sanders or Jill Stein. Post-modernism is solely a creature of the left and specifically that part of the left that rejects the centrist compromise beloved of the incrementalist or market left.

There is a fundamental conflict between post-modernism’s self-proclaimed positioning as an ideology without an ideology – the only ideology conscious of its own construction – and its lack of political diversity.

Most other ideologies are tolerant of political divergence. Empiricists are found in practically every political party (the normal exception being parties controlled by populists) because empiricism comes with few built-in moral commitments and politics is as much about what should be as what is. Devout Catholics also find themselves split among political parties, as they balance the social justice and social order messages of their religion. You will even, I would bet, find more evangelicals in the Democratic party than you will find post-modernists in the Republican party (although perhaps this would just be an artifact of their relative population sizes).

Even neoliberals and economists, the favourite target of post-modernists, find their beliefs cash out to a variety of political positions, from anarcho-capitalism or left-libertarianism to main-street republicanism.

It is hard to square the narrowness of post-modernism’s political commitments with its anti-ideological intellectual commitments. Post-modernism positions itself in communion with the Real, that which “any [constructed, as through empiricism] ‘reality’ must suppress”. Yet the political commitments it makes require us to believe that the Real is in harmony with very few political positions.

If this were the actual position of post-modernism, then it would be vulnerable to a post-modernist critique. Why should a narrow group of relatively privileged academics in relatively privileged societies have a monopoly on the correct means of political organization? Certainly, if economics professors banded together to claim they had discovered the only means of political organization and the only allowable set of political beliefs, post-modernists would be ready with that spiel. Why then, should they be exempt?

If post-modernism instead does not believe it has found a deeper Real, then it must grapple with its narrow political attractions. Why should we view it as anything but a justification for a certain set of policy proposals, popular among its members but not necessarily elsewhere?

I believe there is value in understanding that knowledge is socially constructed, but I think post-modernism, by denying any underlying physical reality (in favour of a metaphysical Real), removes itself from any sort of feedback loop that could check its own impulses (contrast: empiricism). And so, things that are merely fashionable among its adherents become de facto part of its ideology. This is troubling, because the very virtue of post-modernism is supposed to be its ability to introspect and examine the construction of ideology.

This paucity of political diversity makes me inherently skeptical of any post-modernist identified Real. Absent significant political diversity within the ideological movement, it’s impossible to separate an intellectually constructed Real from a set of political beliefs popular among liberal college professors.

And “liberal college professors like it” just isn’t a real political argument.

Politics, Quick Fix

A Follow-up on Brexit (or: why tinkering with 200-year-old norms can backfire)

Last week I said that I’d been avoiding writing about Brexit because it was neither my monkeys nor my circus. This week, I’ll be eating those words.

I’m a noted enthusiast of the Westminster system of government, yet this week (with Theresa May’s deal failing in parliament and parliament taking control of Brexit proceedings, to uncertain ends) seems to fly in the face of everything good I’ve said about it. That impression is false; the current impasse has been caused entirely by recent ill-conceived British tinkering, not by any core problem with the system itself.

As far as I can tell, the current shambles arises from three departures from the core of the Westminster system.

First, we have parliament taking control of the business of parliament in order to hold a set of indicative votes. I don’t have the sort of deep knowledge of British history that is necessary to assess whether this is unprecedented or not, but it is certainly unusual.

The majority in the house that controls the business of the house is, kind of definitionally, the government in a Westminster system. Unlike the American republican system of government, the Brits don’t really have a notion of “the government” that extends beyond whoever can command the confidence of parliament. To have parliament in some sense (although not the formal one) withdraw that confidence, without forcing either a new government appointed by the Queen or fresh elections, is deeply unusual.

The whole point of the Westminster system is to always have a governing majority for key votes. If that breaks down, then either a new governing majority should arise, or new elections. Otherwise, you can have American-style gridlock.

This odd situation has arisen partially from the Fixed-term Parliaments Act 2011, which severely limited the circumstances under which a sitting government can fall. Previously, all important legislation doubled as a motion of confidence; the defeat of any bill as strongly championed by the government as Theresa May’s Brexit bill would have resulted in new elections. Now, a motion of no confidence must pass (which requires either a majority willing to amend a bill to add one, or a government willing to schedule a confidence vote in itself), or two-thirds of the house must vote for an early election. This bar is considerably higher (as no government wants to go to the polls as the result of a no-confidence motion), so it is much easier for a government to limp along, even when it lacks a working majority in the House of Commons.

It’s currently not clear what does have a working majority in parliament, although I suppose today’s indicative votes (where MPs will vote on a variety of Brexit proposals) will give us an idea.

Unfortunately, even if there’s a clear outcome from the indicative votes (and there’s no guarantee of that), there’s no mechanism for enacting it. Either parliament will have to keep passing amendments every single day to take control of business from the government (which is supposed to be the entity setting business!), or the government has to buy into the outcome. If neither of those happens, the indicative votes will do nothing but encourage intransigence among those who know they have the support of many other MPs. If the rebels went to the Queen and asked to appoint a new government, this would obviously not be an issue, but MPs seem uninterested in taking that (arguably proper) step.

This all stems from the second problem, namely, that parliament is rubbish when constrained by external forces.

The way that parliament normally works is: people come up with a platform and try and get elected on it. If a majority comes from this process, then they implement the platform. They all signed off on it, after all. If there’s no clear majority, then people come up with a coalition agreement, which combines the platforms of multiple parties into some unholy mess that they can all agree to pass. In either case, the government agenda is clear.

The problem here is that there are people in each party on either side of the Brexit referendum. Some of them feel bound by the referendum results and some don’t, but even though its results were incorporated into party platforms, it still feels like a live issue to many MPs in a way that most issues in their platform just don’t.

It’s not even clear that there’s a majority of people in parliament in favour of Brexit. And when you have a government that feels bound by a promise to enact Brexit, but a parliament without a clear majority for any particular deal (or even a majority in favour of Brexit) you’re in for a bad time.

Basically, “enact this referendum” and “keep 50% of the house happy” are two different goals, and it is very easy to find them mutually incompatible. At this point, it becomes incredibly difficult to govern!

The third problem is Theresa May’s unwillingness to bring another deal to the house. I get that there might not be any willingness in Europe to negotiate another deal and that she’s bound by a lot of domestic constraints, but there’s a longstanding tradition that MPs can’t vote on the same bill twice in one parliament. Australia is a rare Westminster-system government that allows it, but only for bills that the senate rejects, and with the caveat that a second rejection can be used to trigger an election.

This tradition exists so that the government can’t deadlock itself trying to get contentious legislation through. By ignoring it, Theresa May is showing contempt for parliament.

If, instead of standing by her bill after it had failed, she sought out some other bill that could get through parliament, she’d obviate the need for parliament to take matters into its own hands. Alternatively, if the Brexit vote had just been a confidence vote in the first place, she’d be able to ask the question of a brand-new parliament, which, if she headed it, presumably would have a popular mandate for her bill.

(And obviously if she didn’t head parliament, we wouldn’t have this particular impasse.)

By ignoring and changing so many parliamentary conventions, the UK has stripped itself of its protections from deadlock, dooming us all to this seemingly endless Brexit Purgatory. At the time of writing, the prediction market PredictIt had the odds of Brexit at less than 2% by Friday and only 50/50 by May 22. May’s own chances are even worse, with only 43% of PredictIt users confident she would still be PM by the start of July.

I hope that parliament comes to its senses and that this is the last thing I’ll feel compelled to write about Brexit. Unfortunately, I doubt that will be the case.

Model, Politics, Quick Fix

The Fifty Percent Problem

Brexit was always destined to be a shambles.

I haven’t written much about Brexit. It’s always been a bit of a case of “not my monkeys, not my circus”. And we’ve had plenty of circuses on this side of the Atlantic for me to write about.

That said, I do think Brexit is useful for illustrating the pitfalls of this sort of referendum, something I’ve taken to calling “The 50% Problem”.

To see where this problem arises from, let’s take a look at the text of several political referendums:

Should the United Kingdom remain a member of the European Union or leave the European Union? – 2016 UK Brexit Referendum

Do you agree that Québec should become sovereign after having made a formal offer to Canada for a new economic and political partnership within the scope of the bill respecting the future of Quebec and of the agreement signed on June 12, 1995? – 1995 Québec Independence Referendum

Should Scotland be an independent country? – 2014 Scottish Independence Referendum

Do you want Catalonia to become an independent state in the form of a republic? – 2017 Catalonia Independence Referendum, declared illegal by Spain.

What do all of these questions have in common?

Simple: the outcome is much vaguer than the status quo.

During the Brexit campaign, the Leave side promised people everything but the moon. During the run-up to Québec’s last independence referendum, the sovereignist camp promised that Québec would be able to retain the Canadian dollar, join NAFTA without a problem, or perhaps even remain in Canada with more autonomy. In Scotland, leave campaigners promised that Scotland would be able to quickly join the EU (which, in a pre-Brexit world, Spain seemed likely to veto). The proponents of the Catalonian referendum pretended that Spain would take it seriously at all.

The problem with all of these referendums and their vague questions is that everyone ends up with a slightly different idea of what success will entail. While failure leads to the status quo, success could mean anything from (to use Brexit as an example) £350m/week for the NHS to Britain becoming a hermit kingdom with little external trade.

Some of this comes from assorted demagogues promising more than they can deliver. The rest of it comes from general disagreement among members of any coalition about what exactly their best-case outcome is.

Crucially, this means that getting 50% of the population to agree to a referendum does not guarantee that 50% of the population agrees on what happens next. In fact, getting barely 50% of people to agree practically guarantees that no one will agree on what happens next.

Take Brexit, the only one of the referendums I listed above that actually led to anything. While 51.9% of the UK agreed to Brexit, there is not a majority for any single actual Brexit proposal. This means that it is literally impossible to find a Brexit proposal that polls well. Anything that gets proposed is guaranteed to be opposed by all the Remainers, plus whatever percentage of the Brexiteers don’t agree with that specific form of Brexit. With only 52% of the population backing Leave, the defection of even 4% of the Brexit coalition is enough to make a proposal opposed by the majority of the citizenry of the UK.
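The arithmetic here can be sketched in a few lines (the 51.9% Leave share is from the referendum result above; the defection rate is a hypothetical):

```python
# Illustrative arithmetic for the 50% problem. The 51.9% Leave share is the
# actual 2016 result; the 4% defection rate is hypothetical.
LEAVE_SHARE = 0.519

def support_for_proposal(defection_rate):
    """Overall support for one specific Brexit proposal, assuming all
    Remainers oppose it and some fraction of Leavers defect."""
    return LEAVE_SHARE * (1 - defection_rate)

print(round(support_for_proposal(0.00), 3))  # 0.519: the referendum itself
print(round(support_for_proposal(0.04), 3))  # 0.498: already a minority
```

With a margin this thin, losing even one Leaver in twenty-five puts any concrete proposal underwater.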

This leads to a classic case of circular preferences. Brexit is preferred to Remain, but Remain is preferred to any specific instance of Brexit.

For governing, this is an utter disaster. You can’t run a country when no one can agree on what needs to be done, but these circular preferences guarantee that anything that is tried is deeply unpopular. This is difficult for politicians, who don’t want to be voted out of office for picking wrong, but also don’t want to go back on the referendum.

There are two ways to avoid this failure mode of referendums.

The first is to finish all negotiations before using a referendum to ratify an agreement. This allows people to choose between two specific states of the world: the status quo and a negotiated agreement. It guarantees that whatever wins the referendum has majority support.

This is the strategy Canada took for the Charlottetown Accord (resulting in it failing at referendum without generating years of uncertainty) and the UK and Ireland took for the Good Friday Agreement (resulting in a successful referendum and an end to the Troubles).

The second means of avoiding the 50% problem is to use a higher threshold for success than 50% + 1. Requiring 60% or 66% of people to approve a referendum makes it far more likely that any specific proposal implementing the result will still command majority support.

This is likely how any future referendum on Québec’s independence will be decided, acknowledging the reality that many sovereignists don’t want full independence, but might vote for it as a negotiating tactic. Requiring a supermajority would prevent Québec from falling into the same pit the UK is currently in.

As the first successful major referendum in a developed country in quite some time, Brexit has demonstrated clearly the danger of referendums decided so narrowly. Hopefully other countries sit up and take notice before condemning their own nation to the sort of paralysis that has gripped Britain for the past three years.

Economics, Quick Fix

The First-Time Home Buyer Incentive is a Disaster

The 2019 Budget introduced by the Liberal government includes one of the worst policies I’ve ever seen.

The CMHC First-Time Home Buyer Incentive provides up to 10% of the purchase price of a house (5% for existing homes, 10% for new homes) to any household buying a home for the first time with an annual income up to $120,000. To qualify, the total mortgage must be less than four times the household’s yearly income and the mortgage must be insured, which means that any house costing more than $590,000 [1] is ineligible for this program. The government will recoup its 5-10% stake when the home is sold.

The cap on eligible house price is this program’s only saving grace. Everything else about it is awful.

Now I want to be clear: housing affordability is a problem, especially in urban areas. Housing costs are increasing above inflation in Canada (by about 7.5% since 2002) and many young people are finding that it is much more difficult for them to buy homes than it was for their parents and grandparents. Rising housing costs are swelling the suburbs, encouraging driving, and making the transition to a low carbon economy harder. Something needs to be done about housing affordability.

This plan is not that “something”.

This plan, like many other aspects of our society, is predicated on the idea that housing should be a “good investment”. There’s just one problem with that: for something to be a “good investment”, it must rise in price more quickly than inflation. Therefore, it is impossible for housing to be simultaneously a good investment and affordable, at least in the long term. If housing is a good investment now, it will be unaffordable for the next generation. And so on.
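A couple of lines of arithmetic make this tension concrete. The 2% figure here is an illustrative assumption, not a number from the budget: it stands in for any return that beats inflation.

```python
# Illustrative assumption: housing "beats inflation" by 2% a year,
# i.e. it grows 2% annually in real (inflation-adjusted) terms.
real_growth = 0.02

price = 1.0  # today's price, in real terms
years = 0
while price < 2.0:  # count years until the real price doubles
    price *= 1 + real_growth
    years += 1

print(years)  # 36 – within roughly one generation, a "good investment"
              # becomes a house that costs twice as much in real terms
```

Even a modest real return doubles real prices in about a generation; a better return just gets there faster.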

I’m not even sure this incentive will help anyone in the short term, though, because with constrained housing supply (as in urban areas, where zoning prevents much new housing from being built), housing costs are determined by what people can afford. As long as more people would like to live in a city than there are houses for them to live in, people are in competition for the limited supply of housing. If you were willing to spend some amount of your salary on a house before this incentive, you can simply afford to pay more after it. You don’t end up any better off; the money is passed on to someone else. Really, this benefit is a regressive transfer of money to already-wealthy homeowners, or a subsidy to the construction industry.

The worst part is that buying a house at an inflated valuation isn’t even irrational! As long as everyone knows that governments at all levels are committed to maintaining the status quo – where housing prices cannot be allowed to drop – housing costs will continue to rise. Why shouldn’t anyone who can afford to stick all their savings into a home do so, when they know it’s the only investment they can make that the government will protect from failing [2]?

That’s what’s truly pernicious about this plan: it locks up government money in a speculative bet on housing. Any future decline in housing costs won’t just hurt homeowners. With this incentive, it will hurt the government too [3]. This gives the federal government a strong incentive to keep housing prices high (read: unaffordable), even after some inevitable future round of austerity removes this credit. This is the opposite of what we want the federal government to be doing!

The only path towards broadly affordable housing prices is the removal of all implicit and explicit subsidies, an action that will make it clear that housing prices won’t keep rising (which will have the added benefit of ending speculation on houses, another source of unaffordability). This wouldn’t just mean scaling back policies like this one; it means that we need to get serious about zoning reform and adopt a policy like the one that has kept housing prices in Tokyo stable. Our current style of zoning is broken and accounts for an increasing percentage of housing prices in urban areas.

Zoning began as a way to enforce racial segregation. Today, it enforces not just racial, but financial segregation, forcing immigrants, the young, and everyone else who isn’t well off towards the peripheries of our cities and our societies.

Serious work towards housing affordability would strike back against zoning. This incentive provides a temporary palliative without addressing the root cause, while tying the government’s financial wellbeing to high home prices. Everyone struggling with housing affordability deserves better.

Footnotes

[1] Mortgage insurance is required for any down payment less than 20%. If you have an income of $120,000 and you max out the down payment, then the mortgage of $480,000 would be about 81% of the total price. Division tells us the total price in this case would be $592,592.59, although obviously few people will be positioned to max out the benefit. ^
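For the curious, footnote [1]’s arithmetic can be sketched in a few lines. The 81% mortgage share is the footnote’s own assumption about the down payment, not a figure from the program itself:

```python
# Sketch of footnote [1]'s arithmetic. Assumptions, per the footnote:
# the mortgage is capped at four times household income, and makes up
# about 81% of the purchase price (a down payment just under the 20%
# threshold at which mortgage insurance stops being required).
income = 120_000
max_mortgage = 4 * income      # $480,000
mortgage_share = 0.81          # the footnote's assumption

max_price = max_mortgage / mortgage_share
print(round(max_price, 2))     # 592592.59 – hence the ~$590,000 cap
```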

[2] Currently, the best argument against buying a home is the chance that the government will one day wake up to the crisis it is creating and withdraw some of its subsidies. It is, in general, not wise to make heavily leveraged bets that will only pay off if subsidies are left in place, but a bet on housing has so far been an exception to this rule. ^

[3] Technically, it will hurt the Canada Mortgage and Housing Corporation, but given that this is the crown corporation responsible for mortgage insurance, a decline in home prices could have left it undercapitalized to the point of needing a government rescue even before this policy was enacted. With this policy, a bailout in response to lower home prices seems even more likely. ^

Model, Quick Fix

When QALYs Are Wrong – Thoughts on the Gates Foundation

Every year, I check in to see if we’ve eradicated polio or guinea worm yet. Disease eradications are a big deal. We’ve only successfully eradicated one disease – smallpox – so being so close to wiping out two more is very exciting.

Still, when I looked at how many resources were committed to polio eradication (especially by the Gates Foundation), I noticed they seemed incongruent with its effects. No polio eradication effort can be found among GiveWell’s top charities, because it is currently rather expensive to prevent polio. The number of quality-adjusted life years (QALYs, a common measure of charity effectiveness used in the Effective Altruism community) you can save with a donation to preventing malaria is simply higher than for polio.

I briefly wondered if it might not be better for all of the effort going to polio eradication to instead go to anti-malaria programs. After thinking some more, I’ve decided that this would be a grave mistake. Since I haven’t seen why explained anywhere else, I figured I’d share my thinking, so that anyone else having the same thought can see it.

A while back, it was much cheaper to buy QALYs using the polio vaccines. As recently as 1988, there were more than 350,000 cases of polio every year. It’s a testament to the excellent work of the World Health Organization and its partners that polio has become so much rarer – and therefore so much more expensive to prevent each new case of. After all, when there are few new cases, you can’t prevent thousands.

It is obviously very good that there are few cases of polio. If we decided that this was good enough and diverted resources towards treating other diseases, we might quickly find that this would no longer be the case. Polio could once again become a source of easy QALY improvements – because it would be running rampant in unvaccinated populations. When phrased this way, I hope it’s clear that polio becoming a source of cheap QALY improvements isn’t a good thing; the existence of cheap QALY improvements means that we’ve dropped the ball on a potentially stoppable disease.

If polio is eradicated for good, we can stop putting any effort into fighting it. We won’t need any more polio vaccines or any more polio monitoring. It’s for this reason that we’re much better off if we finish the eradication effort.

What I hadn’t realized was that a simple focus on present QALYs obscures the potential effects our actions can have on future QALYs. Abandoning diseases until treatments for them save many lives cheaply might look good for our short term effectiveness, but in the long term, the greatest gains come from following through with our eradication efforts, so that we can repurpose all resources from an eradicated disease to the fight against another, forever.

Falsifiable, Physics, Quick Fix

Pokémon Are Made of Styrofoam

One of the best things about taking physics classes is that the equations you learn are directly applicable to the real world. Every so often, while reading a book or watching a movie, I’m seized by the sudden urge to check it for plausibility. A few scratches on a piece of paper later and I will generally know one way or the other.

One of the most amusing things I’ve found doing this is that the people who come up with the statistics for Pokémon definitely don’t have any sort of education in physics.

Take Onix. Onix is a rock/ground Pokémon renowned for its large size and sturdiness. Its physical statistics reflect this. It’s 8.8 metres (28′) long and weighs 210kg (463lbs).

Onix, being tough. I don’t own the copyright to this image, but I’m claiming fair use for purpose of criticism. Source.

Surely such a large and tough Pokémon should be very, very dense, right? Density is such an important tactile cue for us. Don’t believe me? Pick up a large piece of solid metal. Its surprising weight will make you take it seriously.

Let’s check if Onix would be taken seriously, shall we? Density is equal to mass divided by volume. We use the symbol ρ to represent density, which gives us the following equation:

ρ = m / V

We already know Onix’s mass. Now we just need to calculate its volume. Luckily, Onix is pretty cylindrical, so we can approximate it with a cylinder. The equation for the volume of a cylinder is pretty simple:

V = πr²h

Where π is the ratio between the circumference of a circle and its diameter (approximately 3.1415…, no matter what Indiana says), r is the radius of the cylinder (always one half the diameter), and h is the height of the cylinder.

Given that we know Onix’s height, we just need its diameter. Luckily the Pokémon TV show gives us a sense of scale.

Here’s a picture of Onix. Note the kid next to it for scale. I don’t own the copyright to this image, but I’m claiming fair use for purpose of criticism. Source.

Judging by the image, Onix probably has an average diameter somewhere around a metre (3 feet for the Americans). This means Onix has a radius of 0.5 metres and a height of 8.8 metres. When we put these into our equation, we get:

V = π × (0.5 m)² × 8.8 m

For a volume of approximately 6.9m³. To get a comparison, I turned to Wolfram Alpha, which told me that this is about 40% of the volume of a gray whale or a freight container (which incidentally implies that gray whales are about the size of standard freight containers).

Armed with a volume, we can calculate a density.

ρ = 210 kg ÷ 6.9 m³ ≈ 30.4 kg/m³

Okay, so we know that Onix’s density is 30.4 kg/m³, but what does that mean?

Well, it’s currently hard to compare. I’m much more used to seeing densities of sturdy materials expressed in tonnes per cubic metre or grams per cubic centimetre than I am seeing them expressed in kilograms per cubic metre. Luckily, it’s easy to convert between these.

There are 1,000 kilograms in a tonne. If we divide our density by a thousand, we get a new density for Onix of 0.0304t/m³.
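The whole back-of-the-envelope calculation above fits in a few lines of Python, if you’d like to check it or swap in a different Pokémon:

```python
import math

# Approximate Onix as a cylinder, using the article's figures.
mass = 210.0    # kg
height = 8.8    # m – Onix's length, used as the cylinder's height
radius = 0.5    # m – assuming an average diameter of about a metre

volume = math.pi * radius**2 * height   # V = πr²h
density = mass / volume                 # kg/m³
density_tonnes = density / 1000         # t/m³

print(round(volume, 1))          # 6.9
print(round(density, 1))         # 30.4
print(round(density_tonnes, 4))  # 0.0304
```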

How does this fit in with common materials, like wood, Styrofoam, water, stone, and metal?

Material     Density (t/m³)
Styrofoam    0.028
Onix         0.03
Balsa        0.16
Oak [1]      0.65
Water        1
Granite      2.6
Steel        7.9

From this chart, you can see that Onix’s density is eerily close to Styrofoam’s. Even the notoriously light balsa wood is five times denser than it. Actual rock is about 85 times denser. If Onix were made of granite, it would weigh 18 tonnes, much heavier than even Snorlax (the heaviest of the original Pokémon at 460kg).

While most people wouldn’t be able to pick Onix up (it may not be dense, but it is big), it wouldn’t be impossible to drag it. Picking up part of it would feel disconcertingly light, like picking up an aluminum ladder or carbon fibre bike, only more so.

This picture is unrealistic. Because of its density, no more than 3% of Onix can be below the water. I don’t own the copyright to this image, but I’m claiming fair use for purpose of criticism. Source.

How did the creators of Pokémon accidentally bestow one of their most famous creations with a hilariously unrealistic density?

I have a pet theory.

I went to school for nanotechnology engineering. One of the most important things we looked into was how equations scaled with size.

Humans are really good at intuiting linear scaling. When something scales linearly, every twofold change in one quantity brings about a twofold change in another. Time and speed scale linearly (albeit inversely). Double your speed and the trip takes half the time. This is so simple that it rarely requires explanation.

Unfortunately for our intuitions, many physical quantities don’t scale linearly. These were the cases that were important for me and my classmates to learn, because until we internalized them, our intuitions were useless on the nanoscale. Many forces, for example, scale such that they become incredibly strong incredibly quickly at small distances. This leads to nanoscale systems exhibiting a stickiness that is hard on our intuitions.

It isn’t just forces that have weird scaling though. Geometry often trips people up too.

In geometry, perimeter is the only quantity I can think of that scales linearly with size. Double the length of the sides of a square and the perimeter doubles. The area, however, does not. Area is quadratically related to side length. Double the side length of a square and you’ll find the area quadruples. Triple it and the area increases nine times. Area varies with the square of the length, a property that isn’t just true of squares. The area of a circle is just as tied to the square of its radius as the area of a square is to the square of its side length.

Volume is even trickier than area. It scales with the third power of the size. Double the size of a cube and its volume increases eight-fold. Triple it, and you’ll see 27 times the volume. Volume increases with the cube of the length (which, again, works for shapes other than cubes).

If you look at the weights of Pokémon, you’ll see that the ones that are the size of humans have fairly realistic weights. Sandslash is the size of a child (it stands 1m/3′ high) and weighs a fairly reasonable 29.5kg.

(This only works for Pokémon really close to human size. I’d hoped that Snorlax would be about as dense as marshmallows so I could do a fun comparison, but it turns out that marshmallows are four times as dense as Snorlax – despite marshmallows only having a density of ~0.5t/m³.)

Beyond these touchstones, you’ll see that the designers of Pokémon increased their weight linearly with size. Onix is a bit more than eight times as long as Sandslash and weighs seven times as much.

Unfortunately for realism, weight is just density times volume, and as I just said, volume increases with the cube of length. Onix shouldn’t weigh seven or even eight times as much as Sandslash. At a minimum, its weight should be eight times eight times eight times Sandslash’s – a full 512 times more.
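Here’s a quick sketch of the two scaling rules, using Sandslash as the baseline. I’ve used the true 8.8× length ratio here rather than rounding down to 8, so the cubic figure comes out even larger than 512×:

```python
# Sandslash as a human-sized baseline (figures from the games).
sandslash_height = 1.0   # m
sandslash_weight = 29.5  # kg

onix_length = 8.8        # m
scale = onix_length / sandslash_height  # 8.8x longer

# How the designers seem to have scaled weight: roughly linearly.
linear_weight = sandslash_weight * scale    # ≈ 260 kg (Onix: 210 kg)

# How weight should scale if shape and density were preserved:
# with the cube of the length.
cubic_weight = sandslash_weight * scale**3  # ≈ 20,000 kg

print(round(linear_weight))  # 260
print(round(cubic_weight))   # 20103 – about 20 tonnes
```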

Scaling properties shape much of how the world is arranged. We see extremely large animals more often in the ocean than on land because the strength of bones scales with the square of size, while weight scales with the cube. Become too big and you can’t walk without breaking your bones. Become small and people animate kids’ movies about how strong you are. All of this stems from scaling.

These equations aren’t just important to physicists. They’re important to any science fiction or fantasy writer who wants to tell a realistic story.

Or, at least, to anyone who doesn’t want their work picked apart by physicists.

Footnotes

[1] Not the professor. His density is 0.985t/m³. ^

Economics, Politics, Quick Fix

Why Linking The Minimum Wage To Inflation Can Backfire

Last week I explained how poor decisions by central bankers (specifically failing to spur inflation) can make recessions much worse and lead to slower wage growth during recovery.

(Briefly: inflation during recessions reduces the real cost of payroll, cutting business expenses and making firing people unnecessary. During a recovery, it makes hiring new workers cheaper and so leads to more being hired. Because central bankers failed to create inflation during and after the great recession, many businesses are scared of raising salaries. They believe (correctly) that this will increase their payroll expenses to the point where they’ll have to lay many people off if another recession strikes. Until memories of the last recession fade or central bankers clean up their act, we shouldn’t expect wages to rise.)

Now I’d like to expand on an offhand comment I made about the minimum wage last week and explore how it can affect recovery, especially if it’s indexed to inflation.

The minimum wage represents a special case when it comes to pay cuts and layoffs in recessions. While it’s always theoretically possible to convince people to take a pay cut rather than a layoff (although in practice it’s mostly impossible), this option isn’t available for people who make the minimum wage. It’s illegal to pay them anything less. If bad times strike and business is imperiled, people making the minimum wage might have to be laid off.

I say “might”, because when central bankers aren’t proving useless, inflation can rescue people making the minimum wage from being let go. Inflation makes the minimum wage relatively less valuable, which reduces the cost of payroll relative to other inputs and helps to save jobs that pay minimum wage. This should sound familiar, because inflation helps people making the minimum wage in the exact same way it helps everyone else.

Because of increasingly expensive housing and persistently slow wage growth, some jurisdictions are experimenting with indexing the minimum wage to inflation. This means that the minimum wage rises at the same rate as the cost of living. Most notably (to me, at least), this group includes my home province of Ontario.

I think decreasing purchasing power is a serious problem (especially because of its complicated intergenerational dynamics), but I think this is one of the worst possible ways to deal with it.

When the minimum wage is tied to inflation, recessions can become especially dangerous and drawn out.

With the minimum wage rising in lockstep with inflation, any attempt to decrease payroll costs in real terms (that is to say: inflation-adjusted terms) is futile to the extent that payroll expenses go to minimum wage workers. Worse, people who were previously making above the minimum wage and might have had their jobs saved by inflation can be swept up by an increasingly high minimum wage.
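A toy example shows the asymmetry (every number here is an illustrative assumption, not data): under a fixed nominal minimum wage, inflation quietly lowers the real cost of keeping someone employed; under an indexed one, it can’t.

```python
# Illustrative assumptions: 3% annual inflation for 3 years, and a
# starting minimum wage of $14/hour.
inflation = 0.03
years = 3
wage = 14.00

price_level = (1 + inflation) ** years  # cumulative inflation

# Real hourly cost of a minimum-wage worker after three years:
fixed_real = wage / price_level                    # nominal wage frozen
indexed_real = (wage * price_level) / price_level  # wage indexed to CPI

print(round(fixed_real, 2))    # 12.81 – real payroll relief of ~8.5%
print(round(indexed_real, 2))  # 14.0 – no relief at all
```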

This puts central bankers in a bind. As soon as the minimum wage is indexed to inflation, inflation is no longer a boon to all workers. Suddenly, many workers can find themselves in a “damned if you do, damned if you don’t” situation. Without inflation, they may be too expensive to keep. With it, they may be saved… until the minimum wage comes for them too. If a recession goes on long enough, only high-income workers would be spared.

In addition, minimum wage (or near-minimum wage) workers who are laid off during a period of higher inflation (and in this scenario, there will be many) will suffer comparatively more, as their savings are exhausted even more quickly.

Navigating these competing needs would be an especially tough challenge for certain central banks like the US Federal Reserve – those banks that have dual mandates to maintain stable prices and full employment. If a significant portion of the US ever indexes its minimum wage to inflation, the Fed will have no good options.

It is perhaps darkly humorous that central banks, which bear an unusually large parcel of the blame for our current slow wage growth, stand to face the greatest challenges from the policies we’re devising to make up for their past shortcomings. Unfortunately, I think a punishment of this sort is rather like cutting off our collective nose to spite our collective face.

There are simple policies we could enact to counter the risks here. Suspending any peg to inflation during years that contain recessions (in Ontario at least, the minimum wage increase due to inflation is calculated annually) would be a promising start. Wage growth after a recession could be ensured with a rebound clause, or better yet, the central bank actually doing its job properly.

I am worried about the political chances (and popularity once enacted) of any such pragmatic policy though. Many people respond to recessions with the belief that the government can make things better by passing the right legislation – forcing the economy back on track by sheer force of ink. This is rarely the case, especially because the legislation that people have historically clamoured for when unemployment is high is the sort that increases wages, not lowers them. This is a disaster when unemployment threatens because of too-high wages. FDR is remembered positively for his policy of increasing wages during the Great Depression, even though this disastrous decision strangled the recovery in its crib. I don’t expect any higher degree of economic literacy from people today.

To put my fears more plainly, I worry that politicians, faced with waning popularity and a nipping recession, would find allowing the minimum wage to be frozen too much of a political risk. I frankly don’t trust most politicians to follow through with a freeze, even if it’s direly needed.

Minimum wages are one example of a tradeoff we make between broad access and minimum standards. Do we try to make sure everyone who wants a job can have one, or do we make sure people who have jobs aren’t paid too little for their labour, even if that hurts the unemployed? As long as there’s scarcity, we’re going to have to struggle with how to ensure that as many people as possible have their material needs met, and that involves tradeoffs like this one.

Minimum wages are just one way we can do this. Wage subsidies or a Universal Basic Income are both being discussed with increasing frequency these days.

But when we’re making these kinds of compassionate decisions, we need to look at the risks of whatever systems we choose. Proponents of indexing the minimum wage to inflation haven’t done a good job of understanding the grave risk it poses to the health of our economy and, perhaps most of all, to the very people they seek to help. In places like Ontario, where the minimum wage is already indexed to inflation, we’re going to pay for their lack of foresight next time an economic disaster strikes.