Ethics, Philosophy, Quick Fix

Second Order Effects of Unjust Policies

In some parts of the Brazilian Amazon, indigenous groups still practice infanticide. Children are killed for being disabled, for being twins, or for being born to single mothers. This is undoubtedly a piece of cultural technology that existed to optimize resource distribution under harsh conditions.

Infanticide can be legally practiced because these tribes aren't fully bound by Brazilian law. Under Brazilian legislation, indigenous tribes are bound by the country's laws in proportion to how much they interact with the state. Remote Amazonian groups are effectively exempt from all Brazilian laws.

Reformers – led mostly by evangelicals and by disabled indigenous people who escaped infanticide – are trying to change this. They are pushing for a law that will outlaw infanticide, register pregnancies and birth outcomes, and punish people who don't report infanticide.

Now I know that I have in the past written about using the outside view in cases like these. Historically, outsiders deciding they know what is best for indigenous people has not ended particularly well. In general, this argues for avoiding meddling in cases like this. Despite that, if I lived in Brazil, I would support this law.

When thinking about public policies, it’s important to think about the precedents they set. Opposing a policy like this, even when you have very good reasons, sends a message to the vast majority of the population, a population that views infanticide as wrong (and not just wrong, but a special evil). It says: “we don’t care about what is right or wrong, we’re moral relativists who think anything goes if it’s someone’s culture.”

There are several things to unpack here. First, there are the direct effects on the credibility of the people defending infanticide. When you're advocating for something that most people view as clearly wrong, something so beyond the pale that you have no realistic chance of ever convincing anyone, you're going to see some resistance to the next issue you take up, even if it isn't beyond the pale. If the same academics defending infanticide turn around and try to convince people to accept human rights for trans people, they'll find themselves with limited credibility.

Critically, this doesn’t happen with a cause where it’s actually possible to convince people that you are standing up for what is right. Gay rights campaigners haven’t been cut out of the general cultural conversation. On the contrary, they’ve been able to parlay some of their success and credibility from being ahead of the curve to help in related issues, like trans rights.

There’s no (non-apocalyptic) future where the people of Brazil eventually wake up okay with infanticide and laud the campaigners who stood up for it. But the people of Brazil are likely to wake up in the near future and decide they can’t ever trust the morals of academics who advocated for infanticide.

Second, it’s worth thinking about how people’s experience of justice colours their view of the government. When the government permits what is (to many) a great evil, people lose faith in the government’s ability to be just. This inhibits the government’s traditional role as solver of collective action problems.

We can actually see this manifest several ways in current North American politics, on both the right and the left.

On the left, there are many people who are justifiably mistrustful of the government, because of its historical or ongoing discrimination against them or people who look like them. This is why the government can credibly lock up white granola-crowd parents for failing to treat their children with approved medicines, but can't when the parents are indigenous. It's also why many people of colour don't feel comfortable going to the police when they see or experience violence.

In both cases, historical injustices hamstring the government’s ability to achieve outcomes that it might otherwise be able to achieve if it had more credibly delivered justice in the past.

On the right, I suspect that some amount of skepticism of government comes from legalized abortion. The right is notoriously mistrustful of the government and I wonder if this is because it cannot believe that a government that permits abortion can do anything good. Here this hurts the government’s ability to pursue the sort of redistributive policies that would help the worst off.

In the case of abortion, the very real and pressing need for some women to access it is enough for me to view it as net positive, despite its negative effect on some people’s ability to trust the government to solve coordination problems.

Discrimination causes harm on its own and isn't even justified on its own "merits". Its effect on people's perceptions of justice is just another reason it should be fought against.

In the case of Brazil, we’re faced with an act that is negative (infanticide) with several plausible alternatives (e.g. adoption) that allow the cultural purpose to be served without undermining justice. While the historical record of these types of interventions in indigenous cultures should give us pause, this is counterbalanced by the real harms justice faces as long as infanticide is allowed to continue. Given this, I think the correct and utilitarian thing to do is to support the reformers’ effort to outlaw infanticide.

Quick Fix

May The Fourth Be With You

(The following is the text of the prepared puns I delivered at the 30th Bay Area pun off. If you’re ever in the Bay for one, I really recommend it. They have the nicest crowd in the world.)

First: May the Fourth be with you ("and also with you" is how you respond if, like me, you grew up Catholic). As you might be able to tell from this shirt, I am religiously devoted to Star Wars. I know a lot about Star Wars, but I'm more of an orthodox fan – I was all about the Expanded Universe, not this reverend-ing stream of Disney sequels.

Pictured: the outfit I wore

They might be popepular, but it seems like all Disney wants is to turn a prophet – just get big fatwas of cash. They don’t care about Allah the history that happened in the books. Just mo-hamme(r)ed out scripts with flashy set piece battles full of Mecca and characters we med-in-a earlier film.

The EU was mostly books and I loved them despite their ridiculousness. Like, in terms of plots, it’s not clear the writers always card’in-all the books; they often passover normal options and have someone kidnap Han and Leia’s kids.

There were so many convert-sations between the two of them, like “do you noahf ark ‘ids are fine” immediately interrupted by formulaic videos from the kids: “Don’t worry about mi-mam it’s alright, this dude who kidnapped us is a total Luther who just wants to Hindu-s you to vote another way in the Senate”. Eventually they figured out a wafer Leia to communion-cate that the kids needed a bodygod. This led them to Sikh out Winter, who came with the recommendation: “no kidnapper will ever get pastor“.

What else? Luke trains under a clone of Emperor Pulpit-een. Leia is like, “bish, open your eyes, dude’s dark” but Luke justifies it with “well, there’s some things vatican teach me”.
Eventually after Leia asks “how could you Judas to us”, Luke snaps out of it and decides he’s having nun of Palpatine’s evil deeds. He con-vent his anger somewhere else. He comes back to the light side and everyone’s pretty willing to ex-schism for everything he did.
Anyway, I’m really sad that the books aren’t canon anymore. I know there are a lot of ram, a danting number, but I hope I have Eided you in appreciating them.

Data Science, Economics, Falsifiable

The Scale of Inequality

When dealing with questions of inequality, I often get boggled by the sheer size of the numbers. People aren't very good at intuitively parsing the difference between a million and a billion. Our brains round both to "very large". I'm actually in a position where I get reminded of this fairly often, as the difference can become stark when programming. Running a program on a million points of data takes scant seconds. Running the same set of operations on a billion data points can take more than an hour. A million seconds is eleven and a half days. A billion seconds is 31 years.
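A quick Python sanity check makes the gap concrete:

```python
# Converting a million and a billion seconds into human-sized units.
for label, seconds in [("million", 1e6), ("billion", 1e9)]:
    days = seconds / 86_400           # seconds per day
    years = days / 365.25
    print(f"a {label} seconds = {days:,.1f} days = {years:.1f} years")

# a million seconds = 11.6 days = 0.0 years
# a billion seconds = 11,574.1 days = 31.7 years
```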

Here I would like to try to give a sense of the relative scale of various concepts in inequality. Just how much wealth do the wealthiest people in the world possess compared to the rest? How much of the world’s middle class is concentrated in just a few wealthy nations? How long might it take developing nations to catch up with developed nations? How long before there exists enough wealth in the world that everyone could be rich if we just distributed it more fairly?

According to the Forbes billionaire list, there are (as of the time of writing) 2,208 billionaires in the world, who collectively control $9.1 trillion in wealth (1 trillion seconds ago was the year 29691 BCE, contemporaneous with the oldest cave paintings in Europe). This is 3.25% of the total global wealth of $280 trillion.

The US Federal Budget for 2019 is $4.4 trillion. State governments and local governments each spend another $1.9 trillion. Some $700 billion is given to those governments by the Federal government. With that subtracted, total US government spending is projected to be $7.5 trillion next year.

Therefore, the whole world population of billionaires holds assets equivalent to 1.2 years of US government outlays. Note that US government outlays aren’t equivalent to that money being destroyed. It goes to pay salaries or buy equipment. The comparison here is simply to illustrate how private wealth stacks up against the budgets that governments control.

If we go down by a factor of 1000, there are about 15 million millionaires in the world (according to Wikipedia). Millionaires collectively hold $37.1 trillion (13.25% of all global wealth). All of the wealth that millionaires hold would be enough to fund US government spending for five years.
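For anyone who wants to check my arithmetic, here's the whole chain of comparisons in a few lines of Python, using the figures cited above:

```python
# All figures in trillions of USD, from the sources above.
GLOBAL_WEALTH = 280
BILLIONAIRE_WEALTH = 9.1    # Forbes: 2,208 billionaires
MILLIONAIRE_WEALTH = 37.1   # Wikipedia: ~15 million millionaires
US_GOVT_OUTLAYS = 7.5       # federal + state + local, transfers netted out

print(f"billionaire share of global wealth: {BILLIONAIRE_WEALTH / GLOBAL_WEALTH:.2%}")  # 3.25%
print(f"millionaire share of global wealth: {MILLIONAIRE_WEALTH / GLOBAL_WEALTH:.2%}")  # 13.25%
print(f"billionaire wealth in years of US government outlays: "
      f"{BILLIONAIRE_WEALTH / US_GOVT_OUTLAYS:.1f}")   # 1.2 years
print(f"millionaire wealth in years of US government outlays: "
      f"{MILLIONAIRE_WEALTH / US_GOVT_OUTLAYS:.1f}")   # 4.9 years
```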

When we see sensational headlines, like "Richest 1% now owns half the world's wealth", we tend to think that we're talking about millionaires and billionaires. In fact, millionaires and billionaires only own about 16.5% of the world's wealth (which is still a lot for 0.2% of the world's population to hold). The rest is owned by less wealthy individuals. The global 1% makes $32,400 a year or more. This is virtually identical to the median American yearly salary. This means that almost half of all Americans are in the global 1%. Canadians now have a similar median wage, which means a similar proportion of them are in the global 1%.

To give a sense of how this distorts the global middle class, I used Povcal.net, the World Bank's online tool for poverty measurement. I looked for the percentage of each country's population making between 75% and 125% of the median US income at purchasing power parity (which takes into account cheaper goods and services in developing countries). That range works out to $64–$107US per day – what you get when you divide 75% and 125% of the median US wage by 365 – and, as far as I can tell, this is the same procedure that gives us numbers like $1.25 per day as the threshold for absolute poverty.

I grabbed what I thought would be an interesting set of countries: The G8, BRICS, The Next 11, Australia, Botswana, Chile, Spain, and Ukraine. These 28 countries had – in the years surveyed – a combined population of 5.3 billion people and had among them the 17 largest economies in the world (in nominal terms). You can see my spreadsheet collecting this data here.

The United States had by far the largest estimated middle class (73 million people), followed by Germany (17 million), Japan (12 million), France (12 million), and the United Kingdom (10 million). Canada came next with 8 million, beating most larger countries, including Brazil, Italy, Korea, Spain, Russia, China, and India. Iran and Mexico have largely similar middle-class sizes, despite Mexico being substantially larger. Botswana ended up having a larger middle class than Ukraine.

This speaks to a couple of problems when looking at inequality. First, living standards (and therefore class distinctions) are incredibly variable from country to country. A standard of living that is considered middle class in North America might not be the same in Europe or Japan. In fact, I’ve frequently heard it said that the North American middle class (particularly Americans and Canadians) consume more than their equivalents in Europe. Therefore, this should be looked at as a comparison of North American equivalent middle class – who, as I’ve already said, are about 50% encompassed in the global 1%.

Second, we tend to think of countries in Europe as generally wealthier than countries in Africa. This isn’t necessarily true. Botswana’s GDP per capita is actually three times larger than Ukraine’s when unadjusted and more than twice as large at purchasing power parity (which takes into account price differences between countries). It also has a higher GDP per capita than Serbia, Albania, and Moldova (even at purchasing power parity). Botswana, Seychelles, and Gabon have per capita GDPs at purchasing power parity that aren’t dissimilar from those possessed by some less developed European countries.

Botswana, Gabon, and Seychelles have all been distinguished by relatively high rates of growth since decolonization, which has by now made them “middle income” countries. Botswana’s growth has been so powerful and sustained that in my spreadsheet, it has a marginally larger North American equivalent middle class than Nigeria, a country approximately 80 times larger than it.

Of all the listed countries, Canada had the largest middle class as a percentage of its population. This no doubt comes partially from using North American middle-class standards (and perhaps also from the omission of the small, homogenous Nordic countries). It is also notable that Canada has the highest median income of any major country (although this might be tied with the United States) and the highest 40th percentile income. America dominates income for people in the 60th percentile and above, while Norway comes out ahead for people in the 30th percentile or below.

The total population of the (North American equivalent) middle class in these 28 countries was 170 million, which represents about 3% of their combined population.

There is a staggering difference in consumption between wealthy countries and poor countries, in part driven by the staggering difference in the size of their middle (and higher) classes – people with income to spend on things beyond immediate survival. According to Trading Economics, the total disposable income of China is $7.84 trillion (all dollars here are US). India has $2.53 trillion. Canada, with a population almost 40 times smaller than either, has a total disposable income of $0.96 trillion, while America, with a population about four times smaller than either China or India, has a disposable income of $14.79 trillion, larger than China and India put together. And if China's citizens held as much wealth per capita as Canadians do, China's total wealth would be almost $300 trillion, approximately equivalent to the total amount of wealth currently in the world.

According to Wikipedia, The Central African Republic has the world’s lowest GDP per capita at purchasing power parity, making it a good candidate for the title of “world’s poorest country”. Using Povcal, I was able to estimate the median wage at $1.33 per day (or $485 US per year). If the Central African Republic grew at the same rate as Botswana did post-independence (approximately 8% year on year) starting in 2008 (the last year for which I had data) and these gains were seen in the median wage, it would take until 2139 for it to attain the same median wage as the US currently enjoys. This of course ignores development aid, which could speed up the process.

All of the wealth currently in the world is equivalent to $36,000 per person (although this is misleading, because much of the world’s wealth is illiquid – it’s in houses and factories and cars). All of the wealth currently on the TSX is equivalent to about $60,000 per Canadian. All of the wealth currently on the NYSE is equivalent to about $65,000 per American. In just corporate shares alone, Canada and the US are almost twice as wealthy as the global average. This doesn’t even get into the cars, houses, and other resources that people own in those countries.

If total global wealth were to grow at the same rate as the market, we might expect to have approximately $1,000,000 per person (not inflation adjusted) sometime between 2066 and 2072, depending on population growth. If we factor in inflation and want there to be approximately $1,000,000 per person in present dollars, it will instead take until sometime between 2102 and 2111.
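Here's a minimal sketch of the compounding calculation behind those dates. The inputs are my guesses at reasonable values (a ~7% nominal market return, ~2% inflation, 0–1% population growth, starting from $36,000 per person in 2018) rather than the exact figures used above, so it lands in the same ballpark rather than reproducing those years precisely:

```python
import math

START, TARGET = 36_000, 1_000_000   # wealth per person: today vs. the goal

def years_to_target(growth, pop_growth=0.0, inflation=0.0):
    # Per-capita (and optionally inflation-adjusted) growth rate,
    # then solve START * rate**n = TARGET for n.
    rate = (1 + growth) / ((1 + pop_growth) * (1 + inflation))
    return math.log(TARGET / START) / math.log(rate)

for pop in (0.0, 0.01):
    nominal = 2018 + years_to_target(0.07, pop)
    real = 2018 + years_to_target(0.07, pop, inflation=0.02)
    print(f"population growth {pop:.0%}: nominal ~{nominal:.0f}, "
          f"inflation-adjusted ~{real:.0f}")
```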

This assumes too much, of course. But it gives you a sense of how much we have right now and how long it will take to have – as some people incorrectly believe we already do – enough that everyone could (in a fair world) have so much they might never need to work.

This is not of course, to say, that things are fair today. It remains true that the median Canadian or American makes more money every year than 99% of the world, and that the wealth possessed by those median Canadians or Americans and those above them is equivalent to that held by the bottom 50% of the world. Many of us, very many of those reading this perhaps, are the 1%.

That’s the reality of inequality.

Data Science, Economics, Falsifiable

Is Google Putting Money In Your Pocket?

The Cambridge Analytica scandal has put tech companies front and centre. If the thinkpieces along the lines of “are the big tech companies good or bad for society” were coming out any faster, I might have to doubt even Google’s ability to make sense of them all.

This isn’t another one of those thinkpieces. Instead it’s an attempt at an analysis. I want to understand in monetary terms how much one tech company – Google – puts into or takes out of everyone’s pockets. This analysis is going to act as a template for some of the more detailed analyses of inequality I’d like to do later, so if you have a comment about methodology, I’m eager to hear it.

Here are the basics: Google is a large technology company that primarily makes money off of ad revenues. Since Google is a publicly traded company, statistics are easy to come by. In 2016, Google brought in $89.5 billion in revenue and about 89% of that was from advertising. Advertising is further broken down between advertising on Google sites (e.g. Google Search, Gmail, YouTube, Google Maps, etc.), which accounts for 80% of advertising revenue, and advertising on partner sites, which covers the remainder. The remaining 11% of revenue comes from a variety of smaller lines of business – corporate licenses for its GSuite office software, the Google Play Store, the Google Cloud Computing Platform, and several smaller projects.
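To keep those proportions straight, here's the breakdown as plain arithmetic:

```python
# Decomposing Google's 2016 revenue from the percentages above (billions USD).
revenue = 89.5
advertising = revenue * 0.89                 # ~$79.7B from advertising
google_sites = advertising * 0.80            # ~$63.7B from Google's own sites
partner_sites = advertising - google_sites   # ~$15.9B from partner sites
other = revenue - advertising                # ~$9.8B: GSuite, Play Store, Cloud, etc.

print(f"ads: {advertising:.1f}, own sites: {google_sites:.1f}, "
      f"partners: {partner_sites:.1f}, other: {other:.1f}")
```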

There are two ways that we can track how Google’s existence helps or hurts you financially. First, there’s the value of the software it provides. Google’s search has become so important to our daily life that we don’t even notice it anymore – it’s like breathing. Then there’s YouTube, which has more high-quality content than anyone could watch in a lifetime. There’s Google Docs, which are almost a full (free!) replacement for Microsoft Office. There’s Gmail, which is how basically everyone I know does their email. And there’s Android, currently the only viable alternative to iOS. If you had to pay for all of this stuff, how much would you be out?

Second, we can look at how its advertising arm has changed the prices of everything we buy. If Google’s advertising system has driven an increase in spending on advertising (perhaps by starting an arms race in advertising, or by arming marketing managers with graphs, charts and metrics that they can use to trigger increased spending), then we’re all ultimately paying for Google’s software with higher prices elsewhere (we could also be paying with worse products at the same prices, as advertising takes budget that would otherwise be used on quality). On the other hand, if more targeted advertising has led to less advertising overall, then everything will be slightly less expensive (or higher quality) than the counterfactual world in which more was spent on advertising.

Once we add this all up, we’ll have some sort of answer. We’ll know if Google has made us better off, made us poorer, or if it’s been neutral. This doesn’t speak to any social benefits that Google may provide (if they exist – and one should hope they do exist if Google isn’t helping us out financially).

To estimate the value of the software Google provides, we should compare it to the most popular paid alternatives – and look into the existence of any other good free alternatives. Because Search has no real paid equivalent, we can't put a dollar figure on it; given how clearly valuable it is, let's agree to break any tie in favour of Google helping us.

On the other hand, Google Docs is very easy to compare with other consumer alternatives. Microsoft Office Home Edition costs $109 yearly. WordPerfect (not that anyone uses it anymore) is $259.99 (all prices should be assumed to be in Canadian dollars unless otherwise noted).

Free alternatives exist in the form of OpenOffice and LibreOffice, but both tend to suffer from bugs. Last time I tried to make a presentation in OpenOffice I found it crashed approximately once per slide. I had a similar experience with LibreOffice. I once installed it for a friend who was looking to save money and promptly found myself fixing problems with it whenever I visited his house.

My crude estimate is that I'd expect to spend four hours troubleshooting either free alternative per year. Weighing this time at Ontario's minimum wage of $14/hour, and accepting that the only office suite anyone under 70 ever actually buys is Microsoft's offering, we see that Google saves you $109 per year compared to Microsoft and $56 per year (four hours at $14) compared to the free alternatives.

With respect to email, there are numerous free alternatives to Gmail (like Microsoft’s Hotmail). In addition, many internet service providers bundle free email addresses in with their service. Taking all this into account, Gmail probably doesn’t provide much in the way of direct monetary value to consumers, compared to its competitors.

Google Maps is in a similar position. There are several alternatives that are also free, like Apple Maps, Waze (also owned by Google), Bing Maps, and even the Open Street Map project. Even if you believe that Google Maps provides more value than these alternatives, it’s hard to quantify it. What’s clear is that Google Maps isn’t so far ahead of the pack that there’s no point to using anything else. The prevalence of Google Maps might even be because of user laziness (or anticompetitive behaviour by Google). I’m not confident it’s better than everything else, because I’ve rarely used anything else.

Android is the last Google project worth analyzing and it’s an interesting one. On one hand, it looks like Apple phones tend to cost more than comparable Android phones. On the other hand, Apple is a luxury brand and it’s hard to tell how much of the added price you pay for an iPhone is attributable to that, to differing software, or to differing hardware. Comparing a few recent phones, there’s something like a $50-$200 gap between flagship Android phones and iPhones of the same generation. I’m going to assign a plausible sounding $20 cost saved per phone from using Android, then multiply this by the US Android market share (53%), to get $11 for the average consumer. The error bars are obviously rather large on this calculation.

(There may also be second order effects from increased competition here; the presence of Android could force Apple to develop more features or lower its prices slightly. This is very hard to calculate, so I’m not going to try to.)

When we add this up, we see that Google Docs saves anyone who does word processing $56–$109 per year, while Android saves the average phone buyer about $11 with every phone upgrade (so roughly every two years). This means the average person probably sees some slight yearly financial benefit from Google, although I'm not sure the median person does. The median person and the average person do both get some benefit from Google Search, so there's something in the plus column here, even if it's hard to quantify.

Now, on to advertising.

I’ve managed to find an assortment of sources that give a view of total advertising spending in the United States over time, as well as changes in the GDP and inflation. I’ve compiled it all in a spreadsheet with the sources listed at the bottom. Don’t just take my word for it – you can see the data yourself. Overlapping this, I’ve found data for Google’s revenue during its meteoric rise – from $19 million in 2001 to $110 billion in 2017.

Google ad revenue represented 0.03% of US advertising spending in 2002. By 2012, a mere 10 years later, it was equivalent to 14.7% of the total. Over that same time, overall advertising spending increased from $237 billion in 2002 to $297 billion in 2012 (2012 is the last year I have data for total advertising spending). Note however that this isn't a true comparison, because some Google revenue comes from outside of America. I wasn't able to find revenue broken down in greater depth than this, so I'm using these numbers in an illustrative manner, not an exact one.

So, does this mean that Google’s growth drove a growth in advertising spending? Probably not. As the economy is normally growing and changing, the absolute amount of advertising spending is less important than advertising spending compared to the rest of the economy. Here we actually see the opposite of what a naïve reading of the numbers would suggest. Advertising spending grew more slowly than economic growth from 2002 to 2012. In 2002, it was 2.3% of the US economy. By 2012, it was 1.9%.

This also isn't evidence that Google (and other targeted advertising platforms) have decreased spending on advertising. Historically, advertising has represented between 1.2% of US GDP (in 1944, with the Second World War dominating the economy) and 3.0% (in 1922, during the "roaring 20s"). Since 1972, the total has been more stable, varying between 1.7% and 2.5%. A Student's t-test confirms (p-values around 0.35 for 1919–2002 vs. 2003–2012 and 1972–2002 vs. 2003–2012) that there's no significant difference between post-Google levels of spending and historical levels.
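For anyone who wants to replicate this, the test itself is a few lines of Python. The series below are placeholders – the real yearly figures (US ad spending as a share of GDP) are in the spreadsheet linked above:

```python
from scipy import stats

# Placeholder values only; substitute the full yearly series of US ad
# spending as a share of GDP from the linked spreadsheet.
spending_1972_2002 = [0.022, 0.023, 0.021]
spending_2003_2012 = [0.020, 0.019, 0.021]

t_stat, p_value = stats.ttest_ind(spending_1972_2002, spending_2003_2012)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
# With the real series, p comes out around 0.35: no significant difference
# between pre- and post-Google advertising levels.
```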

Even if this was lower than historical bounds, it wouldn't necessarily prove that Google (and its ilk) are causing reduced ad spending. It could be that trends would have driven advertising spending even lower, absent Google's rise. All we can say for sure is that Google hasn't caused an ahistorically large change in advertising rates. In fact, the only things that stand out in the advertising trends are the peak in the early 1920s that has never been recaptured and the uniquely low dip in the 1940s that was obviously caused by World War II. For all that people talk about tech disrupting advertising and ad-supported businesses, the current changes are still less drastic than changes we've seen in the past.

The change in advertising spending during the years Google was growing could be driven by Google and similar advertising services. But it could also be normal year-to-year variation, driven by trends similar to those that have driven it in the past. If I had a PhD in advertising history, I might be able to tell you what those trends are, but from my present position, all I can say is that the current movement doesn't seem that weird from a historical perspective.

In summary, it looks like the expected value for the average person from Google products is close to $0, but leaning towards positive. It's likely to be positive for you personally if you need a word processor or use Android phones, but the error bounds on advertising mean that it's hard to tell. What we can say is that the current disruption in the advertising space is probably less severe than the historical disruption to the field during World War II. There's also a chance that more targeted advertising has led to less advertising spending (and this does feel more likely than it leading to more spending), but the historical variations in the data are large enough that we can't say for sure.

Literature, Model

Does Amateurish Writing Exist?

[Warning: Spoilers for Too Like the Lightning]

What marks writing as amateurish (and whether “amateurish” or “low-brow” works are worthy of awards) has been a topic of contention in the science fiction and fantasy community for the past few years, with the rise of Hugo slates and the various forms of “puppies“.

I’m not talking about the learning works of genuine amateurs. These aren’t stories that use big words for the sake of sounding smart (and at the cost of slowing down the stories), or over the top fanfiction-esque rip-offs of more established works (well, at least not since the Wheel of Time nomination in 2014). I’m talking about that subtler thing, the feeling that bubbles up from the deepest recesses of your brain and says “this story wasn’t written as well as it could be”.

I’ve been thinking about this a lot recently because about ¾ of the way through Too Like The Lightning by Ada Palmer, I started to feel myself put off [1]. And the only explanation I had for this was the word “amateurish” – which popped into my head devoid of any reason. This post is an attempt to unpack what that means (for me) and how I think it has influenced some of the genuine disagreements around rewarding authors in science fiction and fantasy [2]. Your tastes might be calibrated differently and if you disagree with my analysis, I’d like to hear about it.

Now, there are times when you know something is amateurish and that's okay. No one should be surprised that John Ringo's Paladin of Shadows series – books that he explicitly wrote for himself – is parsed by most people as pretty amateurish. When pieces aren't written explicitly for the author only, I expect some consideration of the audience. Ideally the writer should be having fun too, but if they're writing for publication, they have to be writing to an audience. This doesn't mean that they must write exactly what people tell them they want. People can be a terrible judge of what they want!

This also doesn’t necessarily imply pandering. People like to be challenged. If you look at the most popular books of the last decade on Goodreads, few of them could be described as pandering. I’m familiar with two of the top three books there and both of them kill off a fan favourite character. People understand that life involves struggle. Lois McMaster Bujold – who has won more Hugo awards for best novel than any living author – once said she generated plots by considering “what’s the worst possible thing I can do to these people?” The results of this method speak for themselves.

Meditating on my reaction to books like Paladin of Shadows in light of my experiences with Too Like The Lightning is what led me to believe that the more technically proficient “amateurish” books are those that lose sight of what the audience will enjoy and follow just what the author enjoys. This may involve a character that the author heavily identifies with – the Marty Stu or Mary Sue phenomena – who is lovingly described overcoming obstacles and generally being “awesome” but doesn’t “earn” any of this. It may also involve gratuitous sex, violence, engineering details, gun details, political monologuing (I’m looking at you, Atlas Shrugged), or tangents about constitutional history (this is how most of the fiction I write manages to become unreadable).

I realized this when I was reading Too Like the Lightning. I loved the world building and I found the characters interesting. But (spoilers!) when it turned out that all of the politicians were literally in bed with each other or when the murders the protagonist carried out were described in grisly, unrepentant detail, I found myself liking the book a lot less. This is – I think – what spurred the label amateurish in my head.

I think this is because (in my estimation), there aren’t a lot of people who actually want to read about brutal torture-execution or literally incestuous politics. It’s not (I think) that I’m prudish. It seemed like some of the scenes were written to be deliberately off-putting. And I understand that this might be part of the theme of the work and I understand that these scenes were probably necessary for the author’s creative vision. But they didn’t work for me and they seemed like a thing that wouldn’t work for a lot of people that I know. They were discordant and jarring. They weren’t pulled off as well as they would have had to be to keep me engaged as a reader.

I wonder if a similar process is what caused the changes that the Sad Puppies are now lamenting at the Hugo Awards. To many readers, the sexualized violence or sexual violence that can find its way into science fiction and fantasy books (I’d like to again mention Paladin of Shadows) is incredibly off-putting. I find it incredibly off-putting. Books that incorporate a lot of this feel like they’re ignoring the chunk of audience that is me and my friends and it’s hard while reading them for me not to feel that the writers are fairly amateurish. I normally prefer works that meditate on the causes and uses of violence when they incorporate it – I’d put N.K. Jemisin’s truly excellent Broken Earth series in this category – and it seems like readers who think this way are starting to dominate the Hugos.

For the people who previously had their choices picked year after year, this (as well as all the thinkpieces explaining why their favourite books are garbage) feels like an attack. Add to this the fact that some of the books that started winning had a more literary bent and you have some fans of the genre believing that the Hugos are going to amateurs who are just cruising to victory by alluding to famous literary works. These readers look suspiciously on crowds who tell them they’re terrible if they don’t like books that are less focused on the action and excitement they normally read for. I can see why that’s a hard sell, even though I’ve thoroughly enjoyed the last few Hugo winners [3].

There’s obviously an inferential gap here, if everyone can feel angry about the crappy writing everyone else likes. For my part, I’ll probably be using “amateurish” only to describe books that are technically deficient. For books that are genuinely well written but seem to focus more on what the author wants than (on what I think) their likely audience wants, well, I won’t have a snappy term, I’ll just have to explain it like that.

Footnotes

[1] A disclaimer: the work of a critic is always easier than that of a creator. I’m going to be criticizing writing that’s better than my own here, which is always a risk. Think of me not as someone criticizing from on high, but frantically taking notes right before a test I hope to barely pass. ^

[2] I want to separate the Sad Puppies, who I view as people sad that action-packed books were being passed over in favour of more literary ones, from the Rabid Puppies, who just wanted to burn everything to the ground. I'm not going to make any excuses for the Rabid Puppies. ^

[3] As much as I can find some science fiction and fantasy too full of violence for my tastes, I’ve also had little to complain about in the past, because my favourite author, Lois McMaster Bujold, has been reliably winning Hugo awards since before I was born. I’m not sure why there was never a backlash around her books. Perhaps it’s because they’re still reliably space opera, so class distinctions around how “literary” a work is don’t come up when Bujold wins. ^

Falsifiable, Physics, Politics

The (Nuclear) International Monitoring System

Under the Partial Test Ban Treaty (PTBT), all nuclear tests except for those underground are banned. Under the Non-Proliferation Treaty (NPT), only the permanent members of the UN Security Council are legally allowed to possess nuclear weapons. Given the public outcry over fallout that led to the PTBT and the worries over widespread nuclear proliferation that led to the NPT, it’s clear that we require something beyond pinky promises to verify that countries are meeting the terms of these treaties.

But how do we do so? How can you tell when a country tests an atomic bomb? How can you tell who did it? And how can one differentiate a bomb on the surface from a bomb in the atmosphere from a bomb in space from a bomb underwater from a bomb underground?

I’m going to focus on two efforts to monitor nuclear weapons: the national security apparatus of the United States and the Comprehensive Test Ban Treaty Organization (CTBTO) Preparatory Commission’s International Monitoring System (IMS). Monitoring falls into five categories: Atmospheric Radionuclide Monitoring, Seismic Monitoring, Space-based Monitoring, Hydroacoustic Monitoring, and Infrasound Monitoring.

Atmospheric Radionuclide Monitoring

Nuclear explosions generate radionuclides by dispersing unreacted fuel, as direct products of fission, or through interactions between neutrons and particles in the air or ground. These radionuclides are widely dispersed by any surface testing, while only a few fission products (mainly various radionuclides of the noble gas xenon) can escape from properly conducted underground tests.

For the purposes of minimizing fallout, underground tests are obviously preferred. But because they emit only small amounts of a few xenon radionuclides, they are much harder for radionuclide monitoring to detect.

Detecting physical particles is relatively easy. There are 80 IMS stations scattered around the world. Each is equipped with an air intake and a filter. Every day, the filter is changed and then prepared for analysis. Analysis involves waiting a day (for irrelevant radionuclides to decay), then reading decay events from the filter for a further day. This gives scientists an idea of what radioactive elements are present.

Any deviations from the baseline at a certain station can be indicative of a nuclear weapon test, a nuclear accident, or changing wind patterns bringing known radionuclides (e.g. from a commercial reactor) to a station where they normally aren’t present. Wind analysis and cross validation with other methods are used to corroborate any suspicious events.

Half of the IMS stations are set up to do the more difficult xenon monitoring. Here air is pumped through a material with a reasonably high affinity for xenon. Apparently activated charcoal will work, but more sophisticated alternatives are being developed. The material is then induced to release the xenon (with activated charcoal, this is accomplished via heating). This process is repeated several times, with the output of each step pumped to a fresh piece of activated charcoal. Multiple cycles ensure that only relatively pure xenon gets through to analysis.

Once xenon is collected, isotope analysis must be done to determine which (if any) radionuclides of xenon are present. This is accomplished either by comparing the beta decay of the captured xenon with its gamma decay, or looking directly at gamma decay with very precise gamma ray measuring devices. Each isotope of xenon has a unique half-life (which affects the frequency with which it emits beta- and gamma-rays) and a unique method of decay (which determines if the decay products are primarily alpha-, beta-, or gamma-rays). Comparing the observed decay events to these "fingerprints" allows for the relative abundance of xenon nuclides to be estimated.
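As a toy illustration of the fingerprinting idea (ignoring the beta–gamma coincidence measurement real stations use), you can unmix a decay curve into contributions from the known xenon isotopes with non-negative least squares. The half-lives below are real; the observation is synthetic:

```python
import numpy as np
from scipy.optimize import nnls

# Known half-lives (days) of the CTBT-relevant xenon radionuclides.
half_lives = {"Xe-135": 0.38, "Xe-133m": 2.19, "Xe-133": 5.25, "Xe-131m": 11.84}
lams = {name: np.log(2) / hl for name, hl in half_lives.items()}

t = np.linspace(0, 10, 50)                        # measurement times (days)
# Basis matrix: each column is one isotope's normalized decay curve.
A = np.column_stack([np.exp(-lam * t) for lam in lams.values()])

# Synthetic observation: mostly Xe-133, a little Xe-135 and Xe-131m, plus noise.
true_mix = np.array([0.3, 0.0, 1.0, 0.1])
observed = A @ true_mix + np.random.default_rng(0).normal(0, 0.01, t.size)

mix, _ = nnls(A, observed)                        # non-negative least squares fit
for name, est in zip(lams, mix):
    print(f"{name}: {est:.2f}")
```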

There are some background xenon radionuclides from nuclear reactors and even more from medical isotope production (where we create unstable nuclides in nuclear reactors for use in medical procedures). Looking at global background data you can see the medical isotope production in Ontario, Europe, Argentina, Australia and South Africa. I wonder if this background effect makes world powers cautious about new medical isotope production facilities in countries that are at risk of pursuing nuclear weapons. Could Iran’s planned medical isotope complex have been used to mask nuclear tests?

Not content merely to host several monitoring stations and be party to the data of the whole global network of IMS stations, the United States also has the WC-135 “Constant Phoenix” plane, a Boeing C-135 equipped with mobile versions of particulate and xenon detectors. The two WC-135s can be scrambled anywhere a nuclear explosion is suspected to look for evidence. A WC-135 gave us the first confirmation that the blast from the 2006 North Korean nuclear test was indeed nuclear, several days before the IMS station in Yellowknife, Canada confirmed a spike in radioactive xenon and wind modelling pinpointed the probable location as inside North Korea.

Seismic Monitoring

Given that fewer monitoring stations are equipped with xenon radionuclide detectors and that the background “noise” from isotope production can make radioactive xenon from nuclear tests hard to positively identify, it might seem like nuclear tests are easy to hide underground.

That isn’t the case.

A global network of seismometers ensures that any underground nuclear explosion is promptly detected. These are the same seismometers that organizations like the USGS (United States Geological Survey) use to detect and pinpoint earthquakes. In fact, the USGS provides some of the 120 auxiliary stations that the CTBTO can call on to supplement its fifty seismic monitoring stations.

Seismometers are always on, looking for seismic disturbances. Substantial underground nuclear tests produce shockwaves that are well within the detection limit of modern seismometers. The sub-kiloton North Korean nuclear test in 2006 appears to have been registered as equivalent to a magnitude 4.1 earthquake. A quick survey of ongoing earthquakes should probably show you dozens that have been detected that are less powerful than even that small North Korean test.

This probably leads you to the same question I found myself asking, namely: “if earthquakes are so common and these detectors are so sensitive, how can they ever tell nuclear detonations from earthquakes?”

It turns out that underground nuclear explosions might rattle seismometers like earthquakes do, but they do so with characteristics very different from most earthquakes.

First, the waveform is different. Imagine you're holding a slinky and a friend is holding the other end. There are two main ways you can create waves. The first is by shaking it from side to side or up and down. Either way, there's a perspective from which these waves will look like the letter "s".

The second type of wave can be made by moving your arm forward and backwards, like you’re throwing and catching a ball. These waves will cause moving regions where the slinky is bunched more tightly together and other regions where it is more loosely packed.

These are analogous to the two main types of body waves in seismology. The first (the s-shaped one) is called an S-wave (although the “S” here stands for “shear” or “secondary” and only indicates the shape by coincidence), while the second is called a P-wave (for “pressure” or “primary”).

I couldn’t find a good free version of this, so I had to make it myself. Licensed (like everything I create for my blog) CC-BY-NC-SA v4.0.

 

Earthquakes normally have a mix of P-waves and S-waves, as well as surface waves created by interference between the two. This is because earthquakes are caused by slipping tectonic plates. This slipping gives some lateral motion to the resulting waves. Nuclear explosions lack this side to side motion. The single, sharp impact from them on the surrounding rocks is equivalent to the wave you’d get if you thrust your arm forward while holding a slinky. It’s almost all P-wave and almost no S-wave. This is very distinctive against a background of earthquakes. The CTBTO is kind enough to show what this difference looks like; in this image, the top event is a nuclear test and the bottom event is an earthquake of a similar magnitude in a similar location (I apologize for making you click through to see the image, but I don’t host copyrighted images here).
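A cartoon version of this discriminant is easy to write down: measure the energy in the P and S arrival windows and compare. Real discrimination uses regional calibration, many stations, and spectral ratios rather than a single cutoff; this sketch, with synthetic waveforms and a made-up threshold, just shows the core idea:

```python
import numpy as np

def p_to_s_ratio(waveform, p_window, s_window):
    """RMS amplitude in the P window divided by RMS amplitude in the S window."""
    p, s = waveform[slice(*p_window)], waveform[slice(*s_window)]
    return np.sqrt(np.mean(p**2)) / np.sqrt(np.mean(s**2))

def classify(waveform, p_window, s_window, threshold=2.0):
    # Threshold is illustrative only.
    ratio = p_to_s_ratio(waveform, p_window, s_window)
    return "explosion-like" if ratio > threshold else "earthquake-like"

# Synthetic 60 s records sampled at 100 Hz: P arrives ~10 s in, S ~25 s in.
rng = np.random.default_rng(1)
t = np.linspace(0, 60, 6000)
burst = lambda centre, amp: amp * np.exp(-((t - centre) ** 2)) * rng.normal(size=t.size)
quake = burst(10, 1.0) + burst(25, 3.0)   # strong S-wave
blast = burst(10, 3.0) + burst(25, 0.3)   # almost all P-wave

for name, w in [("earthquake", quake), ("explosion", blast)]:
    print(name, "->", classify(w, (900, 1100), (2400, 2600)))
```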

There’s one further way that the waves from nuclear explosions stand out. They’re caused by a single point source, rather than kilometers of rock. This means that when many seismic stations work together to find the cause of a particular wave, they’re actually able to pinpoint the source of any explosion, rather than finding a broad front like they would for an earthquake.

The fifty IMS stations automatically provide a continuous stream of data to the CTBTO, which sifts through this data for any events that are overwhelmingly P-Waves and have a point source. Further confirmation then comes from the 120 auxiliary stations, which provide data on request. Various national and university seismometer programs get in on this too (probably because it’s good for public relations and therefore helps to justify their budgets), which is why it’s not uncommon to see several estimates of yield soon after seismographs pick up on nuclear tests.

Space Based Monitoring

This is the only type of monitoring that isn’t done by the CTBTO Preparatory Commission, which means that it is handled by state actors – whose interests necessarily veer more towards intelligence gathering than monitoring treaty obligations per se.

The United States began its space-based monitoring program in response to the Partial Test Ban Treaty, which left verification explicitly to the major parties involved. The CTBTO Preparatory Commission was actually formed in response to a different treaty, the Comprehensive Test Ban Treaty, which is not yet fully in force (hence why the organization ensuring compliance with it is called the "Preparatory Commission").

The United States first fulfilled its verification obligations with the Vela satellites, which were equipped with gamma-ray detectors, x-ray detectors, detectors for the electromagnetic pulse given off by high-altitude nuclear detonations, and an optical sensor called a bhangmeter.

Bhangmeters (the name is a reference to a strain of marijuana, with the implied subtext that you'd have to be high to believe they would work) are composed of a photodiode (a device that produces current when illuminated), a timer, and some filtering components. Bhangmeters are set up to look for the distinctive nuclear "double flash", caused when the air compressed in a nuclear blast briefly obscures the central fireball.

The bigger a nuclear explosion, the larger the compression and the longer the central fireball is obscured. The timer picks up on this, estimating nuclear yield from the delay between the initial light and its return.

The bhangmeter works because very few natural (or human) phenomena produce flashes that are as bright or distinctive as nuclear detonations. A properly calibrated bhangmeter will filter out continuous phenomena like lightning (or will find them too faint to detect). Other very bright events, like comets breaking up in the upper atmosphere, only provide a single flash.
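Here's a toy version of that logic: find the maxima in a brightness trace and measure their separation. The signal and constants below are synthetic; real bhangmeters work with calibrated hardware and filtering, not a generic peak finder:

```python
import numpy as np
from scipy.signal import find_peaks

def double_flash_interval(signal, times, prominence=0.3):
    """Return the time between the first two prominent maxima, or None."""
    peaks, _ = find_peaks(signal, prominence=prominence)
    if len(peaks) < 2:
        return None                    # a single flash: lightning, meteors, etc.
    return times[peaks[1]] - times[peaks[0]]

t = np.linspace(0, 1.0, 10_000)        # one second of photodiode samples
flash = lambda centre, width: np.exp(-((t - centre) / width) ** 2)
bomb = flash(0.01, 0.005) + 0.8 * flash(0.3, 0.1)   # prompt flash, dip, re-brightening
lightning = flash(0.05, 0.01)                        # one flash only

print(double_flash_interval(bomb, t))        # ~0.29 s between the two maxima
print(double_flash_interval(lightning, t))   # None
```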

There's only been one possible false positive since the bhangmeters went live in 1967: a double flash was detected in the Southern Indian Ocean, but repeated sorties by the WC-135s detected no radionuclides. The event has never been conclusively proved to be nuclear or non-nuclear in origin and remains one of the great unsolved mysteries of the age of widespread atomic testing.

By the time of this (possible) false positive, the bhangmeters had also detected 41 genuine nuclear tests.

The Vela satellites are no longer in service, but the key technology they carried (bhangmeters, x-ray detectors, and EMP detectors) lives on in the US GPS satellite constellation, which does double duty as its space-based nuclear sentinels.

One last piece of historical errata: when looking into unexplained gamma-ray readings produced by the Vela satellites, US scientists discovered gamma-ray bursts, an energetic astronomical phenomenon associated with supernovas and merging neutron stars.

Hydroacoustic Monitoring

Undersea explosions don’t have a double flash, because steam and turbulence quickly obscure the central fireball and don’t clear until well after the fireball has subsided. It’s true that radionuclide detection should eventually turn up evidence of any undersea nuclear tests, but it’s still useful to have a more immediate detection mechanism. That’s where hydroacoustic monitoring comes in.

There are actually two types of hydroacoustic monitoring. There are six stations that use true underwater monitoring, with triplets of hydrophones (so that signal direction can be determined via triangulation). These are very sensitive, but also very expensive, as hydrophones must be installed at a depth of approximately one kilometer, where sound transmission is best. There are also five land-based stations, which use seismographs on steeply sloped islands to detect the seismic waves underwater sounds make when they hit land. Land-based monitoring is less accurate, but requires little in the way of specialized hardware, making it much cheaper.
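To see how a triplet of hydrophones gives you a direction, here's a minimal plane-wave sketch using time differences of arrival (TDOA). The station geometry and sound speed are illustrative assumptions, not real IMS parameters:

```python
import numpy as np

C = 1480.0                                  # m/s, nominal sound speed in seawater
sensors = np.array([[0.0, 0.0], [2000.0, 0.0], [0.0, 2000.0]])  # hydrophone triplet

def bearing_from_delays(delays):
    """delays[i]: arrival time at sensor i+1 minus arrival at reference sensor 0."""
    baselines = sensors[1:] - sensors[0]
    # Plane wave from unit direction u: delay = -(baseline @ u) / C; solve for u.
    u, *_ = np.linalg.lstsq(baselines, -C * np.asarray(delays), rcond=None)
    return np.degrees(np.arctan2(u[1], u[0])) % 360

# Synthetic check: an event at a bearing of 40 degrees.
u_true = np.array([np.cos(np.radians(40)), np.sin(np.radians(40))])
delays = [-(s - sensors[0]) @ u_true / C for s in sensors[1:]]
print(f"estimated bearing: {bearing_from_delays(delays):.1f} degrees")   # ~40.0
```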

In either case, data is streamed directly to CTBTO headquarters in Vienna, where it is analyzed and forwarded to states that are party to the CTBT. At the CTBTO, the signal is split into different channels based on a known library of undersea sounds, and explosions are separated from natural phenomena (like volcanos, tsunamis, and whales) and man-made noises (like gas exploration, commercial shipping, and military drills). Signal processing and analysis – especially of hydrophone data – is a very mature field, so the CTBTO doesn't lack for techniques to refine its estimates of events.

Infrasound Monitoring

Infrasound monitoring stations are the last part of the global monitoring system and represent the best way for the CTBTO (rather than national governments with the resources to launch satellites) to detect atmospheric nuclear tests. Infrasound stations try to pick up the very low frequency sound waves created by nuclear explosions – and a host of other things, like volcanos, planes, and mining.

A key consideration with infrasound stations is reducing background noise. For this, being far away from human habitation and blocked from the wind is ideal. Whenever this cannot be accomplished (e.g. there’s very little cover from the wind in Antarctica, where several of the sixty stations are), more infrasound arrays are needed.

The components of the infrasound arrays look very weird.

Specifically, they look like a bunker that tried to eat four Ferris wheels. Each array actually contains three to eight of these monstrosities. From the CTBTO via Wikimedia Commons.

 

 

What you see here are a bunch of pipes that all feed through to a central microbarometer, which is what actually measures the infrasound by detecting slight changes in air pressure. This setup filters out a lot of the wind noise and mostly just lets infrasound through.

Like the hydroacoustic monitoring system, data is sent to the CTBTO in real time and analyzed there, presumably drawing on a similar library of recorded nuclear test detonations and employing many of the same signal processing techniques.

Ongoing research into wind noise reduction might eventually make the whole set of stations much more sensitive than it is now. Still, even the current iteration of infrasound monitoring should be enough to detect any nuclear tests in the lower atmosphere.


The CTBTO has a truly great website that really helped me put together this blog post. They provide a basic overview of the four international monitoring systems I described here (they don’t cover space-based monitoring because it’s outside of their remit), as well as pictures, a glossary, and a primer on the analysis they do. If you’d like to read more about how the international monitoring system works and how it came into being, I recommend visiting their website.

This post, like many of the posts in my nuclear weapon series came about because someone asked me a question about nuclear weapons and I found I couldn’t answer quite as authoritatively as I would have liked. Consequently, I’d like to thank Cody Wild and Tessa Alexanian for giving me the impetus to write this.

This post is part of a series on special topics in nuclear weapons. The index for all of my writing on nuclear weapons can be found here. Previous special topics posts include laser enrichment and the North Korean nuclear program.

History, Quick Fix

Against Historical Narratives

There is perhaps no temptation greater to the amateur (or professional) historian than to take a set of historical facts and draw from them a grand narrative. This tradition has existed at least since Gibbon wrote The History of the Decline and Fall of the Roman Empire, with its focus on declining civic virtue and the rise of Christianity.

Obviously, it is true that things in history happen for a reason. But I think the case is much less clear that these reasons can be marshalled like soldiers and made to march in neat lines across the centuries. What is true in one time and place may not necessarily be true in another. When you fall under the sway of a grand narrative, when you believe that everything happens for a reason, you may become tempted to ignore all of the evidence to the contrary.

Instead of praying at the altar of grand narratives, I'd like to suggest that you embrace the ambiguity of history, an ambiguity that exists because…

Context Is Tricky

Here are six sentences someone could tell you about their interaction with the sharing economy:

  • I stayed at an Uber last night
  • I took an AirBnB to the mall
  • I deliberately took an Uber
  • I deliberately took a Lyft
  • I deliberately took a taxi
  • I can’t remember which ride-hailing app I used

Each of these sentences has an overt meaning. They describe how someone spent a night or got from place A to place B. They also have a deeper meaning, a meaning that only makes sense in the current context. Imagine your friend told you that they deliberately took an Uber. What does it say about them that they deliberately took a ride in the most embattled and controversial ridesharing platform? How would you expect their political views to differ from someone who told you they deliberately took a taxi?

Even simple statements carry a lot of hidden context, context that is necessary for full understanding.

Do you know what the equivalent statements to the six I listed would be in China? How about in Saudi Arabia? I can tell you that I don’t know either. Of course, it isn’t particularly hard to find these out for China (or Saudi Arabia). You may not find a key written down anywhere (especially if you can only read English), but all you have to do is ask someone from either country and they could quickly give you a set of contextual equivalents.

Luckily historians can do the same… oh. Oh damn.

When you’re dealing with the history of a civilization that “ended” hundreds or thousands of years ago, you’re going to be dealing with cultural context that you don’t fully understand. Sometimes people are helpful enough to write down “Uber=kind of evil” and “supporting taxis = very left wing, probably vegan & goes to protests”. A lot of the time they don’t though, because that’s all obvious cultural context that anyone they’re writing to would obviously have.

And sometimes they do write down even the obvious stuff, only for it all to get burned when barbarians sack their city, leaving us with no real way to understand if a sentence like “the opposing orator wore red” has any sort of meaning beyond a statement of sartorial critique or not.

All of this is to say that context can make or break narratives. Look at the play "Hamilton". It's a play aimed at urban progressives. The titular character's strong anti-slavery views are supposed to code to a modern audience that he's on the same political team as them. But if you look at American history, it turns out that support for abolishing slavery (and later, abolishing segregation) and support for big corporations over the "little guy" were correlated until very recently. In the 1960s through 1990s, there was a shift such that the Democrats came to stand for both civil rights and supporting poorer Americans, instead of just the latter. Before this shift, Democrats were the party of segregation, not that you'd know it to see them today.

Trying to tie Hamilton into a grander narrative of (eventual) progressive triumph erases the fact that most of the modern audience would strenuously disagree with his economic views (aside from urban neo-liberals, who are very much in Hamilton’s mold). Audiences end up leaving the play with a story about their own intellectual lineage that is far from correct, a story that may cause them to feel smugly superior to people of other political stripes.

History optimized for this sort of team or political effect turns many modern historians or history writers into…

Unreliable Narrators

Gaps in context (modern readers missing the true significance of gestures, words, and acts steeped in a particular extinct culture), combined with the fact that it is often impossible to really know why someone in the past did something, mean that some of history is always going to be filled in with our best guesses.

Professor Mary Beard really drove this point home for me in her book SPQR. She showed me how history that I thought was solid was often made up of myths, exaggerations, and wishful thinking on the part of modern authors. We know much less about Rome than many historians had led me to believe, probably because any nuance or alternative explanation would ruin their grand theories.

When it comes to so much of the past, we genuinely don’t know why things happened.

I recently heard two colleagues arguing about The Great Divergence – the unexplained difference in growth rates between Europe and the rest of the world that became apparent in the 1700s and 1800s. One was very confident that it could be explained by access to coal. The other was just as confident that it could be explained by differences in property rights.

I waded in and pointed out that Wikipedia lists fifteen possible explanations, all of which (or none of which) could be true. Confidence about the cause of The Great Divergence seems to me a very silly thing. We cannot rerun history to test any one factor, so every theory is effectively unfalsifiable.

But both of my colleagues had read narrative accounts of history. And these narrative accounts had agendas. One wished to show that all peoples had the same inherent abilities and so cast The Great Divergence as chance. The other wanted to show how important property rights are and so made those the central factor in it. Neither gave much time to the other explanation, or to any of the thirteen others that a well-trafficked and heavily edited Wikipedia article finds equally credible.

Neither agenda was bad here. I am in fact broadly in favour of both. Yet their effect was to give two otherwise intelligent and well-read people a myopic view of history.

So much of narrative history is like this! Authors take the possibilities they like best, or that support their political beliefs the best, or that they think will sell best, and write them down as if they were the only possibilities. Anyone who is unlucky enough to read such an account will be left with a false sense of certainty – and in ignorance of all the other options.


Of course, I have an agenda too. We all do. It’s just that my agenda is literally “the truth resists simplicity”. I like the messiness of history. It fits my aesthetic sense well. It’s because of this sense that I’d like to encourage everyone to make their next foray into history free of narratives. Use Wikipedia or a textbook instead of a bestselling book. Read something by Mary Beard, who writes as much about historiography as she writes about history. Whatever you do, avoid books with blurbs praising the author for their “controversial” or “insightful” new theory.

Leave behind, just once, those famous narrative works like “Guns, Germs, and Steel” or “The History of the Decline and Fall of the Roman Empire” and pick up something that embraces ambiguity and doesn’t bury messiness beneath a simple agenda.

Economics, Politics

When To Worry About Public Debt

I watch a lot of political debates with my friends. A couple of them have turned to me after watching heated arguments about public debt and (because I have a well-known habit of reading monetary policy blogs) asked me who is right. I hear questions like:

Is it true that public debt represents an unfair burden on our hypothetical grandchildren? Is all this talk about fiscal discipline and balanced budgets pointless? Is it really bad when public debt gets over 100% of a country’s GDP? How can the threat of defaulting on loans lead to inflation and ruin?

And what does all this mean for Ontario? Is Doug Ford right about the deficit?

This is my attempt to sort this all out in a public and durable form. Now when I’ve taken a political debate drinking game too far, I’ll still be able to point people towards the answers to their questions.

(Disclaimer: I’m not an economist. Despite the research I did and the care with which I edited it, this post may contain errors, oversimplifications, or misunderstandings.)

Is Public Debt A Burden On Future Generations?

Among politicians of a certain stripe, it’s common to compare the budget of a country to the budget of a family. When a family is budgeting, any shortfall must be paid for via loans. Left unspoken is the fact that many families find themselves in a rather large amount of debt early on – because they need a mortgage to buy their dwelling. The only way a family can ever get out of debt is by maintaining a monthly surplus until their mortgage is paid off, then being careful to avoid taking on too much new debt.

Becoming debt free is desirable to individuals for two reasons. First, it makes their retirement (feel) much more secure. Given that retirement generally means switching to a fixed income or living off savings, it can be impossible to pay off the principal of a debt after someone makes the decision to retire.

Second, parents often desire to leave something behind for their children. This is only possible if their assets outweigh their debts.

Countries have to grapple with neither of these responsibilities. While it is true that the average age in many countries is steadily increasing, countries that have relatively open immigration policies and are attractive to immigrants largely avoid this problem. Look at how Canada and the United States compare to Italy and Japan in working age population percentage, for example.

[Graph: working-age population as a percentage of total population in four OECD countries: Japan, Canada, the USA, and Italy.]
After seeing this graph, I realized how hyperbolic it was to talk about Japan’s aging population. Source: OECD.

Even in Japan, where this is “dire”, the percentage of the population that is working age is equivalent to the percentage of the population that was working age in Canada or America in 1970. As lifespans increase, we may have to expand our definition of working age. But some combination of immigration, better support for parents, and better support for older citizens who wish to keep working will prevent us from ever getting to a point where it’s sensible to talk about a country “retiring”.

Since countries don’t “retire”, they don’t have to cope with the worry of “needing to work later to pay off that debt”. Since countries don’t have children, they don’t have to worry about having something to pass on. Countries don’t ever actually have to pay back all of their debt. They can continue to roll it over indefinitely, as long as someone is willing to continue to loan them money at a rate they’re willing to pay.

What I mean by “rolling over” is that countries can just get a new loan for the same amount as their last one, as soon as the previous loan comes due. If interest rates have risen (either in general, or because the country has become a greater risk) since their last loan, the new loan will be more expensive. If they’ve fallen, it will be cheaper. Rolling over loans changes the interest rate a country is paying, but doesn’t change the amount it owes.
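If it helps to see the mechanics, here’s a quick Python sketch of rolling over a single loan. The rates are invented for illustration; nothing here comes from real bond data:

```python
# Rolling over a loan: the principal never shrinks, only the cost of
# carrying it changes with whatever rate is on offer at each renewal.

principal = 100_000_000  # face value being rolled over (hypothetical)
rates_at_rollover = [0.02, 0.025, 0.015, 0.03]  # hypothetical market rates

for year, rate in enumerate(rates_at_rollover, start=1):
    interest_payment = principal * rate
    print(f"Rollover {year}: still owe {principal:,}, "
          f"pay {interest_payment:,.0f} in interest at {rate:.1%}")

# Note that the principal is never reduced; the old loan is simply
# replaced with a new one of the same size.
```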

Is Talk Of Discipline Pointless?

No.

Even if countries don’t really ever have to pay back the principal on their loans, they do have to make interest payments (borrowing to pay these is possible, but it isn’t a good look and can pretty quickly lead to dangerous levels of debt). The effect of these payments ranges from “it’s mildly annoying that we can’t spend that money on something better” to “we’re destroying our ecosystem growing bananas so that we have something to sell for cash to make our interest payments”. Lack of discipline and excessive debt levels can move a country closer to the second case.

In a well-integrated and otherwise successful economy with ample room in its governmental budget, interest payments are well worth the advantage of getting money early. When this money is used to create economic benefits that accrue faster than the interest payments, countries are net beneficiaries. If you take out a loan that charges 1-2% interest a year and use it to build a bridge that drives 4% economic growth for the next forty years, you’re ahead by 2-3% year on year. This is a good deal.
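The arithmetic behind that claim is simple enough to sketch in a few lines of Python (the figures are the toy ones from the paragraph above, not real project numbers):

```python
# Borrow cheaply, build something that pays off faster than the interest
# accrues, and you come out ahead every year.

loan = 1_000_000       # cost of the hypothetical bridge
interest_rate = 0.015  # what the government pays to borrow (1.5%)
benefit_rate = 0.04    # extra annual output the bridge enables (4%)
years = 40

annual_interest = loan * interest_rate
annual_benefit = loan * benefit_rate

print(f"Interest paid each year: {annual_interest:,.0f}")
print(f"Extra output each year:  {annual_benefit:,.0f}")
print(f"Net gain over {years} years: "
      f"{(annual_benefit - annual_interest) * years:,.0f} "
      f"({benefit_rate - interest_rate:.1%} of the loan, year on year)")
```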

Unlike most talk about interest rates, where the numbers are entirely hypothetical, I really do mean that 1-2% figure. It’s actually higher than the average rate the US government has been paying to borrow over the last decade (Germany had it even better; it briefly paid negative interest rates). Governments – at least those with a relatively good track record around money – really do have a superpower in how cheaply they can get money. So if nothing else, it’s worth keeping debt relatively low so that they keep their reputation for responsibility and continue to have access to cheap money for when they really need it.

That’s the case in a moderately disciplined developed nation with adequate foreign reserves, at least. In a cash-poor or underdeveloped economy where a decent portion of any loan is lost to cronyism and waste, the case for loans being positive is much more… mixed. For these countries, discipline means “taking no loans at all”.

When discipline falls apart and debt levels rise too high, very bad things start to happen.

Is 100% of GDP The Line Beyond Which Debt Shouldn’t Rise?

There is nothing special about 100% of GDP, except that people think it is special.

Sometimes, people talk about markets like they’re these big impersonal systems that have no human input. This feels true because the scale of the global financial system is such that from the perspective of pretty much any individual person, they’re impersonal and impossible to really influence. But ultimately, other than a few high frequency trading platforms, all decisions in a market have to be made by humans.

Humans have decided that in certain cases, it’s bad when a country has more than 100% of its GDP in debt. This means that it becomes much more expensive to get new loans (and because of the constant rollover, even old loans eventually become new loans) when a country crosses this Rubicon, which in turn makes them much more likely to default. There’s some element of self-fulfilling prophecy here!

(Obviously there does have to be some point where a country really is at risk from its debt load and obviously this needs to be scaled to country size and wealth to not be useless. I think people have chosen 100% of GDP more because it’s a nice round number and it’s simple to calculate, not because it has particularly great inherent predictive power, absent the power it has as a self-fulfilling prophecy. Maybe the “objectively correct” number is in fact 132.7% of the value of all exports, or 198% of 5-year average government revenues… In either case, we’ve kind of lost our chance; any number calculated now would be heavily biased by the crisis of confidence that can happen when debt reaches 100% of GDP.)

That said, comparing a country’s debt load to its GDP without making adjustments is a recipe for confusion. While everyone was fretting about Greece having ~125% of its GDP in debt, Japan was carrying 238% of its GDP in debt.

There are two reasons that Japan’s debt is much less worrying than Greece’s.

First, there’s the issue of who’s holding that debt. A very large portion of Japanese debt is held by its own central bank. By my calculations (based off the most recent BOJ numbers), the Bank of Japan is holding approximately 44% of the Japanese government’s debt. Given that the Bank of Japan is an organ of the Japanese Government (albeit an arm’s length one), this debt is kind of owed by the government of Japan, to the government of Japan. When 44% of every loan payment might ultimately find its way back to you, your loan payments become less scary.
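For transparency, the back-of-envelope here is just central bank holdings divided by total government debt outstanding. The two inputs below are placeholders standing in for the BOJ figures I used, not the actual numbers:

```python
# Rough share of government debt effectively owed to the government itself.
boj_jgb_holdings = 450  # trillion yen, placeholder value
total_gov_debt = 1_030  # trillion yen, placeholder value

share = boj_jgb_holdings / total_gov_debt
print(f"Share of government debt held by the central bank: {share:.0%}")  # ~44%
```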

Second, there’s the issue of denomination. Greek public debts are denominated in Euros, a currency that Greece doesn’t control. If Greece wants €100, it must collect €100 in taxes from its citizens. Greece cannot just create Euros.

Japanese debt is denominated in Yen. Because Japan controls the yen, it has two options for repaying ¥100 of debt. It can collect ¥100 in taxes – representing ¥100 worth of valuable work. Or it can print ¥100. There are obvious consequences to printing money, namely inflation. But given that Japan has struggled with chronic deflation and has consistently underperformed the inflation targets economists think it needs to meet, it’s clear that a bit of inflation isn’t the worst thing that could happen to it.

When evaluating whether a debt burden is a problem, you should always consider the denomination of the debt, who the debtholders are, and how much inflation a country can tolerate. It is always worse to hold debt in a denomination that you don’t control. It’s always worse to owe money to people who aren’t you (especially people more powerful than you), and it’s always easier to answer debt with inflation when your economy needs more inflation anyways.

This also suggests that government debt is much more troubling when it’s held by a sub-national institution than by a national institution (with the exception of Europe, where even nations don’t individually control the currency). In this case, monetary policy options are off the table and there’s usually someone who’s able to force you to pay your debt, no matter what that does to your region.

Developing countries very rarely issue debt in their own currency, mainly because no one is interested in buying it. This, combined with low foreign cash reserves, puts them at a much higher risk of failing to make scheduled debt payments – i.e. experiencing an actual default.

What Happens If A Country Defaults?

No two defaults are exactly alike, so the consequences vary. That said, there do tend to be two common features: austerity and inflation.

Austerity happens for a variety of reasons. Perhaps spending levels were predicated on access to credit. Without that access, they can’t be maintained. Or perhaps a higher body mandated it; see for example Germany (well, officially, the EU) mandating austerity in Greece, or Michigan mandating austerity in Detroit.

Inflation also occurs for a variety of reasons. Perhaps the government tries to fill a budgetary shortfall and avoid austerity by printing bills. This flood of money bids up prices, ruins savings and causes real wages to decline. Perhaps it becomes hard to convince foreigners to accept the local currency in exchange for goods, so anything imported becomes very expensive. When many goods are imported, this can lead to very rapid inflation. Perhaps people in general lose faith in money (and so it becomes nearly worthless), maybe in conjunction with the debt crisis expanding to the financial sector and banks subsequently failing. Most likely, it will be some combination of these three, as well as others I haven’t thought to mention.

During a default, it’s common to see standards of living plummet, life savings disappear, and money flee into foreign denominations, promptly followed by currency controls that prohibit sending cash outside of the country. Currency controls make leaving the country virtually impossible and make any necessary imports a bureaucratic headache. This is fine when the imports in question are water slides, but very bad when they’re chemotherapy drugs or rice.

On the kind-of-bright side, defaults also tend to lead to mass unemployment, which gives countries experiencing them a comparative advantage in any labour-intensive industry. People commonly say “wages are low, so manufacturing moves there”, but that isn’t quite how international trade works. It’s not so much low wages that basic manufacturing jobs go in search of, but a workforce that can’t do anything more productive and less labour intensive. This looks the same, but the causation runs the other way. In either case, this influx of manufacturing jobs can contain within it the seed of later recovery.

If a country has sound economic management (like Argentina did in 2001), a default isn’t the end of the world. It can negotiate a “haircut” of its loans, giving its creditors something less than the full amount, but more than nothing. It might even be able to borrow again in a few years, although the rates that it will have to offer will start out in credit card territory and only slowly recover towards auto-loan territory.

When these trends aren’t managed by competent leadership, or when the same leaders (or leadership culture) that got a country into a mess are allowed to continue, the recovery tends to be moribund and the crises continual. See, for example, how Greece has limped along, never really recovering over the past decade.

Where Does Ontario Fit In?

My own home province of Ontario is currently in the midst of an election and one candidate, Doug Ford, has made the ballooning public debt the centrepiece of his campaign. Evaluating his claims gives us a practical example of how to evaluate claims of this sort in general.

First, Ontario doesn’t control the currency that its debt is issued in, which is an immediate risk factor for serious debt problems. Ontario also isn’t dominant enough within Canada to dictate monetary policy to the Federal Government. Inflation for the sake of saving Ontario would doom any sitting Federal government in every other province, so we can’t expect any help from the central bank.

Debt relief from the Federal government is possible, but it wouldn’t come without strings attached. We’d definitely lose some of our budgetary authority, certainly face austerity, and even then, it might be too politically unpalatable to the rest of the country.

However, the sky is not currently falling. While debt rating services have lost some confidence in our willingness, if not our ability, to get spending under control, and our borrowing costs have consequently risen, we’re not yet in a vicious downwards spiral. Our debt sits at a not actively unhealthy 39% of GDP and the interest rate on it is a non-usurious 4%.

That said, the debt has grown more quickly than the economy over the past decade. Another decade like the last one would put us at risk of a vicious cycle of rising interest rates and crippling debt.
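To make that risk concrete, here’s a rough projection in Python. The growth rates are assumptions picked to illustrate the dynamic, not Ontario’s actual fiscal trajectory:

```python
# If debt compounds faster than the economy, the debt-to-GDP ratio climbs
# steadily, with no single dramatic event required.

debt_to_gdp = 0.39   # starting point, per the figure above
debt_growth = 0.055  # assumed annual growth in debt (deficit + interest)
gdp_growth = 0.035   # assumed annual nominal GDP growth

for year in range(1, 11):
    debt_to_gdp *= (1 + debt_growth) / (1 + gdp_growth)
    print(f"Year {year}: debt at {debt_to_gdp:.0%} of GDP")
```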

Doug Ford’s emotional appeals about mortgaging our grandchildren’s future are exaggerated and false. I’ve already explained how countries don’t work like families. But there is a more pragmatic concern here. If we don’t control our spending now, on our terms, someone else – be it lenders in a default or the federal government in a bailout – will do it for us.

Imagine the courts forcing Ontario to service its debt before paying for social services and schools. Imagine the debt eating up a full quarter of the budget, with costs rising every time a loan is rolled over. Imagine our public services cut to the bone and our government paralyzed without workers. Things would get bad and the people who most need a helping hand from the government would be hit the hardest.

I plan to take this threat seriously and vote for a party with a credible plan to balance our budget in the short term.

If one even exists. Contrary to his protestations, Doug Ford isn’t leading a party committed to reducing the deficit. He’s publicly pledged himself to scrapping the carbon tax. Absent that tax, but with the rest of his platform intact, the deficit spending is going to continue (during a period of sustained growth, no less!). Doug Ford is either lying about what he’s going to cut, or he’s lying about ending the debt. That’s not a gamble I particularly want to take.

I do hope that someone campaigns on a fully costed plan to restore fiscal order to Ontario. Because we are currently on the path to looking a lot like Greece.

Model, Politics, Quick Fix

The Awkward Dynamics of the Conservative Leadership Debates

Tanya Granic Allen is the most idealistic candidate I’ve ever seen take the stage in a Canadian political debate. This presents some awkward challenges for the candidates facing her, especially Mulroney and Elliot.

First, there’s the simple fact of her idealism. I think Granic Allen genuinely believes everything she says. For her, knowing what’s right and what’s wrong is simple. There isn’t a whole lot of grey. She even (bless her) probably believes that this will be an advantage come election time. People overwhelmingly don’t like the equivocation of politicians, so Granic Allen must assume her unequivocal moral stances will be a welcome change.

For many people, it must be. Even for those who find it grating, it seems almost vulgar to attack her. It’s clear that she isn’t in this for herself and doesn’t really care about personal power. Whether she could maintain that innocence in the face of the very real need to make political compromises remains an open question, but for now she does represent a certain vein of ideological conservatism in a form that is unsullied by concerns around electability.

The problem here is that the stuff Granic Allen is pushing – “conscience rights” and “parental choice” – is exactly the sort of thing that can mobilize opposition to the PC party. Fighting against sex-ed and abortion might play well with the base, but Elliot and Mulroney know that unbridled social conservatism is one of the few things that can force the province’s small-l liberals to hold their noses and vote for the big-L Liberal Party. In an election where we can expect embarrassingly low turnout (it was 52% in 2014), this can play a major role.

A less idealistic candidate would temper themselves to help the party in the election. Granic Allen has no interest in doing this, which basically forces the pragmatists to navigate the tricky act of distancing themselves from her popular (with the base) proposals so that they might carry the general election.

Second, there’s the difficult interaction between the anti-rational and anti-empirical “common sense” conservatism pushed by Granic Allen and Ford and the pragmatic, informed conservatism of Elliot and Mulroney.

For Ford and Granic Allen, there’s a moral nature to truth. They live in a just world where something being good is enough to make it true. Mulroney and Elliot know that reality has an anti-partisan bias.

Take clean energy contracts. Elliot quite correctly pointed out that ripping up contracts willy-nilly will lead to a terrible business climate in Ontario. This is the sort of suggestion we normally see from the hard left (and have seen in practice in places the hard left idolizes, like Venezuela). But Granic Allen is committed to a certain vision of the world and in her vision of the world, government getting out of the way can’t help but be good.

Christine Elliot has (and this is a credit to her) shown that she’s not very ideological, in that she can learn how the world really works and subordinate ideology to truth, even when inconvenient. This would make her a more effective premier than either Granic Allen or Ford, but might hurt her in the leadership race. I’ve seen her freeze a couple times when she’s faced with defending how the world really works to an audience that is ideologically prevented from acknowledging the truth.

(See for example, the look on her face when she was forced to defend her vote to ban conversion therapy. Elliot’s real defense of that bill probably involves phrases like “stuck in the past”, “ignorant quacks” and “vulnerable children who need to be protected from people like you”. But she knew that a full-throated defense of gender dysphoria as a legitimate problem wouldn’t win her any votes in this race.)

As Joseph Heath has pointed out, this tension between reality and ideology is responsible for the underrepresentation of modern conservatives among academics. Since the purpose of the academy is (broadly) truth-seeking, we shouldn’t be surprised to see it select against an ideology that explicitly rejects not only many of the products of that truth-seeking (see, for example, Granic Allen’s inability to clearly state that humans are causing climate change) but the worthwhileness of the whole endeavour.

When everything is trivially knowable via the proper application of “common-sense”, there’s no point in thinking deeply. There’s no point in experts. You just figure out what’s right and you do it. Anything else just confuses the matter and leaves the “little guy” to get shafted by the elites.

Third, the carbon tax has produced a stark, unvoiced split between the candidates. On paper, all are opposing it. In reality, only Ford and Granic Allen seriously believe they have any chance at stopping it. I’m fairly sure that Elliot and Mulroney plan to mount a token opposition, then quickly fold when they’re reminded that raising taxes and giving money to provinces is a thing the Federal Government is allowed to do. This means that they’re counting on money from the carbon tax to balance their budget proposals. They can’t say this, because Ford and Granic Allen are forcing them to the right here, but I would bet that they’re privately using it to reassure fiscally conservative donors about the deficit.

Being unable to discuss what is actually the centrepiece of their financial plans leaves Elliot and Mulroney unable to give very good information about how they plan to balance the budget. They have to fall back on empty phrases like “line by line audit” and “efficiencies”, because anything else feels like political suicide.

This shows just how effective Granic Allen has been at being a voice for the grassroots. By staking out positions that resonate with the base, she’s forcing other leadership contestants to endorse them or risk losing to her. Note especially how she’s been extracting promises from Elliot and Mulroney whenever possible – normally around things she knows they don’t want to agree to but that play well with the base. By doing this, she hopes to remove much of their room to maneuver in the general election and prevent any big pivot to centre.

Whether this will work really depends on how costly politicians find breaking promises. Conventional wisdom holds that they aren’t particularly bothered by it. I wonder if Granic Allen’s idealism blinds her to this fact. I’m quite sure that she wouldn’t break a promise except under the greatest duress.

On the left, it’s very common to see a view of politics that emphasizes pure and moral people. The problem with the system, says the communist, is that we let greedy people run it. If we just replaced them all with better people, we’d get a fair society. Granic Allen is certainly no communist. But she does seem to believe in the “just need good people” theory of government – and whether she wins or loses, she’s determined to bring all the other candidates with her.

This isn’t an incrementalist approach, which is why it feels so foreign to people like me. Granic Allen seems to be making the decision that she’d rather the Conservatives lose (again!) to the Liberals than that they win without a firm commitment to do things differently.

The conflict in the Ontario Conservative party – the conflict that surfaced when Patrick Brown’s rivals torpedoed him – is about how far the party is willing to go to win. The Ontario Conservatives aren’t the first party to go through this. When UK Labour members picked Jeremy Corbyn, they clearly put ideological purity ahead of electability.

In the Ontario PC party, Granic Allen and Ford have clearly staked out a position emphasizing purity. Mulroney and Elliot have just as clearly chosen to emphasize success. Now it’s up to the members. I’m very interested to see what they decide.

Economics, Model, Quick Fix

Not Just Zoning: Housing Prices Driven By Beauty Contests

No, this isn’t a post about very pretty houses or positional goods. It’s about the type of beauty contest described by John Maynard Keynes.

Imagine a newspaper that publishes one hundred pictures of strapping young men. It asks everyone to send in the names of the five that they think are most attractive. They offer a prize: if your selection matches the five men most often appearing in everyone else’s selections, you’ll win $500.

You could just do what the newspaper asked and send in the names of those men that you think are especially good looking. But that’s not very likely to give you the win. Everyone’s tastes are different and the people you find attractive might not be very attractive to anyone else. If you’re playing the game a bit smarter, you’ll instead pick the five people that you think have the broadest appeal.

You could go even deeper and realize that many other people will be trying to win and so will also be trying to pick the most broadly appealing people. Therefore, you should pick people that you think most people will view as broadly appealing (which differs from picking broadly appealing people if you know something about what most people find attractive that isn’t widely known). This can go on indefinitely (although Yudkowsky’s Law of Ultrafinite Recursion states that “In practice, infinite recursions are at most three levels deep”, which gives me a convenient excuse to stop before this devolves into “I know you know I know that you know that…” ad infinitum).
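If you want to see the logic in motion, here’s a toy simulation in Python. Everything about it (the shared “conventional attractiveness” score, the taste weights) is invented to illustrate the point, not drawn from Keynes:

```python
# Keynesian beauty contest, toy version: submitting your own favourites
# (level 0) loses to predicting what everyone else will submit (level 1).

import random
from collections import Counter

random.seed(0)
n_faces, n_players = 100, 1000

# Tastes are private, but weakly correlated through a shared notion of
# "conventional attractiveness" that every player can observe.
convention = {face: random.random() for face in range(n_faces)}

def submission(taste_weight=0.5):
    """A level-0 ballot: five favourites, blending private taste and convention."""
    scores = {face: taste_weight * random.random()
                    + (1 - taste_weight) * convention[face]
              for face in range(n_faces)}
    return sorted(scores, key=scores.get, reverse=True)[:5]

# Tally everyone's level-0 ballots to find the actual winning faces.
ballots = Counter(face for _ in range(n_players) for face in submission())
winners = {face for face, _ in ballots.most_common(5)}

# A level-1 player ignores their own taste entirely and bets on the convention.
level1_guess = set(sorted(convention, key=convention.get, reverse=True)[:5])

print("Level-1 guess matches", len(winners & level1_guess), "of the 5 winners")
```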

This thought experiment was relevant to an economist because many assets work like this. Take gold: its value cannot be fully explained by its prettiness or industrial usefulness; some of its value comes from the belief that someone else will want it in the future and be willing to pay more for it than they would for a similarly useful or pretty metal. For whatever reason, we have a collective delusion that gold is especially valuable. Because this delusion is collective enough, it almost stops being a delusion. The delusion gives gold some of its value.

When it comes to houses, beauty contests are especially relevant in Toronto and Vancouver. Faced with many years of steadily rising house prices, people are willing to pay a lot for a house because they believe that they can unload it on someone else in a few years or decades for even more.

When talking about highly speculative assets (like Bitcoin), it’s easy to point out the limited intrinsic value they hold. Bitcoin is an almost pure Keynesian Beauty Contest asset, with most of its price coming from an expectation that someone else will want it at a comparable or better price in the future. Houses are obviously fairly intrinsically valuable, especially in very desirable cities. But holding some intrinsic value doesn’t rule out that part of their price comes from beliefs about how much they can be unloaded for in the future – see again gold, which has value both as an article of commerce and as a beauty contest asset.

There’s obviously an element of self-fulfilling prophecy here, with steadily increasing house prices needed to sustain this myth. Unfortunately, the housing market seems especially vulnerable to this sort of collective mania, because the sunk cost fallacy makes many people unwilling to sell their houses at a price below what they paid. Any softening of the market removes sellers, which immediately drives prices back up. Only a massive liquidation event, like we saw in 2007-2009, can push enough supply into the market to make prices truly fall.
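That seller-withdrawal mechanism is easy to caricature in code. In this Python toy (every number here is invented), owners simply refuse to list below what they paid, so a dip in the market price shrinks supply and the price climbs back:

```python
# Sunk-cost sellers: a price dip removes listings, and scarcity pushes
# the price back toward where owners bought in.

import random
random.seed(1)

purchase_prices = [random.uniform(400_000, 600_000) for _ in range(1000)]
market_price = 460_000  # start the market in a dip

for step in range(8):
    # only owners who wouldn't take a loss are willing to list
    sellers = sum(1 for paid in purchase_prices if paid <= market_price)
    # crude adjustment: 500 listings is "balanced"; fewer bids prices up
    market_price *= 1 + 0.02 * (500 - sellers) / 500
    print(f"Step {step}: {sellers} willing sellers, price {market_price:,.0f}")
```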

But this isn’t just a self-fulfilling prophecy. There’s deliberateness here as well. To some extent, public policy is used to guarantee that house prices continue to rise. NIMBY residents and their allies in city councils deliberately stall projects that might affect property values. Governments provide tax credits or access to tax-advantaged savings accounts for homes. In America, mortgage interest is even tax-deductible!

All of these programs ultimately make housing more expensive wherever supply cannot expand to meet the artificially increased demand – which basically describes any dense urban centre. Therefore, these home buying programs fail to accomplish their goal of making houses more affordable, but do serve to guarantee that housing prices will continue to go up. Ultimately, they really just represent a transfer of wealth from taxpayers generally to those specific people who own homes.

Unfortunately, programs like this are very sticky. Once people buy into the collective delusion that home prices must always go up, they’re willing to heavily leverage themselves to buy a home. Any dip in the price of homes can wipe out the value of this asset, making it worth less than the money owed on it. Since this tends to make voters very angry (and also leaves many people with no money), governments of all stripes are very motivated to avoid it.

This might imply that the smart thing is to buy into the collective notion that home prices always go up. There are so many people invested in this belief at all levels of society (banks, governments, and citizens) that it can feel like home prices are too important to fall.

Which would be entirely convincing, except I’m pretty sure people believed the same thing in 2007, and we all know how that ended. Unfortunately, it looks like there’s no safe answer here. Maybe the collective mania will abate and home prices will stop being buoyed ever upwards. Or maybe they won’t, and the prices we currently see in Toronto and Vancouver will be reckoned cheap in twenty years.

Better zoning laws can help make houses cheaper. But it really isn’t just zoning. The beauty contest is an important aspect of the current unaffordability.