Economics, History, Politics

A Cross of Gold: The Best Speech You’ve Never Heard

Friends, lend me your ears.

I write today about a speech that was once considered the greatest political speech in American history. Even today, after Reagan, Obama, Eisenhower, and King, it is counted among the very best. And yet this speech has passed from the history we have learned. Its speaker failed in his ambitions and the cause he championed is so archaic that most people wouldn’t even understand it.

I speak of William Jennings Bryan’s “Cross of Gold” speech.

William Jennings Bryan was a congressman from Nebraska, a lawyer, a three-time Democratic candidate for president (1896, 1900, 1908), the 41st Secretary of State, and oddly enough, the lawyer for the prosecution at the Scopes Monkey Trial. He was also a “silver Democrat”, one of the insurgents who rose to challenge Democratic President Grover Cleveland and the Democratic party establishment over their support for gold over a bimetallic (gold plus silver) currency system.

The dispute over bimetallic currency is now more than a hundred years old and has been made entirely moot by the floating US dollar and the post-Bretton Woods international monetary order. Still, it’s worth understanding the debate about bimetallism, because the concerns Bryan’s speech raised are still concerns today. Once you understand why Bryan argued for what he did, this speech transforms from dusty history into still-relevant insights into live issues that our political process still struggles to address.

When Alexander Hamilton was setting up a currency system for the United States, he decided that there would be a bimetallic standard. Both gold and silver currency would be issued by the mint, with the US Dollar specified in terms of both metals. Any citizen could bring gold or silver to the mint and have it struck into coins (for a small fee, which covered operating costs).

Despite congressional attempts to tweak the ratio between the metals, problems often emerged. Whenever gold was worth more by weight than it was as currency, it would be bought using silver and melted down for profit. Whenever the silver dollar was similarly undervalued, the same thing happened to it. By 1847, the silver in coins was worth so much more as bullion than as money that silver coinage had virtually disappeared from circulation and many people found themselves unable to complete low-value transactions.
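The arbitrage driving coins out of circulation can be sketched in a few lines. This is a minimal illustration, not a historical model: the 16:1 mint ratio is roughly the post-1834 statutory ratio, but the market ratios below are made up for the example.

```python
# Sketch of the bimetallic arbitrage (Gresham's law: "bad money drives out good").
# The 16:1 mint ratio is roughly historical; the market ratios are illustrative.

MINT_RATIO = 16.0  # ounces of silver the mint treats as equal to one ounce of gold

def melt_profit_per_gold_ounce(market_ratio: float) -> float:
    """Profit, in ounces of silver, from buying one ounce of gold coin with
    silver at the mint ratio and selling that gold as bullion at the market
    ratio. A positive result means gold coins get melted and vanish;
    a negative result means silver coins vanish instead."""
    return market_ratio - MINT_RATio if False else market_ratio - MINT_RATIO

# If the market values gold at 17 ounces of silver per ounce, each gold coin
# bought with silver at 16:1 yields an ounce of silver in profit:
print(melt_profit_per_gold_ounce(17.0))  # gold disappears from circulation
print(melt_profit_per_gold_ounce(15.0))  # silver disappears instead
```

Whichever metal is undervalued at the mint leaks out of circulation, which is why Congress kept having to chase the market ratio.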

Congress responded by debasing silver coins, which increased the supply of coins. For a brief time, there was a stable equilibrium where people actually could find and use silver coins. Unfortunately, the equilibrium didn’t last: the discovery of new silver deposits swung things in the opposite direction, leading to fears that people would use silver to buy gold dollars and melt them down outside the country. Since international trade was conducted in gold, it would have been very bad for America had all the gold coins disappeared.

Congress again responded, this time by burying the demonetization of several silver coins (including the silver dollar) in a bill that was meant to modernize the mint. The logic here was that no one would be able to buy up any significant amount of gold if they had to do it in nickels. Unfortunately for Congress, a depression happened right after they passed the bill.

Some people blamed the depression on the change in coinage and popular sentiment in some corners became committed to the re-introduction of the silver dollar.

The silver supplies that caused this whole fracas hadn’t gone anywhere. People knew that re-introducing silver would have been an inflationary measure, as the statutory amount of silver in a dollar would have been worth about $0.75 in gold-backed currency, but they largely didn’t care – or viewed that as a positive. The people clamouring for silver also didn’t conduct much international trade, so they didn’t mind if silver currency drove out gold and made trade difficult.
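The inflationary arithmetic here is simple enough to work through. A back-of-the-envelope sketch, assuming only the $0.75 figure the text cites:

```python
# Back-of-the-envelope: if the silver in a statutory silver dollar is worth
# $0.75 in gold-backed money, re-monetizing it debases the unit of account.
silver_content_value = 0.75  # gold-dollar value of the silver in one silver dollar

# A creditor owed 100 gold-value dollars, repaid in silver dollars, receives:
real_value_repaid = 100 * silver_content_value  # 75.0 gold-value dollars

# Equivalently, prices quoted in the new silver unit rise by roughly:
implied_price_rise = 1 / silver_content_value - 1  # about 1/3, i.e. ~33%
```

Debtors (like the farmers below) see a third of the real burden of their debts evaporate, which is exactly why they largely didn’t care that the measure was inflationary.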

There were attempts to remonetize the silver dollar over the next twenty years, but they were largely unsuccessful. A few mine owners found markets for their silver at the mint when law demanded a series of one-off runs of silver coins, but Congress never restored bimetallism to the point that there was any significant silver in circulation – or significant inflation. Even these limited silver-minting measures were repealed in 1893, which left the United States on a de facto gold standard.

For many, the need for silver became more urgent after the Panic of 1893, which featured everything a good Gilded Age panic normally did – bank runs, failing railways, declines in trade, credit crunches, a crash in commodity prices, and the inevitable run on the US gold reserves.

The commodity price crash hit farmers especially hard. They were heavily indebted and had no real way to pay it off – unless their debts were reduced by inflation. Since no one had found any large gold deposits anywhere (the Klondike gold rush didn’t actually produce anything until 1898 and the Fairbanks gold rush didn’t occur until 1902), that wasn’t going to happen on the gold standard. The Democratic grassroots quickly embraced bimetallism, while the party apparatus remained supporters of the post-1893 de facto gold standard.

This was the backdrop for Bryan’s Cross of Gold speech, delivered in the summer of 1896 at the Democratic National Convention in Chicago. He was already a famed orator and had been quietly petitioning members of the party for the presidential nomination, but his plans weren’t well known. He managed to go almost the entire convention without giving a speech. Then, once the grassroots had voted out the old establishment and begun hammering out the platform, he arranged to be the closing speaker representing the delegates (about 66% of the total) who supported official bimetallism.

The convention had been marked by a lack of any effective oratory. In a stunning ten-minute speech (that stretched much longer because of repeated minutes-long interruptions for thunderous applause) Bryan singlehandedly changed that and won the nomination.

And this whole thing, the lobbying before the convention and the carefully crafted surprise moment, all of it makes me think of how effective Aaron Swartz’s Theory of Change idea can be when executed correctly.

Theory of Change says that if there’s something you want to accomplish, you shouldn’t start with what you’re good at and work towards it. You should start with the outcome you want and keep asking yourself how you’ll accomplish it.

Bryan decided that he wanted America to have a bimetallic currency. Unfortunately, there was a political class united in its opposition to this policy. That meant he needed a president who favoured it. Without the president, you need to get two-thirds of both the House and the Senate on board, and that clearly wasn’t happening with the country’s elites so hostile to silver.

Okay, well how do you get a president who’s in favour of restoring silver as currency? You make sure one of the two major parties nominates a candidate in favour of it, first of all. Since the Republicans (even then the party of big business) weren’t going to do it, it had to be the Democrats.

That means the question facing Bryan became: “how do you get the Democrats to pick a presidential candidate that supports silver?”

And this question certainly wasn’t easy. Bryan couldn’t guarantee it on his own, because it required delegates at least sympathetic to the idea. But the national mood made that seem likely, as long as there was a good candidate all of the “silver men” could unite around.

So, Bryan needed to ensure there was a good candidate and that that candidate got elected. Well, that was a problem, because neither of the two leading silver candidates was very popular. Luckily, Bryan was a Democrat, a former congressman, and kind of popular.

I think this is when the plan must have crystalized. Bryan just needed to deliver a really good speech to an already receptive audience. With the cachet from an excellent speech, he would clearly become the choice of silver supporting Democrats, become the Democratic party presidential candidate, and win the presidency. Once all that was accomplished, silver coins would become money again.

The fantastic thing is that it almost worked. Bryan was nominated on the Democratic ticket, absorbed the Populist party into the Democratic party to prevent a vote split, and came within 600,000 votes of winning the presidency. All because of a plan. All because of a speech.

So, what did he say?

Well, the full speech is available online. I really do recommend it. But I want to highlight three specific parts.

A Too Narrow Definition of “Business”

We say to you that you have made the definition of a business man too limited in its application. The man who is employed for wages is as much a business man as his employer; the attorney in a country town is as much a business man as the corporation counsel in a great metropolis; the merchant at the cross-roads store is as much a business man as the merchant of New York; the farmer who goes forth in the morning and toils all day—who begins in the spring and toils all summer—and who by the application of brain and muscle to the natural resources of the country creates wealth, is as much a business man as the man who goes upon the board of trade and bets upon the price of grain; the miners who go down a thousand feet into the earth, or climb two thousand feet upon the cliffs, and bring forth from their hiding places the precious metals to be poured into the channels of trade are as much business men as the few financial magnates who, in a back room, corner the money of the world. We come to speak of this broader class of business men.

In some ways, this passage is as much the source of the mythology of the American Dream as the inscription on the Statue of Liberty. Bryan rejects any definition of businessman that focuses on the richest in the coastal cities and instead substitutes a definition that opens it up to any common man who earns a living. You can see echoes of this paragraph in almost every speech by almost every presidential candidate.

Think of anyone you’ve heard running for president in recent years. Now read the following sentence in their voice: “Small business owners – like Monica in Texas – who are struggling to keep their business running in these tough economic times need all the help we can give them”. It works because “small business owners” has become one of the sacred cows of American rhetoric.

Bryan added this line just days before he delivered the speech. It was the only part of the whole thing that was at all new. And because this speech inspired a generation of future speeches, it passed into the mythology of America.

Trickle Down or Trickle Up

Mr. Carlisle said in 1878 that this was a struggle between “the idle holders of idle capital” and “the struggling masses, who produce the wealth and pay the taxes of the country”; and, my friends, the question we are to decide is: Upon which side will the Democratic party fight; upon the side of “the idle holders of idle capital” or upon the side of “the struggling masses”? That is the question which the party must answer first, and then it must be answered by each individual hereafter. The sympathies of the Democratic party, as shown by the platform, are on the side of the struggling masses who have ever been the foundation of the Democratic party. There are two ideas of government. There are those who believe that, if you will only legislate to make the well-to-do prosperous, their prosperity will leak through on those below. The Democratic idea, however, has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.

Almost a full century before Reagan’s trickle-down economics, Democrats were taking a stand against that entire world-view. Through all its changes – from the party of slavery to the party of civil rights, from the party of the Southern farmers to the party of “coastal elites” – the Democratic party has always viewed itself as hewing to this one simple principle. Indeed, the core difference between the Republican party and the Democratic party may be that the Republican party views the role of government to “get out of the way” of the people, while the Democratic party believes that the job of government is to “make the masses prosperous”.

A Cross of Gold

Having behind us the producing masses of this nation and the world, supported by the commercial interests, the laboring interests, and the toilers everywhere, we will answer their demand for a gold standard by saying to them: “You shall not press down upon the brow of labor this crown of thorns; you shall not crucify mankind upon a cross of gold.”

This is perhaps the best ending to a speech I have ever seen. Apparently at the conclusion of the address, dead silence endured for several seconds and Bryan worried he had failed. Two police officers in the audience were ahead of the curve and rushed Bryan – so that they could protect him from the inevitable crush.

Bryan turned what could have been a dry, dusty, nitty-gritty issue into the overriding moral question of his day. In fact, by co-opting the imagery of the crown of thorns and the cross, he tapped into the most powerful vein of moral imagery that existed in his society. Invoking the cross, the central mystery and miracle of Christianity, cannot help but put an issue (in a thoroughly Christian society) on a moral footing, as opposed to an intellectual one.

This sort of moral rather than intellectual posture is a hallmark of any insurgency against a technocratic order. Technocrats (myself among them!) like to pretend that we can optimize public policy. It is, to us, often a matter of just finding the solution that empirically provides the greatest good to the greatest number of people. Who could be against that?

But by presupposing that the only moral principle is the greatest good for the greatest number, we obviate moral contemplation in favour of tinkering with numbers and variables.

(The most cutting critique of utilitarianism I’ve ever seen delivered was: “[These problems are] seen in the light of a technical or practical difficulty and utilitarianism appeals to a frame of mind in which technical difficulty, even insuperable technical difficulty, is preferable to moral unclarity, no doubt because it is less alarming.” – a snide remark by the great British ethicist Sir Bernard Williams from his half of Utilitarianism: For and Against.)

This avoiding-the-question-so-we-can-tinker is a stance that can provoke a backlash like Bryan’s. Leaving aside entirely the difficulty of truly knowing which policies will have “good” results, there’s the uncomfortable truth that not every policy is positive sum. Even positive sum policies can hurt people. Bryan ran for president because questions of monetary policy aren’t politically neutral.

The gold standard, for all the intellectual arguments behind it, was hurting people. Maybe not a majority of people, but people nonetheless. There’s a whole section of the speech where Bryan points out that the established order cannot just say “changes will hurt my business”, because the current situation was hurting other people’s businesses too.

It is very tempting to write that questions of monetary policy “weren’t” politically neutral. After all, there’s a pretty solid consensus on monetary policy these days (well, except for the neo-Fisherians, but there’s a reason no one listens to them). But even (especially) a consensus among experts can be challenged by legitimate political disagreements. When the Fed chose to pull interest rates low as stimulus for the economy after 2008, it put the needs of people trying to find jobs over those of retired people who held their savings in safe bonds.

If you lower speed limits, you make roads safer for law abiding citizens and less safe for people who habitually speed. If you decriminalize drugs, you protect rich techies who microdose on LSD and hurt people who view decriminalization as license to dabble in opiates.

Even the best intentioned or best researched public policy can hurt people. Even if you (like me) believe in the greatest good for the greatest number of people, you have to remember that. You can’t ever let hurting people be easy or unthinking.

Even though it failed in its original aim and even though the cause it promotes is dead, I want people to remember Bryan’s speech. I especially want people who hold power to remember Bryan’s speech. Bryan chose oratory as his vehicle, his way of standing up for people who were hurt by well-intentioned public policy. In 1896, I might have stood against Bryan. But that doesn’t mean I want his speech and the lessons it teaches to be forgotten. Instead, I view it as a call to action, a call to never turn away from the people you hurt, even when you know you are doing right. A call to not forget them. A call to try and help them too.

All About Me, Politics

What I learned knocking on thousands of doors – thoughts on canvassing

“Hi, I’m Zach. I’m out here canvassing for Catherine Fife, Andrea Horwath, and the NDP. I was wondering if Catherine could count on your support this election…” is now a sentence I’ve said hundreds of times.

Ontario had a provincial election on June 7th. I wasn’t fond of the Progressive Conservative (PC) Party’s leader, one Doug Ford, so I did what I could. I joined the PC party to vote for his much more qualified rival, Christine Elliott. When that failed, I volunteered for Waterloo’s NDP Member of Provincial Parliament (MPP), Catherine Fife.

As a volunteer, I knocked on more than a thousand doors and talked to more than two hundred people. I went out canvassing eight times. According to Google Maps and its creepy tracking, I walked about 24 kilometres while doing this (and have still-sore feet to prove it).

Before I started canvassing, I knew basically nothing about it. I knew I’d be knocking on people’s doors, but beyond that, nada. Would I be trying to convince them? Handing out signs? Asking for money?

The actual experience turned out to be both scarier and more mundane than I imagined, so I’ve decided to document it for other people who might be interested in canvassing but aren’t sure what it entails.

The first thing you need to know about canvassing by foot is that it can be physically draining. Water was a must, as some of the days I canvassed featured 31°C (88°F) temperatures, full sunlight, and 70% humidity. I sweated more canvassing than I did hiking in Death Valley a few weeks before. Death Valley was hotter, but as anyone who has experienced a summer in Ontario can attest, humidity is what really makes heat miserable. From what I’ve heard, even the worst summer heat and humidity still beats canvassing in the winter.

The campaign helpfully supplied sunscreen and water bottles. They didn’t provide anything to carry all the leaflets in though. After the first day, I brought a messenger bag along. It turns out carrying hundreds of leaflets for several hours without resting can leave your arms hurting for a week. I only made that mistake once.

(Plus, as the campaign wore on, we switched to smaller literature. Literally every canvasser I talked to was very, very excited by the switch.)

The second thing you need to know about canvassing is that it’s an emotional rollercoaster. Not because of the people, but because of the lack of people.

Depending on the time of day and the neighbourhood, I spoke to somewhere between one person for every five doors I knocked on and one person for every fifteen doors I knocked on. I’d get myself psyched up, mentally rehearse my speech, double check the house number, walk up to it, press the doorbell… then wait foolishly while nothing happened.

Sometimes I suspected the doorbell was broken. When I was pretty sure it was, I’d knock as well. Sometimes the knocking did indeed result in someone answering the door, but most of the time the house was just empty. I did have one person hide behind some equipment in their kitchen as I walked up to the door. They ignored the doorbell and my soft, confused knock. I saw them checking if the coast was clear as I trudged away from the front step.

The constant build-up of energy, followed by the all-too-common letdown and dejected walk back to the sidewalk, exhausted me more than talking with people did. More than half of the people we talked to were supporting our candidate or leaning towards her (she won the vote with 51% support, a crushing margin in a system where many candidates win with support just over 40%), so a majority of my conversations were energizing. It’s fun to discover shared purpose with strangers.

I can’t tell you how grateful I was to all of the strangers I talked to. I know intellectually that some people really dislike the NDP and don’t like anything it stands for, but you wouldn’t know it from telling more than 200 random people whose dinner you just interrupted that you support the NDP. Not one single person said a mean thing to me.

Many were annoyed by the state of politics. Some didn’t like the party’s policies. Some weren’t interested in politics. But everyone heard me out politely. Some quickly asked me to leave, but no one slammed a door in my face. One man did close his door in my face, but not even the most uncharitable person could call it a slam. Besides, he said bye and made sure I wasn’t going to be hit by the door.

Many people followed up “sorry, I’m voting for the conservatives”, with “but good luck out there”. Several people asked if I needed a break, some shade, some water. Maybe things would have been different if I’d been out for the Liberals (who were deeply unpopular after 15 years governing) or the Conservatives (with their polarizing leader), but as it was I was impressed by the kindness and politeness of my fellow citizens.

(If you see a canvasser on your doorstep and don’t agree with their party’s positions, please be nice to them. They’re doing what they’re doing out of a sincere desire to make the world a better place. Even if you think they’re misguided, you aren’t going to change anything by being nasty to them. On the flip side, if you find yourself canvassing, it will never be in your interest to be nasty to anyone. I learned that someone high up in the campaign started volunteering for the NDP when a Conservative candidate was rude and patronizing to him at the door. “Be nice” was the very first rule of canvassing.)

Canvassing really isn’t about convincing people. We had scripts for that, but as far as I know, most people didn’t use them much. The doorstep really isn’t the best place to try and change someone’s political views and the time we would spend trying to convince people was normally considered better spent knocking on more doors.

Our actual objective was to figure out who our supporters were and who was open to being convinced. After each conversation, we’d jot down a level of support, any alternative parties being considered, and any issues the person cared about. We had specific shorthands for common occurrences, like people who were ineligible to vote, who had moved, or who didn’t want to talk to us (if you tell a canvasser not to bother you, they will stop coming to your house; this is a corollary of “be nice”, as the last thing we want is to annoy someone into helping our opponents). We’d also offer people literature about our platform. If no one was home, we’d leave it in the mailbox. I was told the notes we took could influence future phone calls (e.g. if we noted “hospitals”, a future call might focus on healthcare policy) or help Catherine when she went canvassing.

We were working from lists provided by Elections Ontario and augmented by the party databases. We knew what people had told past canvassers about their support for the NDP, going all the way back to 2012. These lists were correct about 80-90% of the time. Most often, mistakes were the fault of Elections Ontario; they were particularly bad at telling us when people were actually permanent residents and ineligible to vote. Beyond “not home” and “won’t say”, “ineligible to vote” became my third most common annotation.

Part of our job was to update these lists for the next election. That entailed asking for names if someone new was living there, and verifying phone numbers. I hated verifying phone numbers. I understand the necessity behind it, I really do, but it was far and away the most awkward part of canvassing. Right when every social instinct I had was telling me my interaction with someone was over, I had to ask for a piece of information they probably didn’t want to give me. I’m sure I’ll get used to it – the experienced canvasser who taught me the ropes was particularly adept at asking for numbers – but for now it remains my least favourite part.

Much easier to ask about were advance polls, signs, and volunteering. We only asked these questions of our strongest supporters, so we knew we were getting a friendly audience. I had three people agree to take signs over my eight days of canvassing, which is fewer than the experienced canvasser who showed me the ropes got in our first night out. I hope one day to be as good at getting people to show support as he was.

 

What else? Kids are the best part. I got to watch as a father explained to his little girl that the NDP wasn’t the type of party that had cake. I got to watch a little girl jump up and down with enthusiasm for Catherine. She had seen her at a school visit and thought she was the coolest thing ever. This really struck home the importance of representation in politics to me. Maybe that girl will never lose her admiration and will grow up to seek office herself someday. Would as many girls be able to imagine themselves as MPPs if they only ever saw men in that role?

There were less happy moments. I met a woman who quizzed me in depth on our healthcare platforms before telling me that if that’s what we stood for, we had her vote. Her husband was in the hospital. I saw a notation on a canvassing sheet that said “do not bother – funeral”. I talked with a man who had been turned away at a poll, despite the fact that he was a citizen. I met a mother who relied on the Hydro tax credit to make ends meet.

Their voices were important and I did what I could to make sure they’d be heard, but I can see how people can lose themselves in politics. What is “enough” when someone is hurting in front of you? I like cold equations and cost-benefit analyses. It’s the type of person I am. But when you see someone hurting, all of that flies out of your head and you want to shake the system until someone helps them.

Or at least, I wanted to.

The great political theorist Hannah Arendt once said: “And the first thing I’d like to say, you see, is that going along with the rest—the kind of going along that involves lots of people acting together—produces power. So long as you’re alone, you’re always powerless, however strong you may be. This feeling of power that arises from acting together is absolutely not wrong in itself, it’s a general human feeling. But it’s not good, either. It’s simply neutral. It’s something that’s simply a phenomenon, a general human phenomenon that needs to be described as such. In acting in this way, there’s an extreme feeling of pleasure.”

When I read this, the first time, I skimmed over it. To me, the important thing was what she said next, about “merely functioning” and how thinking is a vehicle to doing good, the concerns that defined her work.

But after my second time canvassing, I read this again and I teared up. “How did she know?”, I wondered.

The answer, of course, is that she participated in politics and knew the joys of acting as a group, of organizing, of working together for a common goal, a common good. And I feel so incredibly privileged that I now know that joy, that “extreme pleasure” too.

For that, I’d like to thank everyone in Catherine Fife’s campaign and everyone in Waterloo who put up with me on their doorstep. Thank you, all of you, for being part of what makes politics and representative democracy work.

All About Me, Politics

Political Views for May 2018

I like to keep track of my life over time. I’m an obsessive journaler (and, as this blog can attest, a fairly regular blogger). At the end of every day, I track my mood, my sleep, my productivity, my social life, and how well I did in spaced repetition exercises. Last May, I decided to track one more thing about myself and start a tradition of publishing my Political Compass results yearly.

I’m a bit late this year (I kept the title because I started the post in May) because there’s actual politics happening; I’ve been volunteering for my local MPP’s re-election campaign. Of explanations for being late with a politics related blog post, that might be the best one I ever give.

The Results

Last year, I scored -3.25 on the economic axis and -6.56 on the authority axis.

Canadian results come from The Political Compass’s take on the 2015 Canadian election. Blog commenter Thomas Sm suggests you should take the comparison with a grain of salt and I’m somewhat inclined to believe him.

This year I scored -2.0 on the economic axis and -6.46 on the authority axis, leaving me in the left libertarian camp, although continuing a seemingly inexorable trend towards economic centrism.

While my position on authority has remained virtually unchanged (I’m sure the difference was random fluctuation in how I might answer borderline questions), I do think I have meaningfully (though not drastically) different views on economics than I did a year ago, and I think there were two key object-level updates that drove this change.

The first was the overview of rent-seeking in The Captured Economy (review forthcoming). I was already skeptical of regulation in 2017. The Captured Economy turned this up to 11 when it showed how a lot of regulation actually results in redistribution of wealth from people who are struggling to people who are affluent.

To take just one example, let’s talk about occupational licensing.

Many areas have occupational licensing. You have to complete training and apply to a licensing board in order to practice certain professions. In some cases, this makes sense. You want to know that the person building the bridge you drive across or removing your appendix really does know what they’re doing.

For other professions, the stakes are somewhat lower. There are certainly consequences if your barber doesn’t know what they’re doing. These consequences can even be quite severe if, say, improper sterilization techniques lead you to catch a blood borne disease. It would be reasonable to require all barbers to take a blood borne disease safety course every two years and have them post a proof of completion in their shops.

But that isn’t what we do. What we do is require barbers and interior decorators, by law, to go through more training than EMTs.

There is no conceivable world in which interior decorators need to be held to higher regulatory standards than EMTs. None!

This isn’t a knock on barbers. I have seen just how much difference a good haircut can make. I could never in a million years do the job that interior decorators do. What I probably could do is pass a certification course on either subject. Aesthetics, the true marker of a master in either field, can rarely be taught or properly judged in a classroom. But that is exactly what a lot of occupational licensing boils down to.

Really, it seems that most occupational licensing does little more than raise barriers to relatively well-compensated and respected positions for people who are unlucky enough to have no education beyond a high school diploma or GED. I can see why people already in those professions would want this. Lower supply means that they can charge higher prices. They’re padding their margins at the cost of everyone who doesn’t have the free time or energy to take the necessary licensing classes (like people who work exhausting jobs or lack reliable work schedules).

If there were no occupational licensing, or if licensing were restricted to minimal courses on the safety essentials of an occupation, many more people would have access to careers like hairdressing or interior decorating, careers that often pay better and afford more respect than minimum wage jobs; careers where you can be your own boss (if you’re into that sort of thing).

I know of at least one person who found out they were good at giving haircuts, but just couldn’t sit through all the schooling that was needed for a license. So now they’re giving illegal haircuts. They advertise with word of mouth, because they’re good illegal haircuts, but the whole situation, the whole idea that we can have illegal haircuts is ridiculous.

Long ago, I used to think that regulation was mostly about stopping manufacturers from dumping industrial waste in every nearby pristine forest. But now my sense is that the majority of regulation is like the rules making it illegal to give haircuts if you don’t first drop more than $3,500 on school.

The second object-level update was learning just how important monetary policy is to the economy.

For a long time, I had accepted the liberal pseudo-Keynesian economic orthodoxy: we need to spend lots of money when times are rough in order to stimulate aggregate demand [1]. Over the past year, I’ve read some monetary policy blogs and have started to understand that things are rather more complicated than the simple concepts I used to parrot.

I still don’t have as rigorous a grounding in economics as I would like (that’s one of the things I’d like to work on this year), but as I begin to learn more, I do find myself shifting to the economic centre because that seems to be where the truth is to be found.

I remain committed to a society that ensures enough for everyone. But I think over the past year I’ve become more disillusioned with the general level of economic literacy on the left (even in relation to myself!) [2] and more skeptical of the left’s ability to create the sort of plenty I still think we’re going to need to ensure human flourishing.

Predictions

Last year, I made six predictions about how my political views would change over the coming year and all of them turned out to be correct [3]. They were:

  1. I will have an economic score > -2.25: 50%
  2. I will have an economic score > -4.25: 80%
  3. My top level economic identity will still be “capitalist”: 80%
  4. I will have an authority score > -7.56: 70%
  5. I will have an authority score < -5.56: 90%
  6. My top level social identity will still be “libertarian”: 90%
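
Footnote [3] suggests these predictions were under-confident; a quick back-of-the-envelope check makes that concrete. This is a minimal sketch of my own (not from the original post), and it assumes the six predictions are independent, which is a simplification:

```python
# Check how surprising a six-for-six result is, given the stated
# confidence levels. Assumes independence between predictions.
from math import prod

confidences = [0.50, 0.80, 0.80, 0.70, 0.90, 0.90]

# Expected number of correct predictions for a perfectly calibrated forecaster
expected_correct = sum(confidences)   # 4.6 of 6

# Probability that all six come true at the stated confidences
p_all_correct = prod(confidences)     # ≈ 0.18

print(expected_correct, round(p_all_correct, 3))
```

A perfectly calibrated forecaster with these confidences would expect about 4.6 of the 6 to come true; all six coming true has only an ~18% chance, which is indeed consistent with under-confidence.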

I like this as a concept, so I’m going to try it again. My predictions for this year are:

  1. I will still be on the left side of the graph: 80%
  2. I will move further to the right economically: 80%
  3. My position on the social axis will not change by more than 0.5 points: 90%
  4. My top level political identities will not change: 90%
  5. I will actually read an economics textbook before May 2019: 70%

I hope these predictions are more properly calibrated than my last ones!

Footnotes

[1] The common liberal take on this is different from pure Keynesian economics, because they don’t restrict “times are tough” to recessions and depressions. For a lot of modern people who say they just “support Keynesian economics” (and I was one of them), it’s always tough times for someone.

[2] I basically never see monetary policy even mentioned on the left. My guess is this is because the left largely views this as simple bean counting that is nowhere near as interesting or important as making sure we have lots of spending on social programs. Reality seems to be different, especially when you get it wrong.

[3] This is probably more evidence that I’m under-confident.

Literature, Politics

Book Review: Enlightenment 2.0

It is a truth universally acknowledged that an academic over the age of forty must be prepared to write a book about how everything is going to hell these days. Despite no period in history featuring fewer people dying of malaria, dying in childbirth, or dying of vaccine-preventable illnesses, it is very much in vogue to criticise the foibles of modern life. Heck, Ross Douthat makes a full-time job out of it over at the New York Times.

Enlightenment 2.0 is Canadian academic Joseph Heath’s contribution to the genre. If the name sounds familiar, it’s probably because I’ve referenced him a bunch of times on this blog. I’m very much a fan of his book Filthy Lucre and his shared blog, induecourse.ca. Because of this, I decided to give his book (and only his book) decrying the modern age a try.

Enlightenment 2.0 follows the old Buddhist pattern. It claims that (1) there are problems with contemporary politics, (2) these problems arise because politics has become hostile to reason, (3) there is a way to have a second Enlightenment restore politics to how they were when they were ruled by reason, and (4) that way is to build politics from the ground up that encourage reason.

Now if you’re like me, you groaned when you read the bit about “restoring” politics to some better past state. My position has long been that there was never any shining age of politics where reason reigned supreme over partisanship. Take American politics. It became partisan quickly after independence, occasionally featured duels, and resulted in a civil war before the Republic even turned 100. America has had periods of low polarization, but these seem more incidental and accidental than the true baseline.

(Canada’s past is scarcely less storied; in 1826, a mob of Tories smashed proto-Liberal William Lyon Mackenzie’s printing press and threw the type into Lake Ontario. Tory magistrates refused to press charges. These disputes eventually spiralled into an abortive rebellion and many years of tense political stand-offs.)

What really sets Heath apart is that he bothers to collect theoretical and practical support for a decline in reason. He’s the first person I’ve ever seen explain how reason could retreat from politics even as violence becomes less common and society becomes more complex.

His explanation goes like this: imagine that once every ten years politicians come up with an idea that helps them get elected by short-circuiting reason and appealing to baser instincts. It gets copied and used by everyone and eventually becomes just another part of campaigning. Over a hundred and fifty years, all of this adds up to a political environment that is specifically designed to jump past reason to baser instincts as soon as possible. It’s an environment that is actively hostile to reason.

We have some evidence of a similar process occurring in advertising. If you ever look at an old ad, you’ll see people trying to convince you that their product is the best. Modern readers will probably note a lot of “mistakes” in old ads. For example, they often admit to flaws in the general class of product they’re selling. They always talk about how their product fixes these flaws, but we now know that talking up the negative can leave people with negative affect. Advertising rarely mentions flaws these days.

Can you imagine an ad like this being printed today? Image credit: “Thoth God of Knowledge” on Flickr.

Modern ads are much more likely to try and associate a product with an image, mood, or imagined future life. Cleaning products go with happy families and spotless houses. Cars with excitement or attractive potential mates.

Look at this Rolex ad. It screams: “This man is successful! Wear a Rolex so everyone knows you’re that successful/so that you become that successful.” The goal is to get people to believe that Rolex=success. Rolex’s marketing is so successful that Rolex’s watches are seen as a status marker and luxury good the world over, even though quite frankly they’re kind of ugly. Image copyright Rolex, used here for purposes of criticism.

In Heath’s view, one negative consequence of globalism is that all of the most un-reasonable inventions from around the world get to flourish everywhere and accumulate, in the same way that globalism has allowed all of the worst diseases of the world to flourish.

Heath paints a picture of reason in the modern world under siege in all realms, not just the political. In addition to the aforementioned advertising, Facebook tries to drag you in and keep you there forever. “Free to play” games want to take you for everything you’re worth and employ psychologists to figure out how. Detergent companies wreck your laundry machine by making it as hard as possible to measure the right amount of fabric softener.

(Seriously, have you ever tried to read the lines on the inside of a detergent cap? Everything, from the dark plastic to small font to multiple lines to the wideness of the cap is designed to make it hard to pour the correct amount of liquid for a single load.)

All of this would be worrying enough, but Heath identifies two more trends that represent a threat to a politics of reason.

First is the rise of Common Sense Conservatism. As Heath defines it, Common Sense Conservatism is the political ideology that elevates “common sense” to the principal political decision-making heuristic. “Getting government out of the way of businesses”, “tightening our belts when times are tight”, and “if we don’t burn oil someone else will” are some of the slogans of the movement.

This is a problem because common sense is ill-suited to our current level of civilizational complexity. Political economy is far too complicated to be managed by analogy to a family budget. Successful justice policy requires setting aside retributive instincts and acknowledging just how weak a force deterrence is. International trade is… I’ve read one newspaper article that correctly understood international trade this year and it was written by Paul fucking Krugman, the Nobel Prize winning economist.

As the built environment (Heath defines this as all the technology that now surrounds us) becomes more hostile to reason (think: detergent caps everywhere) and further from what our brains intuitively expect, common sense will give us worse and worse answers to our problems.

That’s not even to talk about coordination problems. Common Sense Conservatism seems inextricably tied to unilateralism and a competitive attitude (after all, it’s “common sense” that if someone else is winning, you must be losing). With many of the hardest problems facing us (global warming, AI, etc.) being co-ordination problems, Common Sense Conservatism specifically degrades the capacity of our political systems to respond to them.

The other problem is Jonathan Haidt. In practical terms, Haidt is much less of a problem than our increasingly hostile technology or the rise of Common Sense Conservatism, but he has spearheaded a potent theoretical attack on reason.

As I mentioned in my review of Haidt’s most important book, The Righteous Mind, Heath describes Haidt’s view of reason as “essentially confabulatory”. The driving point in The Righteous Mind is that a lot of what we consider to be “reason” is in fact post-hoc justification for our actions. Haidt describes this with the image of a rider on an elephant: we may think that we’re driving, but we’re actually the junior partner to our vastly more powerful unconscious.

(I’d like to point out that the case for elephant supremacy has collapsed somewhat over the past five years, as psychology increasingly grapples with its replication crisis; many studies Haidt relied upon are now retracted or under suspicion.)

Heath thought (even before some of Haidt’s evidence went the way of the dodo) that this was an incomplete picture and this disagreement forms much of the basis for recommendations made in Enlightenment 2.0.

Heath proposes a modification to the elephant/rider analogy. He’s willing to buy that our conscious mind has trouble resisting our unconscious desires, but he points out that our conscious mind is actually quite good (with a bit of practice) at setting us up so that we don’t have to deal with unconscious desires we don’t want. He likens this to hopping off the elephant, setting up a roadblock, then hopping back on, secure in the knowledge that the elephant will have no choice but to go the way we’ve picked out for it.

A practical example: you know how it can be very hard to resist eating a cookie once you have a packet of them in your room? Well, you can actually make it much easier to resist if you put the packet somewhere inconveniently far from where you spend most of your time. You can resist even better if you don’t buy the cookies in the first place. Very few people are willing to drive to the store just because they have a craving for some sugar.

If you have a sweet tooth, it might be hard to resist buying those cookies. But Heath points out that there’s a solution even for this. One of our most powerful resources is each other. If you have trouble not buying unhealthy snacks at the last second, you can go shopping with a friend. You pick out groceries for her from her list and she’ll do the same for you. Since you’re going to be paying with each other’s money and handing everything over to each other at the end, you have no reason to buy sweets. Do this and you don’t have to spend all week trying not to eat the cookie.

Heath believes the difference between people who are always productive and always distracted has far more to do with the environments they’ve built than anything innate. This feels at least half-true to me; I know I’m much less able to get things done when I don’t have my whole elaborate productivity system, or when it’s too easy for me to access the news or Facebook. In fact, I saw a dramatic improvement in my productivity – and a dramatic decrease in the amount of time I spent on Facebook – when I set up my computer to block it for a day after I spend fifteen minutes on it, uninstalled it from my phone, and made sure to keep it logged out on my phone’s browser.

(It’s trivially easy for me to circumvent any of these blocks; it takes about fifteen seconds. But that fifteen seconds is enough to make quickly opening up a tab and being distracted unappealing.)

This all loops back to talking about how the current built environment is hostile to reason – as well as a host of other things that we might like to be better at.

Take lack of sleep. Before reading Enlightenment 2.0, I hadn’t realized just how much of a modern problem this is. During Heath’s childhood, TVs turned off at midnight, everything closed by midnight, and there were no video games or cell phones or computers. Post-midnight, you could… read? Heath points out that this tends to put people to sleep anyway. Spend time with people already at your house? How often did that happen? You certainly couldn’t call someone and invite them over, because a ringing phone after midnight doesn’t discriminate between those awake and those asleep. Calling a land line after midnight is still reserved for emergencies. Texting people after midnight is much less intrusive and therefore much politer.

Without all the options modern life gives us, there wasn’t a whole lot that could really keep you up all night. Heath admits to being much worse at sleeping now. Video games and online news conspire to keep him up later than he would like. Heath is a professor and the author of several books, which means he’s probably a very self-disciplined person. If he can’t ignore news and video games and Twitter in favour of a good night’s sleep, what chance do most people have?

Society has changed in the forty-odd years of his life in a way that has led to more freedom, but an unfortunate side effect of freedom is that it often includes the freedom to mess up our lives in ways that, if we were choosing soberly, we wouldn’t choose. I don’t know anyone who starts an evening with “tonight, I’m going to stay up late enough to make me miserable tomorrow”. And yet technology and society conspire to make it all too easy to do this over the feeble objections of our better judgement.

It’s probably too late to put this genie back in its bottle (even if we wanted to). But Heath contends it isn’t too late to put reason back into politics.

Returning reason to politics, to Heath, means building up social and procedural frameworks like the sort that would help people avoid staying up all night or wasting the weekend on social media. It means setting up our politics so that contemplation and co-operation aren’t discouraged and so that it is very hard to appeal to people’s base nature.

Part of this is as simple as slowing down politics. When politicians don’t have time to read what they’re voting on, partisanship and fear drive what they vote for. When they instead have time to read and comprehend legislation (and even better, their constituents have time to understand it and tell their representatives what they think), it is harder to pass bad bills.

When negative political advertisements are banned or limited (perhaps with a total restriction on election spending), fewer people become disillusioned with politics and fewer people use cynicism as an excuse to give politicians carte blanche to govern badly. When Question Period in parliament isn’t filmed, there’s less incentive to volley zingers and talking points back and forth.

One question Heath doesn’t really engage with: just how far is it okay to go to ensure reason has a place in politics? Enlightenment 2.0 never comes out and says “we need a political system that makes it harder for idiots to vote”, but there’s a definite undercurrent of that in the latter parts. I’m also reminded of Andrew Potter’s opposition to referendums and open party primaries. Both of these political technologies give more people a voice in how the country is run, but they do tend to lead to instability or worse decisions than more insular processes (like representative parliaments and closed primaries).

Basically, it seems like if we’re aiming for more reasonable politics, then something might have to give on the democracy front. There are a lot of people who aren’t particularly interested in voting with anything more than their base instincts. Furthermore, given that a large chunk of the right has more-or-less explicitly abandoned “reason” in favour of “common sense”, aiming to increase the amount of “reason” in politics certainly isn’t politically neutral.

(I should also mention that many people on the left only care about empiricism and reason when it comes to global warming and are quite happy to pander to feelings on topics like vaccines or rent control. From my personal vantage point, it looks like left-wing political parties have fallen less under the sway of anti-rationalism, but your mileage may vary.)

Perhaps there’s a coalition of people in the centre, scared of the excesses of the extreme left and the extreme right, that might feel motivated to change our political system to make it more amenable to reason. But this still leaves a nasty taste in my mouth. It still feels like cynical power politics.

While there might not be answers in Enlightenment 2.0 (or elsewhere), I am heartened that this is a question that Heath is at least still trying to engage with.

Enlightenment 2.0 is going to be one of those books that, on a fundamental level, changes how I look at politics and society. I had an inkling that shaping my environment was important and I knew that different political systems lead to different strategies and outcomes. But the effect of Enlightenment 2.0 was to make me so much more aware of this. Whenever I see Google rolling out a new product, I now think about how it’s designed to take advantage of us (or not!). Whenever someone suggests a political reform, I first think about the type of discourse and politics it will promote and which groups and ideologies will benefit.

(This is why I’m not too sad about Trudeau’s broken electoral reform promises. Mixed-member proportional elections actually encourage fragmentation and give extremists an incentive to be loud. First past the post gives parties a strong incentive to squash their extremist wings, and I value this in society.)

For that (as well as its truly excellent overview of all the weird ways our brains evolved), I heartily recommend Enlightenment 2.0.

Ethics, Philosophy, Quick Fix

Second Order Effects of Unjust Policies

In some parts of the Brazilian Amazon, indigenous groups still practice infanticide. Children are killed for being disabled, for being twins, or for being born to single mothers. This is undoubtedly a piece of cultural technology that existed to optimize resource distribution under harsh conditions.

Infanticide can be legally practiced because these tribes aren’t bound by Brazilian law. Under Brazilian legislation, indigenous tribes are bound by the country’s laws in proportion to how much they interact with the state. Remote Amazonian groups have a waiver from all Brazilian laws.

Reformers, led mostly by evangelicals and by disabled indigenous people who’ve escaped infanticide, are trying to change this. They are pushing for a law that would outlaw infanticide, register pregnancies and birth outcomes, and punish people who don’t report infanticide.

Now I know that I have in the past written about using the outside view in cases like these. Historically, outsiders deciding they know what is best for indigenous people has not ended particularly well. In general, this argues for avoiding meddling in cases like this. Despite that, if I lived in Brazil, I would support this law.

When thinking about public policies, it’s important to think about the precedents they set. Opposing a policy like this, even when you have very good reasons, sends a message to the vast majority of the population, a population that views infanticide as wrong (and not just wrong, but a special evil). It says: “we don’t care about what is right or wrong, we’re moral relativists who think anything goes if it’s someone’s culture.”

There are several things to unpack here. First, there are the direct effects on the credibility of the people defending infanticide. When you’re advocating for something that most people view as clearly wrong, something so beyond the pale that you have no realistic chance of ever convincing anyone, you’re going to see some resistance to the next issue you take up, even if it isn’t beyond the pale. If the same academics defending infanticide turn around and try and convince people to accept human rights for trans people, they’ll find themselves with limited credibility.

Critically, this doesn’t happen with a cause where it’s actually possible to convince people that you are standing up for what is right. Gay rights campaigners haven’t been cut out of the general cultural conversation. On the contrary, they’ve been able to parlay some of their success and credibility from being ahead of the curve to help in related issues, like trans rights.

There’s no (non-apocalyptic) future where the people of Brazil eventually wake up okay with infanticide and laud the campaigners who stood up for it. But the people of Brazil are likely to wake up in the near future and decide they can’t ever trust the morals of academics who advocated for infanticide.

Second, it’s worth thinking about how people’s experience of justice colours their view of the government. When the government permits what is (to many) a great evil, people lose faith in the government’s ability to be just. This inhibits the government’s traditional role as solver of collective action problems.

We can actually see this manifest several ways in current North American politics, on both the right and the left.

On the left, there are many people who are justifiably mistrustful of the government, because of its historical or ongoing discrimination against them or people who look like them. This is why the government can credibly lock up white granola-crowd parents for failing to give their children medically approved treatments, but can’t when the parents are indigenous. It’s also why many people of colour don’t feel comfortable going to the police when they see or experience violence.

In both cases, historical injustices hamstring the government’s ability to achieve outcomes that it might otherwise be able to achieve if it had more credibly delivered justice in the past.

On the right, I suspect that some amount of skepticism of government comes from legalized abortion. The right is notoriously mistrustful of the government and I wonder if this is because it cannot believe that a government that permits abortion can do anything good. Here this hurts the government’s ability to pursue the sort of redistributive policies that would help the worst off.

In the case of abortion, the very real and pressing need for some women to access it is enough for me to view it as net positive, despite its negative effect on some people’s ability to trust the government to solve coordination problems.

Discrimination causes harms on its own and isn’t even justified on its own “merits”. Its effect on people’s perceptions of justice is just another reason it should be fought against.

In the case of Brazil, we’re faced with an act that is negative (infanticide) with several plausible alternatives (e.g. adoption) that allow the cultural purpose to be served without undermining justice. While the historical record of these types of interventions in indigenous cultures should give us pause, this is counterbalanced by the real harms justice faces as long as infanticide is allowed to continue. Given this, I think the correct and utilitarian thing to do is to support the reformers’ effort to outlaw infanticide.

Quick Fix

May The Fourth Be With You

(The following is the text of the prepared puns I delivered at the 30th Bay Area pun off. If you’re ever in the Bay for one, I really recommend it. They have the nicest crowd in the world.)

First: May the Fourth be with you (“and also with you” is how you respond if, like me, you grew up Catholic). As you might be able to tell from this shirt, I am religiously devoted to Star Wars. I know a lot about Star Wars, but I’m more of an orthodox fan – I was all about the Expanded Universe, not this reverend-ing stream of Disney sequels.

Pictured: the outfit I wore

They might be popepular, but it seems like all Disney wants is to turn a prophet – just get big fatwas of cash. They don’t care about Allah the history that happened in the books. Just mo-hamme(r)ed out scripts with flashy set piece battles full of Mecca and characters we med-in-a earlier film.

The EU was mostly books and I loved them despite their ridiculousness. Like, in terms of plots, it’s not clear the writers always card’in-all the books; they often passover normal options and have someone kidnap Han and Leia’s kids.

There were so many convert-sations between the two of them, like “do you noahf ark ‘ids are fine” immediately interrupted by formulaic videos from the kids: “Don’t worry about mi-mam it’s alright, this dude who kidnapped us is a total Luther who just wants to Hindu-s you to vote another way in the Senate”. Eventually they figured out a wafer Leia to communion-cate that the kids needed a bodygod. This led them to Sikh out Winter, who came with the recommendation: “no kidnapper will ever get pastor“.

What else? Luke trains under a clone of Emperor Pulpit-een. Leia is like, “bish, open your eyes, dude’s dark” but Luke justifies it with “well, there’s some things vatican teach me”.

Eventually after Leia asks “how could you Judas to us”, Luke snaps out of it and decides he’s having nun of Palpatine’s evil deeds. He con-vent his anger somewhere else. He comes back to the light side and everyone’s pretty willing to ex-schism for everything he did.

Anyway, I’m really sad that the books aren’t canon anymore. I know there are a lot of ram, a danting number, but I hope I have Eided you in appreciating them.

Data Science, Economics, Falsifiable

The Scale of Inequality

When dealing with questions of inequality, I often get boggled by the sheer size of the numbers. People aren’t very good at intuitively parsing the difference between a million and a billion. Our brains round both to “very large”. I’m actually in a position where I get reminded of this fairly often, as the difference can become stark when programming. Running a program on a million points of data takes scant seconds. Running the same set of operations on a billion data points can take more than an hour. A million seconds is eleven and a half days. A billion seconds is 31 years.
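The seconds comparison is easy to verify with a couple of lines of arithmetic:

```python
# Sanity check of the million-versus-billion comparison above.
MILLION, BILLION = 10**6, 10**9

SECONDS_PER_DAY = 60 * 60 * 24              # 86,400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

print(MILLION / SECONDS_PER_DAY)     # ≈ 11.6 days
print(BILLION / SECONDS_PER_YEAR)    # ≈ 31.7 years
```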

Here I would like to try to give a sense of the relative scale of various concepts in inequality. Just how much wealth do the wealthiest people in the world possess compared to the rest? How much of the world’s middle class is concentrated in just a few wealthy nations? How long might it take developing nations to catch up with developed nations? How long before there exists enough wealth in the world that everyone could be rich if we just distributed it more fairly?

According to the Forbes billionaire list, there are (as of the time of writing) 2,208 billionaires in the world, who collectively control $9.1 trillion in wealth (1 trillion seconds ago was the year 29691 BCE, contemporaneous with the oldest cave paintings in Europe). This is 3.25% of the total global wealth of $280 trillion.

The US Federal Budget for 2019 is $4.4 trillion. State governments and local governments each spend another $1.9 trillion. Some $700 billion of that is given to those governments by the Federal government. With that subtracted, total US government spending is projected to be $7.5 trillion next year.

Therefore, the world’s entire population of billionaires holds assets equivalent to 1.2 years of US government outlays. Note that US government outlays aren’t equivalent to that money being destroyed. It goes to pay salaries or buy equipment. The comparison here is simply to illustrate how private wealth stacks up against the budgets that governments control.

If we go down by a factor of 1000, there are about 15 million millionaires in the world (according to Wikipedia). Millionaires collectively hold $37.1 trillion (13.25% of all global wealth). All of the wealth that millionaires hold would be enough to fund US government spending for five years.
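These shares fall out directly from the figures above (all in trillions of USD, as cited at the time of writing):

```python
# Reproducing the wealth-share arithmetic from the text.
GLOBAL_WEALTH = 280.0        # total global wealth, $T
BILLIONAIRE_WEALTH = 9.1     # Forbes billionaire list, $T
MILLIONAIRE_WEALTH = 37.1    # all millionaires, $T
US_GOV_SPENDING = 7.5        # total US government spending per year, $T

print(BILLIONAIRE_WEALTH / GLOBAL_WEALTH)    # ≈ 0.0325 → 3.25%
print(MILLIONAIRE_WEALTH / GLOBAL_WEALTH)    # ≈ 0.1325 → 13.25%
print(BILLIONAIRE_WEALTH / US_GOV_SPENDING)  # ≈ 1.2 years of outlays
print(MILLIONAIRE_WEALTH / US_GOV_SPENDING)  # ≈ 4.9 years of outlays
```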

When we see sensational headlines, like “Richest 1% now owns half the world’s wealth“, we tend to think that we’re talking about millionaires and billionaires. In fact, millionaires and billionaires only own about 16.5% of the world’s wealth (which is still a lot for 0.2% of the world’s population to hold). The rest is owned by less wealthy individuals. The global 1% makes $32,400 a year or more. This is virtually identical to the median American yearly salary, which means that almost half of American income earners are in the global 1%. Canadians now have a similar median wage, so a similar share of them are in the global 1% too.

To give a sense of how this distorts the global middle class, I used Povcal.net, the World Bank’s online tool for poverty measurement. I looked for the percentage of each country’s population making between 75% and 125% of the median US income at purchasing power parity (which takes into account cheaper goods and services in developing countries). That works out to $64–$107 US per day, obtained by dividing 75% and 125% of the median US annual wage by 365 – as far as I can tell, this is the same procedure that gives us figures like $1.25 per day as the threshold for absolute poverty.
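The annual-to-daily conversion is simple to sketch. Note that plugging in the $32,400 median quoted earlier gives a slightly higher range than the $64–$107 in the text, which suggests the original calculation used a somewhat lower median figure; the procedure is the same either way:

```python
# Convert annual income bounds (75%–125% of a median) to per-day figures.
median_annual_income = 32_400   # median US salary cited in the text

low = 0.75 * median_annual_income / 365
high = 1.25 * median_annual_income / 365

print(f"${low:.0f}-${high:.0f} per day")   # ≈ $67–$111 per day
```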

I grabbed what I thought would be an interesting set of countries: The G8, BRICS, The Next 11, Australia, Botswana, Chile, Spain, and Ukraine. These 28 countries had – in the years surveyed – a combined population of 5.3 billion people and had among them the 17 largest economies in the world (in nominal terms). You can see my spreadsheet collecting this data here.

The United States had by far the largest estimated middle class (73 million people), followed by Germany (17 million), Japan (12 million), France (12 million), and the United Kingdom (10 million). Canada came next with 8 million, beating most larger countries, including Brazil, Italy, Korea, Spain, Russia, China, and India. Iran and Mexico have similarly sized middle classes, despite Mexico being substantially larger. Botswana ended up having a larger middle class than Ukraine.

This speaks to a couple of problems when looking at inequality. First, living standards (and therefore class distinctions) are incredibly variable from country to country. A standard of living that is considered middle class in North America might not be the same in Europe or Japan. In fact, I’ve frequently heard it said that the North American middle class (particularly Americans and Canadians) consume more than their equivalents in Europe. Therefore, this should be looked at as a comparison of North American equivalent middle class – who, as I’ve already said, are about 50% encompassed in the global 1%.

Second, we tend to think of countries in Europe as generally wealthier than countries in Africa. This isn’t necessarily true. Botswana’s GDP per capita is actually three times larger than Ukraine’s when unadjusted and more than twice as large at purchasing power parity (which takes into account price differences between countries). It also has a higher GDP per capita than Serbia, Albania, and Moldova (even at purchasing power parity). Botswana, Seychelles, and Gabon have per capita GDPs at purchasing power parity that aren’t dissimilar from those possessed by some less developed European countries.

Botswana, Gabon, and Seychelles have all been distinguished by relatively high rates of growth since decolonization, which has by now made them “middle income” countries. Botswana’s growth has been so powerful and sustained that in my spreadsheet, it has a marginally larger North American equivalent middle class than Nigeria, a country approximately 80 times larger than it.

Of all the listed countries, Canada had the largest middle class as a percent of its population. This no doubt comes partially from using North American middle-class standards (and perhaps also because of the omission of the small, homogenous Nordic countries), although it is also notable that Canada has the highest median income of major countries (although this might be tied with the United States) and the highest 40th percentile income. America dominates income for people in the 60th percentile and above, while Norway comes out ahead for people in the 30th percentile or below.

The total population of the (North American equivalent) middle class in these 28 countries was 170 million, which represents about 3% of their combined population.

There is a staggering difference in consumption between wealthy countries and poor countries, in part driven by the staggering difference in the size of their middle (and higher) classes – people with income to spend on things beyond immediate survival. According to Trading Economics, the total disposable income of China is $7.84 trillion (dollars are US). India has $2.53 trillion. Canada, with a population almost 40 times smaller than either, has a total disposable income of $0.96 trillion, while America, with a population about four times smaller than either China or India, has a disposable income of $14.79 trillion, larger than China and India put together. If China were as wealthy as Canada, its yearly disposable income would be almost $40 trillion – more than the current disposable incomes of America, China, and India combined.
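The counterfactual arithmetic can be redone from the figures in the paragraph itself. The population numbers below are rough 2017-era values (an assumption; the paragraph only says China is almost 40 times Canada’s size):

```python
# If China had Canada's per-capita disposable income, what would its
# total disposable income be? Populations are rough 2017-era figures.

CANADA_DISPOSABLE = 0.96e12   # USD, from Trading Economics (per the text)
CANADA_POP = 36e6             # assumed
CHINA_POP = 1.39e9            # assumed

per_capita = CANADA_DISPOSABLE / CANADA_POP    # about $26,700 per person
china_counterfactual = per_capita * CHINA_POP  # on the order of $37 trillion

print(f"${china_counterfactual / 1e12:.1f} trillion")
```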

According to Wikipedia, The Central African Republic has the world’s lowest GDP per capita at purchasing power parity, making it a good candidate for the title of “world’s poorest country”. Using Povcal, I was able to estimate the median wage at $1.33 per day (or $485 US per year). If the Central African Republic grew at the same rate as Botswana did post-independence (approximately 8% year on year) starting in 2008 (the last year for which I had data) and these gains were seen in the median wage, it would take until 2139 for it to attain the same median wage as the US currently enjoys. This of course ignores development aid, which could speed up the process.

All of the wealth currently in the world is equivalent to $36,000 per person (although this is misleading, because much of the world’s wealth is illiquid – it’s in houses and factories and cars). All of the wealth currently on the TSX is equivalent to about $60,000 per Canadian. All of the wealth currently on the NYSE is equivalent to about $65,000 per American. In just corporate shares alone, Canada and the US are almost twice as wealthy as the global average. This doesn’t even get into the cars, houses, and other resources that people own in those countries.

If total global wealth were to grow at the same rate as the market, we might expect to have approximately $1,000,000 per person (not inflation adjusted) sometime between 2066 and 2072, depending on population growth. If we factor in inflation and want there to be approximately $1,000,000 per person in present dollars, it will instead take until sometime between 2102 and 2111.
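The quoted date ranges are consistent with compounding from the ~$36,000 per person above at roughly 7% nominal and 4% real (inflation-adjusted) annual growth. Those rates are my assumption, inferred from long-run market returns; the spread within each range comes from population growth, which this sketch ignores:

```python
import math

# Years for per-person wealth to compound from its current level to $1M.
# Growth rates are assumptions inferred from long-run market returns.

def years_to_target(start, target, rate):
    """Whole years of compounding at `rate` to grow `start` to `target`."""
    return math.ceil(math.log(target / start) / math.log(1 + rate))

start_wealth = 36_000  # per person, from the text

print(2017 + years_to_target(start_wealth, 1_000_000, 0.07))  # nominal: 2067
print(2017 + years_to_target(start_wealth, 1_000_000, 0.04))  # real: 2102
```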

This assumes too much, of course. But it gives you a sense of how much we have right now and how long it will take to have – as some people incorrectly believe we already do – enough that everyone could (in a fair world) have so much they might never need to work.

This is not, of course, to say that things are fair today. It remains true that the median Canadian or American makes more money every year than 99% of the world, and that the wealth possessed by those median Canadians or Americans and those above them is equivalent to that held by the bottom 50% of the world. Many of us – very many of those reading this, perhaps – are the 1%.

That’s the reality of inequality.

Data Science, Economics, Falsifiable

Is Google Putting Money In Your Pocket?

The Cambridge Analytica scandal has put tech companies front and centre. If the thinkpieces along the lines of “are the big tech companies good or bad for society” were coming out any faster, I might have to doubt even Google’s ability to make sense of them all.

This isn’t another one of those thinkpieces. Instead it’s an attempt at an analysis. I want to understand in monetary terms how much one tech company – Google – puts into or takes out of everyone’s pockets. This analysis is going to act as a template for some of the more detailed analyses of inequality I’d like to do later, so if you have a comment about methodology, I’m eager to hear it.

Here are the basics: Google is a large technology company that primarily makes money from advertising. Since Google is publicly traded, statistics are easy to come by. In 2016, Google brought in $89.5 billion in revenue, about 89% of it from advertising. Advertising revenue is further broken down between advertising on Google sites (e.g. Google Search, Gmail, YouTube, Google Maps, etc.), which accounts for 80% of advertising revenue, and advertising on partner sites, which covers the remainder. The remaining 11% of total revenue is made up of a variety of smaller projects – selling corporate licenses of its GSuite office software, the Google Play Store, the Google Cloud Platform, and several smaller projects.

There are two ways that we can track how Google’s existence helps or hurts you financially. First, there’s the value of the software it provides. Google’s search has become so important to our daily life that we don’t even notice it anymore – it’s like breathing. Then there’s YouTube, which has more high-quality content than anyone could watch in a lifetime. There’s Google Docs, which are almost a full (free!) replacement for Microsoft Office. There’s Gmail, which is how basically everyone I know does their email. And there’s Android, currently the only viable alternative to iOS. If you had to pay for all of this stuff, how much would you be out?

Second, we can look at how its advertising arm has changed the prices of everything we buy. If Google’s advertising system has driven an increase in spending on advertising (perhaps by starting an arms race in advertising, or by arming marketing managers with graphs, charts and metrics that they can use to trigger increased spending), then we’re all ultimately paying for Google’s software with higher prices elsewhere (we could also be paying with worse products at the same prices, as advertising takes budget that would otherwise be used on quality). On the other hand, if more targeted advertising has led to less advertising overall, then everything will be slightly less expensive (or higher quality) than the counterfactual world in which more was spent on advertising.

Once we add this all up, we’ll have some sort of answer. We’ll know if Google has made us better off, made us poorer, or if it’s been neutral. This doesn’t speak to any social benefits that Google may provide (if they exist – and one should hope they do exist if Google isn’t helping us out financially).

To estimate the value of the software Google provides, we should compare it to the most popular paid alternatives – and look into whether any good free alternatives exist. Search has no real paid equivalent to measure against, so we can’t put a dollar figure on it; since it’s clearly worth something, though, let’s agree to break any ties in favour of Google helping us.

On the other hand, Google docs is very easy to compare with other consumer alternatives. Microsoft Office Home Edition costs $109 yearly. Word Perfect (not that anyone uses it anymore) is $259.99 (all prices should be assumed to be in Canadian dollars unless otherwise noted).

Free alternatives exist in the form of OpenOffice and LibreOffice, but both tend to suffer from bugs. Last time I tried to make a presentation in OpenOffice I found it crashed approximately once per slide. I had a similar experience with LibreOffice. I once installed it for a friend who was looking to save money and promptly found myself fixing problems with it whenever I visited his house.

My crude estimate is that I’d expect to spend four hours per year troubleshooting either free alternative. Valuing that time at Ontario’s minimum wage of $14/hour – and accepting that the only office suite anyone under 70 ever actually buys is Microsoft’s – Google Docs saves you $109 per year compared to Microsoft Office and $56 per year compared to the free suites.

With respect to email, there are numerous free alternatives to Gmail (like Microsoft’s Hotmail). In addition, many internet service providers bundle free email addresses in with their service. Taking all this into account, Gmail probably doesn’t provide much in the way of direct monetary value to consumers, compared to its competitors.

Google Maps is in a similar position. There are several alternatives that are also free, like Apple Maps, Waze (also owned by Google), Bing Maps, and even the Open Street Map project. Even if you believe that Google Maps provides more value than these alternatives, it’s hard to quantify it. What’s clear is that Google Maps isn’t so far ahead of the pack that there’s no point to using anything else. The prevalence of Google Maps might even be because of user laziness (or anticompetitive behaviour by Google). I’m not confident it’s better than everything else, because I’ve rarely used anything else.

Android is the last Google project worth analyzing and it’s an interesting one. On one hand, it looks like Apple phones tend to cost more than comparable Android phones. On the other hand, Apple is a luxury brand and it’s hard to tell how much of the added price you pay for an iPhone is attributable to that, to differing software, or to differing hardware. Comparing a few recent phones, there’s something like a $50-$200 gap between flagship Android phones and iPhones of the same generation. I’m going to assign a plausible sounding $20 cost saved per phone from using Android, then multiply this by the US Android market share (53%), to get $11 for the average consumer. The error bars are obviously rather large on this calculation.
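The expected-value arithmetic from the last few paragraphs can be laid out explicitly. All of the inputs here are the guesses from the text, with all the uncertainty that implies:

```python
# Rough yearly value of Google products to an average consumer,
# using the estimates from the text (all inputs are guesses, as noted).

OFFICE_COST = 109        # Microsoft Office Home Edition, CAD/year
FREE_SUITE_HOURS = 4     # estimated troubleshooting time per year
MIN_WAGE = 14            # Ontario minimum wage, CAD/hour

docs_vs_office = OFFICE_COST                # $109/year saved vs. Office
docs_vs_free = FREE_SUITE_HOURS * MIN_WAGE  # $56/year saved vs. free suites

ANDROID_SAVING = 20      # guessed saving per phone vs. a comparable iPhone
ANDROID_SHARE = 0.53     # US Android market share

android_expected = ANDROID_SAVING * ANDROID_SHARE  # about $11 per phone

print(docs_vs_office, docs_vs_free, round(android_expected))
```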

(There may also be second order effects from increased competition here; the presence of Android could force Apple to develop more features or lower its prices slightly. This is very hard to calculate, so I’m not going to try to.)

When we add this up, we see that Google Docs saves anyone who does word processing $56-$109 per year and Android saves the average phone buyer $11 approximately every two years (a typical phone replacement cycle). This means the average person probably sees some slight yearly financial benefit from Google, although I’m not sure the median person does. The median person and the average person do both get some benefit from Google Search, so there’s something in the plus column here, even if it’s hard to quantify.

Now, on to advertising.

I’ve managed to find an assortment of sources that give a view of total advertising spending in the United States over time, as well as changes in the GDP and inflation. I’ve compiled it all in a spreadsheet with the sources listed at the bottom. Don’t just take my word for it – you can see the data yourself. Overlapping this, I’ve found data for Google’s revenue during its meteoric rise – from $19 million in 2001 to $110 billion in 2017.

Google ad revenue represented 0.03% of US advertising spending in 2002. By 2012, a mere 10 years later, it was equivalent to 14.7% of the total. Over that same time, overall advertising spending increased from $237 billion in 2002 to $297 billion in 2012 (2012 is the last date I have data for total advertising spending). Note however that this isn’t a true comparison, because some Google revenue comes from outside of America. I wasn’t able to find revenue broken down in greater depth than this, so I’m using these numbers in an illustrative manner, not an exact one.

So, does this mean that Google’s growth drove a growth in advertising spending? Probably not. As the economy is normally growing and changing, the absolute amount of advertising spending is less important than advertising spending compared to the rest of the economy. Here we actually see the opposite of what a naïve reading of the numbers would suggest. Advertising spending grew more slowly than economic growth from 2002 to 2012. In 2002, it was 2.3% of the US economy. By 2012, it was 1.9%.

This also isn’t evidence that Google (and other targeted advertising platforms) have decreased spending on advertising. Historically, advertising has represented between 1.2% of US GDP (in 1944, with the Second World War dominating the economy) and 3.0% (in 1922, during the “roaring 20s”). Since 1972, the total has been more stable, varying between 1.7% and 2.5%. A Student’s t-test confirms (p-values around 0.35 for 1919-2002 vs. 2003-2012 and 1972-2002 vs. 2003-2012) that there’s no significant difference between post-Google levels of spending and historical levels.
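For anyone who wants to replicate this sort of comparison, here’s a minimal sketch of a two-sample t-test on ad spending as a share of GDP. The numbers below are illustrative placeholders, not the actual series from my spreadsheet:

```python
import math
from statistics import mean, variance

# Sketch of the pre- vs. post-Google comparison of US ad spending as a
# share of GDP. These values are illustrative placeholders only.

pre_google = [2.3, 1.9, 2.4, 1.7, 2.0, 2.5, 1.8, 2.2]   # % of GDP, pre-2003
post_google = [2.2, 1.9, 2.0, 1.8, 2.1, 2.0, 1.9, 2.1]  # % of GDP, 2003-2012

def pooled_t(a, b):
    """Two-sample Student's t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = pooled_t(pre_google, post_google)
# With 14 degrees of freedom, |t| must exceed about 2.14 for the
# difference to be significant at the 5% level.
print(f"t = {t:.2f}")
```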

Even if this was lower than historical bounds, it wouldn’t necessarily prove Google (and its ilk) are causing reduced ad spending. It could be that trends would have driven advertising spending even lower, absent Google’s rise. All we can say for sure is that Google hasn’t caused an ahistorically large change in advertising rates. In fact, the only things that are clear in the advertising trends are the peak in the early 1920s that has never been recaptured and a uniquely low dip in the 1940s that was obviously caused by World War II. For all that people talk about tech disrupting advertising and ad-supported businesses, these current changes are still less drastic than changes we’ve seen in the past.

The change in advertising spending during the years Google was growing could have been driven by Google and similar advertising services. But it could also be normal year-to-year variation, driven by trends similar to those that have shaped the field in the past. If I had a Ph.D. in advertising history, I might be able to tell you what those trends are, but from my present position, all I can say is that the current movement doesn’t seem that weird from a historical perspective.

In summary, it looks like the expected value for the average person from Google products is close to $0, but leaning towards positive. It’s likely to be positive for you personally if you need a word processor or use Android phones, but the error bounds on advertising mean that it’s hard to tell. Furthermore, we can confidently say that the current disruption in the advertising space is probably less severe than the historical disruption to the field during World War II. There’s also a chance that more targeted advertising has led to less advertising spending (and this does feel more likely than it leading to more spending), but the historical variations in data are large enough that we can’t say for sure.

Literature, Model

Does Amateurish Writing Exist?

[Warning: Spoilers for Too Like the Lightning]

What marks writing as amateurish (and whether “amateurish” or “low-brow” works are worthy of awards) has been a topic of contention in the science fiction and fantasy community for the past few years, with the rise of Hugo slates and the various forms of “puppies”.

I’m not talking about the learning works of genuine amateurs. These aren’t stories that use big words for the sake of sounding smart (and at the cost of slowing down the stories), or over the top fanfiction-esque rip-offs of more established works (well, at least not since the Wheel of Time nomination in 2014). I’m talking about that subtler thing, the feeling that bubbles up from the deepest recesses of your brain and says “this story wasn’t written as well as it could be”.

I’ve been thinking about this a lot recently because about ¾ of the way through Too Like The Lightning by Ada Palmer, I started to feel myself put off [1]. And the only explanation I had for this was the word “amateurish” – which popped into my head devoid of any reason. This post is an attempt to unpack what that means (for me) and how I think it has influenced some of the genuine disagreements around rewarding authors in science fiction and fantasy [2]. Your tastes might be calibrated differently and if you disagree with my analysis, I’d like to hear about it.

Now, there are times when you know something is amateurish and that’s okay. No one should be surprised that John Ringo’s Paladin of Shadows series – books that he explicitly wrote for himself – is parsed by most people as pretty amateurish. When pieces aren’t written explicitly for the author alone, I expect some consideration of the audience. Ideally the writer should be having fun too, but if they’re writing for publication, they have to be writing to an audience. This doesn’t mean that they must write exactly what people tell them they want. People can be terrible judges of what they want!

This also doesn’t necessarily imply pandering. People like to be challenged. If you look at the most popular books of the last decade on Goodreads, few of them could be described as pandering. I’m familiar with two of the top three books there and both of them kill off a fan favourite character. People understand that life involves struggle. Lois McMaster Bujold – who has won more Hugo awards for best novel than any living author – once said she generated plots by considering “what’s the worst possible thing I can do to these people?” The results of this method speak for themselves.

Meditating on my reaction to books like Paladin of Shadows in light of my experiences with Too Like The Lightning is what led me to believe that the more technically proficient “amateurish” books are those that lose sight of what the audience will enjoy and follow just what the author enjoys. This may involve a character that the author heavily identifies with – the Marty Stu or Mary Sue phenomena – who is lovingly described overcoming obstacles and generally being “awesome” but doesn’t “earn” any of this. It may also involve gratuitous sex, violence, engineering details, gun details, political monologuing (I’m looking at you, Atlas Shrugged), or tangents about constitutional history (this is how most of the fiction I write manages to become unreadable).

I realized this when I was reading Too Like the Lightning. I loved the world building and I found the characters interesting. But (spoilers!) when it turned out that all of the politicians were literally in bed with each other or when the murders the protagonist carried out were described in grisly, unrepentant detail, I found myself liking the book a lot less. This is – I think – what spurred the label amateurish in my head.

I think this is because (in my estimation), there aren’t a lot of people who actually want to read about brutal torture-execution or literally incestuous politics. It’s not (I think) that I’m prudish. It seemed like some of the scenes were written to be deliberately off-putting. And I understand that this might be part of the theme of the work and I understand that these scenes were probably necessary for the author’s creative vision. But they didn’t work for me and they seemed like a thing that wouldn’t work for a lot of people that I know. They were discordant and jarring. They weren’t pulled off as well as they would have had to be to keep me engaged as a reader.

I wonder if a similar process is what caused the changes that the Sad Puppies are now lamenting at the Hugo Awards. To many readers, the sexualized violence or sexual violence that can find its way into science fiction and fantasy books (I’d like to again mention Paladin of Shadows) is incredibly off-putting. I find it incredibly off-putting. Books that incorporate a lot of this feel like they’re ignoring the chunk of audience that is me and my friends and it’s hard while reading them for me not to feel that the writers are fairly amateurish. I normally prefer works that meditate on the causes and uses of violence when they incorporate it – I’d put N.K. Jemisin’s truly excellent Broken Earth series in this category – and it seems like readers who think this way are starting to dominate the Hugos.

For the people who previously had their choices picked year after year, this (as well as all the thinkpieces explaining why their favourite books are garbage) feels like an attack. Add to this the fact that some of the books that started winning had a more literary bent and you have some fans of the genre believing that the Hugos are going to amateurs who are just cruising to victory by alluding to famous literary works. These readers look suspiciously on crowds who tell them they’re terrible if they don’t like books that are less focused on the action and excitement they normally read for. I can see why that’s a hard sell, even though I’ve thoroughly enjoyed the last few Hugo winners [3].

There’s obviously an inferential gap here, if everyone can feel angry about the crappy writing everyone else likes. For my part, I’ll probably be using “amateurish” only to describe books that are technically deficient. For books that are genuinely well written but seem to focus more on what the author wants than (on what I think) their likely audience wants, well, I won’t have a snappy term, I’ll just have to explain it like that.

Footnotes

[1] A disclaimer: the work of a critic is always easier than that of a creator. I’m going to be criticizing writing that’s better than my own here, which is always a risk. Think of me not as someone criticizing from on high, but frantically taking notes right before a test I hope to barely pass. ^

[2] I want to separate the Sad Puppies, who I view as people sad that action-packed books were being passed over in favour of more literary ones from the Rabid Puppies, who just wanted to burn everything to the ground. I’m not going to make any excuses for the Rabid Puppies. ^

[3] As much as I can find some science fiction and fantasy too full of violence for my tastes, I’ve also had little to complain about in the past, because my favourite author, Lois McMaster Bujold, has been reliably winning Hugo awards since before I was born. I’m not sure why there was never a backlash around her books. Perhaps it’s because they’re still reliably space opera, so class distinctions around how “literary” a work is don’t come up when Bujold wins. ^

Falsifiable, Physics, Politics

The (Nuclear) International Monitoring System

Under the Partial Test Ban Treaty (PTBT), all nuclear tests except for those underground are banned. Under the Non-Proliferation Treaty (NPT), only the permanent members of the UN Security Council are legally allowed to possess nuclear weapons. Given the public outcry over fallout that led to the PTBT and the worries over widespread nuclear proliferation that led to the NPT, it’s clear that we require something beyond pinky promises to verify that countries are meeting the terms of these treaties.

But how do we do so? How can you tell when a country tests an atomic bomb? How can you tell who did it? And how can one differentiate a bomb on the surface from a bomb in the atmosphere from a bomb in space from a bomb underwater from a bomb underground?

I’m going to focus on two efforts to monitor nuclear weapons: the national security apparatus of the United States and the Comprehensive Test Ban Treaty Organization (CTBTO) Preparatory Commission’s International Monitoring System (IMS). Monitoring falls into five categories: Atmospheric Radionuclide Monitoring, Seismic Monitoring, Space-based Monitoring, Hydroacoustic Monitoring, and Infrasound Monitoring.

Atmospheric Radionuclide Monitoring

Nuclear explosions generate radionuclides, either by dispersing unreacted fuel, as direct products of fission, or by interactions between neutrons and particles in the air or ground. These radionuclides are widely dispersed from any surface testing, while only a few fission products (mainly various radionuclides of the noble gas xenon) can escape from properly conducted underground tests.

For the purposes of minimizing fallout, underground tests are obviously preferred. But because they only emit small amounts of one particular radionuclide, they are much harder for radionuclide monitoring to detect.

Detecting physical particles is relatively easy. There are 80 IMS stations scattered around the world. Each is equipped with an air intake and a filter. Every day, the filter is changed and then prepared for analysis. Analysis involves waiting a day (for irrelevant radionuclides to decay), then reading decay events from the filter for a further day. This gives scientists an idea of what radioactive elements are present.

Any deviations from the baseline at a certain station can be indicative of a nuclear weapon test, a nuclear accident, or changing wind patterns bringing known radionuclides (e.g. from a commercial reactor) to a station where they normally aren’t present. Wind analysis and cross validation with other methods are used to corroborate any suspicious events.

Half of the IMS stations are set up to do the more difficult xenon monitoring. Here air is pumped through a material with a reasonably high affinity for xenon. Apparently activated charcoal will work, but more sophisticated alternatives are being developed. The material is then induced to release the xenon (with activated charcoal, this is accomplished via heating). This process is repeated several times, with the output of each step pumped to a fresh piece of activated charcoal. Multiple cycles ensure that only relatively pure xenon gets through to analysis.

Once xenon is collected, isotope analysis must be done to determine which (if any) radionuclides of xenon are present. This is accomplished either by comparing the beta decay of the captured xenon with its gamma decay, or by looking directly at the gamma decay with very precise gamma ray measuring devices. Each isotope of xenon has a unique half-life (which sets the rate at which it emits beta- and gamma-rays) and a unique method of decay (which determines the mix of decay products). Comparing the observed decay events to these “fingerprints” allows the relative abundance of xenon nuclides to be estimated.
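The half-life “fingerprint” idea can be sketched numerically. The half-lives below are approximate published values for the four radioxenon isotopes relevant to test monitoring; the decay arithmetic itself is just the standard exponential law:

```python
# Each radioxenon isotope decays at a rate set by its half-life, so the
# ratio of activities in a sample changes predictably over time - the
# "fingerprint" used to work out which isotopes are present.
# Half-lives are approximate published values.

HALF_LIVES_DAYS = {
    "Xe-131m": 11.8,
    "Xe-133": 5.25,
    "Xe-133m": 2.2,
    "Xe-135": 9.14 / 24,  # 9.14 hours
}

def remaining_fraction(isotope, days):
    """Fraction of the initial activity left after `days` days."""
    return 0.5 ** (days / HALF_LIVES_DAYS[isotope])

# After a three-day hold, short-lived Xe-135 has almost vanished,
# while Xe-133 remains easily detectable.
for isotope in HALF_LIVES_DAYS:
    print(f"{isotope}: {remaining_fraction(isotope, 3):.4f}")
```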

There are some background xenon radionuclides from nuclear reactors and even more from medical isotope production (where we create unstable nuclides in nuclear reactors for use in medical procedures). Looking at global background data you can see the medical isotope production in Ontario, Europe, Argentina, Australia and South Africa. I wonder if this background effect makes world powers cautious about new medical isotope production facilities in countries that are at risk of pursuing nuclear weapons. Could Iran’s planned medical isotope complex have been used to mask nuclear tests?

Not content merely to host several monitoring stations and be party to the data of the whole global network of IMS stations, the United States also has the WC-135 “Constant Phoenix” plane, a Boeing C-135 equipped with mobile versions of particulate and xenon detectors. The two WC-135s can be scrambled anywhere a nuclear explosion is suspected to look for evidence. A WC-135 gave us the first confirmation that the blast from the 2006 North Korean nuclear test was indeed nuclear, several days before the IMS station in Yellowknife, Canada confirmed a spike in radioactive xenon and wind modelling pinpointed the probable location as inside North Korea.

Seismic Monitoring

Given that fewer monitoring stations are equipped with xenon radionuclide detectors and that the background “noise” from isotope production can make radioactive xenon from nuclear tests hard to positively identify, it might seem like nuclear tests are easy to hide underground.

That isn’t the case.

A global network of seismometers ensures that any underground nuclear explosion is promptly detected. These are the same seismometers that organizations like the USGS (United States Geological Survey) use to detect and pinpoint earthquakes. In fact, the USGS provides some of the 120 auxiliary stations that the CTBTO can call on to supplement its fifty seismic monitoring stations.

Seismometers are always on, looking for seismic disturbances. Substantial underground nuclear tests produce shockwaves that are well within the detection limit of modern seismometers. The sub-kiloton North Korean nuclear test in 2006 appears to have registered as equivalent to a magnitude 4.1 earthquake. A quick survey of ongoing earthquakes would probably show you dozens of detected events less powerful than even that small North Korean test.
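For a sense of the numbers, one commonly cited empirical relation for well-coupled underground explosions in hard rock is mb ≈ 4.45 + 0.75·log10(yield in kilotons). The coefficients vary with geology, so treat this as illustrative rather than definitive; it does, at least, map the 2006 test’s magnitude to a sub-kiloton yield, consistent with the estimates above:

```python
# Invert an empirical magnitude-yield relation to estimate yield.
# mb = a + b * log10(Y), with a = 4.45, b = 0.75 often cited for
# hard-rock test sites; coefficients vary by region (illustrative only).

def yield_from_magnitude(mb, a=4.45, b=0.75):
    """Estimated yield in kilotons from body-wave magnitude mb."""
    return 10 ** ((mb - a) / b)

# The 2006 North Korean test registered around magnitude 4.1,
# which this relation maps to a sub-kiloton yield.
print(f"{yield_from_magnitude(4.1):.2f} kt")
```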

This probably leads you to the same question I found myself asking, namely: “if earthquakes are so common and these detectors are so sensitive, how can they ever tell nuclear detonations from earthquakes?”

It turns out that underground nuclear explosions might rattle seismometers like earthquakes do, but they do so with characteristics very different from most earthquakes.

First, the waveform is different. Imagine you’re holding a slinky and a friend is holding the other end. There are two main ways you can create waves. The first is by shaking it from side to side or up and down. Either way, there’s a perspective from which these waves will look like the letter “s”.

The second type of wave can be made by moving your arm forward and backwards, like you’re throwing and catching a ball. These waves will cause moving regions where the slinky is bunched more tightly together and other regions where it is more loosely packed.

These are analogous to the two main types of body waves in seismology. The first (the s-shaped one) is called an S-wave (although the “S” here stands for “shear” or “secondary” and only indicates the shape by coincidence), while the second is called a P-wave (for “pressure” or “primary”).

I couldn’t find a good free version of this, so I had to make it myself. Licensed (like everything I create for my blog) CC-BY-NC-SA v4.0.

Earthquakes normally have a mix of P-waves and S-waves, as well as surface waves created by interference between the two. This is because earthquakes are caused by slipping tectonic plates, and this slipping gives some lateral motion to the resulting waves. Nuclear explosions lack this side-to-side motion. The single, sharp impulse they deliver to the surrounding rock is equivalent to the wave you’d get by thrusting your arm forward while holding a slinky: almost all P-wave and almost no S-wave. This is very distinctive against a background of earthquakes. The CTBTO is kind enough to show what this difference looks like; in this image, the top event is a nuclear test and the bottom event is an earthquake of a similar magnitude in a similar location (I apologize for making you click through to see the image, but I don’t host copyrighted images here).

There’s one further way that the waves from nuclear explosions stand out. They’re caused by a single point source, rather than kilometers of rock. This means that when many seismic stations work together to find the cause of a particular wave, they’re actually able to pinpoint the source of any explosion, rather than finding a broad front like they would for an earthquake.
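The pinpointing idea can be sketched as a toy grid search: given the arrival-time differences observed at several stations, find the point that best explains them. Everything below (the station layout, the wave speed, a flat 100 km × 100 km patch of 2D Earth) is invented for illustration; real seismic location inverts travel times through detailed 3D velocity models.

```python
# Hypothetical station positions (km) and a constant wave speed (km/s).
STATIONS = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (80.0, 90.0)]
WAVE_SPEED = 8.0  # a round number in the right range for P-waves, km/s

def predicted_delays(source, stations, speed):
    """Arrival-time differences at each station, relative to the first."""
    times = [((source[0] - x) ** 2 + (source[1] - y) ** 2) ** 0.5 / speed
             for x, y in stations]
    return [t - times[0] for t in times]

def locate(observed_delays, stations, speed):
    """Grid-search the area for the point whose predicted delays
    best match the observed ones (least-squares misfit)."""
    best, best_err = None, float("inf")
    for gx in range(101):
        for gy in range(101):
            candidate = (float(gx), float(gy))
            pred = predicted_delays(candidate, stations, speed)
            err = sum((p - o) ** 2 for p, o in zip(pred, observed_delays))
            if err < best_err:
                best, best_err = candidate, err
    return best

# Simulate an event at (40, 60) and recover it from the delays alone.
delays = predicted_delays((40.0, 60.0), STATIONS, WAVE_SPEED)
print(locate(delays, STATIONS, WAVE_SPEED))  # → (40.0, 60.0)
```

With only one station you’d know nothing but a distance; with several, the misfit surface collapses to a single point for a point source, which is exactly what an earthquake’s extended rupture front fails to do.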

The fifty IMS stations automatically provide a continuous stream of data to the CTBTO, which sifts through it for any events that are overwhelmingly P-wave and have a point source. Further confirmation then comes from the 120 auxiliary stations, which provide data on request. Various national and university seismometer programs get in on this too (probably because it’s good for public relations and therefore helps to justify their budgets), which is why it’s not uncommon to see several estimates of yield soon after seismographs pick up on a nuclear test.
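To make the screening idea concrete, here is a toy sketch of the P-to-S amplitude comparison. The function and its threshold are entirely made up for illustration; real screening works on band-filtered waveforms and compares ratios against regional calibration data.

```python
def classify_event(p_amplitude, s_amplitude, ratio_threshold=3.0):
    """Toy screen: explosions radiate almost all P-waves, while
    earthquakes produce a healthy mix of P- and S-waves.

    The threshold is invented for illustration only.
    """
    ratio = p_amplitude / max(s_amplitude, 1e-12)  # guard division by zero
    return "explosion-like" if ratio > ratio_threshold else "earthquake-like"

print(classify_event(p_amplitude=9.0, s_amplitude=0.5))  # → explosion-like
print(classify_event(p_amplitude=2.0, s_amplitude=3.0))  # → earthquake-like
```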

Space Based Monitoring

This is the only type of monitoring that isn’t done by the CTBTO Preparatory Commission, which means that it is handled by state actors – whose interests necessarily veer more towards intelligence gathering than monitoring treaty obligations per se.

The United States began its space based monitoring program in response to the Limited Test Ban Treaty, which left verification explicitly to the major parties involved. The CTBTO Preparatory Commission was actually formed in response to a different treaty, the Comprehensive Test Ban Treaty, which is not fully in force yet (hence why the organization ensuring compliance with it is called the “Preparatory Commission”).

The United States first fulfilled its verification obligations with the Vela satellites, which were equipped with gamma-ray detectors, x-ray detectors, electromagnetic pulse detectors (which pick up the electromagnetic pulse from high-altitude nuclear detonations), and an optical sensor called a bhangmeter.

Bhangmeters (the name is a reference to bhang, a cannabis preparation, with the implied subtext that you’d have to be high to believe they would work) are composed of a photodiode (a device that produces current when illuminated), a timer, and some filtering components. Bhangmeters are set up to look for the distinctive nuclear “double flash”, caused when the air compressed by a nuclear blast briefly obscures the central fireball.

The bigger a nuclear explosion, the larger the compression and the longer the central fireball is obscured. The timer picks up on this, estimating nuclear yield from the delay between the initial light and its return.
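As a rough sketch of what the timing circuit does, here is a toy double-flash detector: it finds the two brightness peaks in a sampled light curve and reports the interval between them. The trace and sampling interval are invented, and a real bhangmeter does this with analog filtering and timing hardware, then maps the interval to yield via empirical calibration curves.

```python
def double_flash_interval(samples, dt_ms):
    """Return the time (ms) between the two brightness maxima in a
    sampled light curve, or None if there is no second peak.

    Toy peak-finder for illustration only.
    """
    peaks = [i for i in range(1, len(samples) - 1)
             if samples[i] > samples[i - 1] and samples[i] >= samples[i + 1]]
    if len(peaks) < 2:
        return None  # no double flash: not the nuclear signature
    return (peaks[1] - peaks[0]) * dt_ms

# A stylized double flash: sharp first peak, dip, slower second peak.
trace = [0, 10, 2, 1, 1, 2, 4, 6, 5, 3, 1]
print(double_flash_interval(trace, dt_ms=1.0))  # → 6.0
```

A larger yield keeps the fireball obscured longer, stretching this interval, which is the measurement the yield estimate hangs on.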

The bhangmeter works because very few natural (or human) phenomena produce flashes that are as bright or distinctive as nuclear detonations. A properly calibrated bhangmeter will filter out continuous phenomena like lightning (or will find them too faint to detect). Other very bright events, like comets breaking up in the upper atmosphere, only provide a single flash.

There’s only been one possible false positive since the bhangmeters went live in 1967; a double flash was detected in the Southern Indian Ocean, but repeated sorties by the WC-135s detected no radionuclides. The event has never been conclusively proved to be nuclear or non-nuclear in origin and remains one of the great unsolved mysteries of the age of widespread atomic testing.

By the time of this (possible) false positive, the bhangmeters had also detected 41 genuine nuclear tests.

The Vela satellites are no longer in service, but the key technology they carried (bhangmeters, x-ray detectors, and EMP detectors) lives on in the US GPS satellite constellation, which does double duty as its space-based nuclear sentinels.

One last piece of historical trivia: when looking into unexplained gamma-ray readings produced by the Vela satellites, US scientists discovered gamma-ray bursts, an energetic astronomical phenomenon associated with supernovas and merging binary stars.

Hydroacoustic Monitoring

Undersea explosions don’t have a double flash, because steam and turbulence quickly obscure the central fireball and don’t clear until well after the fireball has subsided. It’s true that radionuclide detection should eventually turn up evidence of any undersea nuclear tests, but it’s still useful to have a more immediate detection mechanism. That’s where hydroacoustic monitoring comes in.

There are actually two types of hydroacoustic monitoring. Six stations use true underwater monitoring, with triplets of hydrophones (so that signal direction can be determined via triangulation). These are very sensitive, but also very expensive, as the hydrophones must be installed at a depth of approximately one kilometer, where sound transmission is best. The other five stations are land-based, using seismographs on steeply sloped islands to detect the seismic waves that underwater sounds make when they hit land. Land-based monitoring is less accurate, but requires little in the way of specialized hardware, making it much cheaper.
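The triangulation trick relies on tiny arrival-time differences across the hydrophone triplet. Here is a minimal sketch of the pairwise version, assuming a distant (plane-wave) source; the baseline length and sound speed are illustrative round numbers.

```python
import math

SOUND_SPEED = 1480.0  # rough speed of sound in seawater, m/s

def bearing_from_delay(delay_s, baseline_m, c=SOUND_SPEED):
    """Angle (degrees) between the arrival direction and the line
    joining two hydrophones, for a distant (plane-wave) source.

    The wavefront reaches the far hydrophone late by
    baseline * cos(angle) / c, so the delay encodes the angle.
    A single pair leaves a left/right ambiguity; the third
    hydrophone in each triplet resolves it (not shown here).
    """
    cos_angle = delay_s * c / baseline_m
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

# Zero delay means the wave hit both hydrophones at once: broadside.
print(round(bearing_from_delay(0.0, 2000.0)))                   # → 90
# Maximum delay means the wave travelled straight along the baseline.
print(round(bearing_from_delay(2000.0 / SOUND_SPEED, 2000.0)))  # → 0
```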

In either case, data is streamed directly to CTBTO headquarters in Vienna, where it is analyzed and forwarded to states that are party to the CTBT. At the CTBTO, the signal is split into different channels based on a known library of undersea sounds, and explosions are separated from natural phenomena (like volcanos, tsunamis, and whales) and man-made noises (like gas exploration, commercial shipping, and military drills). Signal processing and analysis – especially of hydrophone data – is a very mature field, so the CTBTO doesn’t lack for techniques to refine its estimates of events.

Infrasound Monitoring

Infrasound monitoring stations are the last part of the global monitoring system and represent the best way for the CTBTO (rather than national governments with the resources to launch satellites) to detect atmospheric nuclear tests. Infrasound stations try to pick up the very low frequency sound waves created by nuclear explosions – and a host of other things, like volcanos, planes, and mining.

A key consideration with infrasound stations is reducing background noise. For this, being far away from human habitation and sheltered from the wind is ideal. Whenever this cannot be accomplished (e.g. there’s very little cover from the wind in Antarctica, where several of the sixty stations are located), more infrasound arrays are needed.

The components of the infrasound arrays look very weird.

Specifically, they look like a bunker that tried to eat four Ferris wheels. Each array actually contains three to eight of these monstrosities. From the CTBTO via Wikimedia Commons.

What you see here are a bunch of pipes that all feed through to a central microbarometer, which is what actually measures the infrasound by detecting slight changes in air pressure. This setup filters out a lot of the wind noise and mostly just lets infrasound through.
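The noise-filtering principle is simple averaging: each pipe inlet hears the same coherent infrasound but independent wind noise, so combining N inlets suppresses the noise by roughly √N while leaving the signal intact. A small simulation (with made-up amplitudes) shows the effect.

```python
import random
import statistics

random.seed(0)

def averaged_trace(n_inlets, n_samples, signal=1.0, noise_sd=5.0):
    """Simulate averaging `n_inlets` pipe inlets: every inlet hears the
    same coherent infrasound `signal`, plus independent wind noise."""
    traces = [[signal + random.gauss(0.0, noise_sd) for _ in range(n_samples)]
              for _ in range(n_inlets)]
    return [sum(col) / n_inlets for col in zip(*traces)]

# The coherent signal survives averaging; uncorrelated noise shrinks
# roughly as 1/sqrt(N), so 100 inlets cut it about tenfold.
one = statistics.stdev(averaged_trace(1, 2000))
many = statistics.stdev(averaged_trace(100, 2000))
print(round(statistics.mean(averaged_trace(100, 2000))))  # → 1 (signal intact)
print(one > 5 * many)  # → True (noise strongly suppressed)
```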

Like the hydroacoustic monitoring system, data is sent to the CTBTO in real time and analyzed there, presumably drawing on a similar library of recorded nuclear test detonations and employing many of the same signal processing techniques.

Ongoing research into wind noise reduction might eventually make the whole set of stations much more sensitive than it is now. Still, even the current iteration of infrasound monitoring should be enough to detect any nuclear tests in the lower atmosphere.


The CTBTO has a truly great website that really helped me put together this blog post. They provide a basic overview of the four international monitoring systems I described here (they don’t cover space-based monitoring because it’s outside of their remit), as well as pictures, a glossary, and a primer on the analysis they do. If you’d like to read more about how the international monitoring system works and how it came into being, I recommend visiting their website.

This post, like many of the posts in my nuclear weapons series, came about because someone asked me a question about nuclear weapons and I found I couldn’t answer quite as authoritatively as I would have liked. Consequently, I’d like to thank Cody Wild and Tessa Alexanian for giving me the impetus to write this.

This post is part of a series on special topics in nuclear weapons. The index for all of my writing on nuclear weapons can be found here. Previous special topics posts include laser enrichment and the North Korean nuclear program.