Data Science, Economics, Falsifiable

Is Google Putting Money In Your Pocket?

The Cambridge Analytica scandal has put tech companies front and centre. If the thinkpieces along the lines of “are the big tech companies good or bad for society” were coming out any faster, I might have to doubt even Google’s ability to make sense of them all.

This isn’t another one of those thinkpieces. Instead it’s an attempt at an analysis. I want to understand in monetary terms how much one tech company – Google – puts into or takes out of everyone’s pockets. This analysis is going to act as a template for some of the more detailed analyses of inequality I’d like to do later, so if you have a comment about methodology, I’m eager to hear it.

Here are the basics: Google is a large technology company that primarily makes money off of ad revenues. Since Google is a publicly traded company, statistics are easy to come by. In 2016, Google brought in $89.5 billion in revenue and about 89% of that was from advertising. Advertising is further broken down between advertising on Google sites (e.g. Google Search, Gmail, YouTube, Google Maps, etc.), which accounts for 80% of advertising revenue, and advertising on partner sites, which covers the remainder. The remaining 11% of revenue comes from a variety of smaller ventures – selling corporate licenses of its GSuite office software, the Google Play Store, the Google Cloud Platform, and several smaller projects.

There are two ways that we can track how Google’s existence helps or hurts you financially. First, there’s the value of the software it provides. Google’s search has become so important to our daily life that we don’t even notice it anymore – it’s like breathing. Then there’s YouTube, which has more high-quality content than anyone could watch in a lifetime. There’s Google Docs, which are almost a full (free!) replacement for Microsoft Office. There’s Gmail, which is how basically everyone I know does their email. And there’s Android, currently the only viable alternative to iOS. If you had to pay for all of this stuff, how much would you be out?

Second, we can look at how its advertising arm has changed the prices of everything we buy. If Google’s advertising system has driven an increase in spending on advertising (perhaps by starting an arms race in advertising, or by arming marketing managers with graphs, charts and metrics that they can use to trigger increased spending), then we’re all ultimately paying for Google’s software with higher prices elsewhere (we could also be paying with worse products at the same prices, as advertising takes budget that would otherwise be used on quality). On the other hand, if more targeted advertising has led to less advertising overall, then everything will be slightly less expensive (or higher quality) than the counterfactual world in which more was spent on advertising.

Once we add this all up, we’ll have some sort of answer. We’ll know if Google has made us better off, made us poorer, or if it’s been neutral. This doesn’t speak to any social benefits that Google may provide (if they exist – and one should hope they do exist if Google isn’t helping us out financially).

To estimate the value of the software Google provides, we should compare it to the most popular paid alternatives – and look into the existence of any other good free alternatives. This comparison doesn’t really work for Search, which has no popular paid alternative, but given how much value Search clearly provides, let’s agree to break any ties in favour of Google helping us.

On the other hand, Google Docs is very easy to compare with other consumer alternatives. Microsoft Office Home Edition costs $109 yearly. WordPerfect (not that anyone uses it anymore) is $259.99 (all prices should be assumed to be in Canadian dollars unless otherwise noted).

Free alternatives exist in the form of OpenOffice and LibreOffice, but both tend to suffer from bugs. Last time I tried to make a presentation in OpenOffice I found it crashed approximately once per slide. I had a similar experience with LibreOffice. I once installed it for a friend who was looking to save money and promptly found myself fixing problems with it whenever I visited his house.

My crude estimate is that I’d expect to spend four hours per year troubleshooting either free alternative. Weighing this time at Ontario’s minimum wage of $14/hour, and accepting that the only office suite anyone under 70 ever actually buys is Microsoft’s offering, we see that Google saves you $109 per year compared to Microsoft and $56 per year compared to using free software.
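If you want to fiddle with these numbers, the whole calculation fits in a few lines. The figures are just my estimates above, nothing more:

```python
# Back-of-envelope estimate of yearly savings from using Google Docs.
# All figures are the rough estimates from the text, in Canadian dollars.
office_price_per_year = 109   # Microsoft Office Home Edition
troubleshooting_hours = 4     # estimated yearly time spent fixing OpenOffice/LibreOffice
minimum_wage = 14             # Ontario minimum wage, $/hour

savings_vs_microsoft = office_price_per_year
savings_vs_free_suites = troubleshooting_hours * minimum_wage

print(f"vs. Microsoft Office:   ${savings_vs_microsoft}/year")    # $109/year
print(f"vs. free office suites: ${savings_vs_free_suites}/year")  # $56/year
```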

With respect to email, there are numerous free alternatives to Gmail (like Microsoft’s Hotmail). In addition, many internet service providers bundle free email addresses in with their service. Taking all this into account, Gmail probably doesn’t provide much in the way of direct monetary value to consumers, compared to its competitors.

Google Maps is in a similar position. There are several alternatives that are also free, like Apple Maps, Waze (also owned by Google), Bing Maps, and even the Open Street Map project. Even if you believe that Google Maps provides more value than these alternatives, it’s hard to quantify it. What’s clear is that Google Maps isn’t so far ahead of the pack that there’s no point to using anything else. The prevalence of Google Maps might even be because of user laziness (or anticompetitive behaviour by Google). I’m not confident it’s better than everything else, because I’ve rarely used anything else.

Android is the last Google project worth analyzing and it’s an interesting one. On one hand, it looks like Apple phones tend to cost more than comparable Android phones. On the other hand, Apple is a luxury brand and it’s hard to tell how much of the added price you pay for an iPhone is attributable to that, to differing software, or to differing hardware. Comparing a few recent phones, there’s something like a $50-$200 gap between flagship Android phones and iPhones of the same generation. I’m going to assign a plausible sounding $20 cost saved per phone from using Android, then multiply this by the US Android market share (53%), to get $11 for the average consumer. The error bars are obviously rather large on this calculation.
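The same goes for the Android estimate; the $20 premium and the 53% market share are the assumptions I just stated, with large error bars:

```python
# Expected saving for the average consumer from Android's existence.
# Both inputs are assumptions from the text, not measured data.
premium_saved_per_phone = 20    # assumed price gap attributable to Android, $
android_market_share = 0.53     # approximate US Android market share

expected_saving = premium_saved_per_phone * android_market_share
print(f"~${expected_saving:.0f} saved per phone purchase")   # ~$11
```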

(There may also be second order effects from increased competition here; the presence of Android could force Apple to develop more features or lower its prices slightly. This is very hard to calculate, so I’m not going to try to.)

When we add this up, we see that Google Docs saves anyone who does word processing roughly $56-$109 per year and Android saves the average phone buyer $11 approximately every two years. This means the average person probably sees some slight yearly financial benefit from Google, although I’m not sure the median person does. The median person and the average person do both get some benefit from Google Search, so there’s something in the plus column here, even if it’s hard to quantify.

Now, on to advertising.

I’ve managed to find an assortment of sources that give a view of total advertising spending in the United States over time, as well as changes in the GDP and inflation. I’ve compiled it all in a spreadsheet with the sources listed at the bottom. Don’t just take my word for it – you can see the data yourself. Overlapping this, I’ve found data for Google’s revenue during its meteoric rise – from $19 million in 2001 to $110 billion in 2017.

Google ad revenue represented 0.03% of US advertising spending in 2002. By 2012, a mere 10 years later, it was equivalent to 14.7% of the total. Over that same time, overall advertising spending increased from $237 billion in 2002 to $297 billion in 2012 (2012 is the last date I have data for total advertising spending). Note however that this isn’t a true comparison, because some Google revenue comes from outside of America. I wasn’t able to find revenue broken down in greater depth than this, so I’m using these numbers in an illustrative manner, not an exact one.

So, does this mean that Google’s growth drove a growth in advertising spending? Probably not. As the economy is normally growing and changing, the absolute amount of advertising spending is less important than advertising spending compared to the rest of the economy. Here we actually see the opposite of what a naïve reading of the numbers would suggest. Advertising spending grew more slowly than economic growth from 2002 to 2012. In 2002, it was 2.3% of the US economy. By 2012, it was 1.9%.

This also isn’t evidence that Google (and other targeted advertising platforms) have decreased spending on advertising. Historically, advertising has represented between 1.2% of US GDP (in 1944, with the Second World War dominating the economy) and 3.0% (in 1922, during the “roaring 20s”). Since 1972, the total has been more stable, varying between 1.7% and 2.5%. A Student’s t-test confirms (p-values around 0.35 for 1919-2002 vs. 2003-2012 and 1972-2002 vs. 2003-2012) that there’s no significant difference between post-Google levels of spending and historical levels.
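If you want to rerun the test yourself, here’s roughly the shape of it in Python. It assumes you’ve exported the yearly advertising-as-a-share-of-GDP series from the spreadsheet to a CSV; the filename and column names are made up:

```python
# Sketch of the historical-vs-post-Google comparison. Assumes the yearly
# "US advertising spending as a share of GDP" series has been exported to a
# CSV with columns: year, ad_share_of_gdp (filename and headers are made up).
import csv
from scipy import stats

with open("ad_share_of_gdp.csv") as f:
    rows = [(int(r["year"]), float(r["ad_share_of_gdp"])) for r in csv.DictReader(f)]

baseline = [share for year, share in rows if 1972 <= year <= 2002]
post_google = [share for year, share in rows if 2003 <= year <= 2012]

# Student's t-test: is the post-Google period significantly different from the
# 1972-2002 baseline? (Repeat with 1919-2002 for the longer baseline.)
t_stat, p_value = stats.ttest_ind(baseline, post_google)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")   # the post finds p ≈ 0.35
```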

Even if this were lower than historical bounds, it wouldn’t necessarily prove that Google (and its ilk) are causing reduced ad spending. It could be that trends would have driven advertising spending even lower, absent Google’s rise. All we can say for sure is that Google hasn’t caused an ahistorically large change in advertising rates. In fact, the only things that are clear in the advertising trends are the peak in the early 1920s that has never been recaptured and a uniquely low dip in the 1940s that was almost certainly caused by World War II. For all that people talk about tech disrupting advertising and ad-supported businesses, these current changes are still less drastic than changes we’ve seen in the past.

The change in advertising spending during the years Google was growing could have been driven by Google and similar advertising services. But it could also be normal year-to-year variation, driven by trends similar to those that have driven it in the past. If I had a Ph.D. in advertising history, I might be able to tell you what those trends are, but from my present position, all I can say is that the current movement doesn’t seem that weird from a historical perspective.

In summary, it looks like the expected value for the average person from Google products is close to $0, but leaning towards positive. It’s likely to be positive for you personally if you need a word processor or use Android phones, but the error bounds on advertising mean that it’s hard to tell. Furthermore, we can confidently say that the current disruption in the advertising space is probably less severe than the historical disruption to the field during World War II. There’s also a chance that more targeted advertising has led to less advertising spending (and this does feel more likely than it leading to more spending), but the historical variations in data are large enough that we can’t say for sure.

Literature, Model

Does Amateurish Writing Exist?

[Warning: Spoilers for Too Like the Lightning]

What marks writing as amateurish (and whether “amateurish” or “low-brow” works are worthy of awards) has been a topic of contention in the science fiction and fantasy community for the past few years, with the rise of Hugo slates and the various forms of “puppies“.

I’m not talking about the learning works of genuine amateurs. These aren’t stories that use big words for the sake of sounding smart (and at the cost of slowing down the stories), or over the top fanfiction-esque rip-offs of more established works (well, at least not since the Wheel of Time nomination in 2014). I’m talking about that subtler thing, the feeling that bubbles up from the deepest recesses of your brain and says “this story wasn’t written as well as it could be”.

I’ve been thinking about this a lot recently because about ¾ of the way through Too Like The Lightning by Ada Palmer, I started to feel myself put off [1]. And the only explanation I had for this was the word “amateurish” – which popped into my head devoid of any reason. This post is an attempt to unpack what that means (for me) and how I think it has influenced some of the genuine disagreements around rewarding authors in science fiction and fantasy [2]. Your tastes might be calibrated differently and if you disagree with my analysis, I’d like to hear about it.

Now, there are times when you know something is amateurish and that’s okay. No one should be surprised that John Ringo’s Paladin of Shadows series – books that he explicitly wrote for himself – is parsed by most people as pretty amateurish. When pieces aren’t written explicitly for the author only, I expect some consideration of the audience. Ideally the writer should be having fun too, but if they’re writing for publication, they have to be writing to an audience. This doesn’t mean that they must write exactly what people tell them they want. People can be a terrible judge of what they want!

This also doesn’t necessarily imply pandering. People like to be challenged. If you look at the most popular books of the last decade on Goodreads, few of them could be described as pandering. I’m familiar with two of the top three books there and both of them kill off a fan favourite character. People understand that life involves struggle. Lois McMaster Bujold – who has won more Hugo awards for best novel than any living author – once said she generated plots by considering “what’s the worst possible thing I can do to these people?” The results of this method speak for themselves.

Meditating on my reaction to books like Paladin of Shadows in light of my experiences with Too Like The Lightning is what led me to believe that the more technically proficient “amateurish” books are those that lose sight of what the audience will enjoy and follow just what the author enjoys. This may involve a character that the author heavily identifies with – the Marty Stu or Mary Sue phenomena – who is lovingly described overcoming obstacles and generally being “awesome” but doesn’t “earn” any of this. It may also involve gratuitous sex, violence, engineering details, gun details, political monologuing (I’m looking at you, Atlas Shrugged), or tangents about constitutional history (this is how most of the fiction I write manages to become unreadable).

I realized this when I was reading Too Like the Lightning. I loved the world building and I found the characters interesting. But (spoilers!) when it turned out that all of the politicians were literally in bed with each other or when the murders the protagonist carried out were described in grisly, unrepentant detail, I found myself liking the book a lot less. This is – I think – what spurred the label amateurish in my head.

I think this is because (in my estimation), there aren’t a lot of people who actually want to read about brutal torture-execution or literally incestuous politics. It’s not (I think) that I’m prudish. It seemed like some of the scenes were written to be deliberately off-putting. And I understand that this might be part of the theme of the work and I understand that these scenes were probably necessary for the author’s creative vision. But they didn’t work for me and they seemed like a thing that wouldn’t work for a lot of people that I know. They were discordant and jarring. They weren’t pulled off as well as they would have had to be to keep me engaged as a reader.

I wonder if a similar process is what caused the changes that the Sad Puppies are now lamenting at the Hugo Awards. To many readers, the sexualized violence or sexual violence that can find its way into science fiction and fantasy books (I’d like to again mention Paladin of Shadows) is incredibly off-putting. I find it incredibly off-putting. Books that incorporate a lot of this feel like they’re ignoring the chunk of audience that is me and my friends and it’s hard while reading them for me not to feel that the writers are fairly amateurish. I normally prefer works that meditate on the causes and uses of violence when they incorporate it – I’d put N.K. Jemisin’s truly excellent Broken Earth series in this category – and it seems like readers who think this way are starting to dominate the Hugos.

For the people who previously had their choices picked year after year, this (as well as all the thinkpieces explaining why their favourite books are garbage) feels like an attack. Add to this the fact that some of the books that started winning had a more literary bent and you have some fans of the genre believing that the Hugos are going to amateurs who are just cruising to victory by alluding to famous literary works. These readers look suspiciously on crowds who tell them they’re terrible if they don’t like books that are less focused on the action and excitement they normally read for. I can see why that’s a hard sell, even though I’ve thoroughly enjoyed the last few Hugo winners [3].

There’s obviously an inferential gap here, if everyone can feel angry about the crappy writing everyone else likes. For my part, I’ll probably be using “amateurish” only to describe books that are technically deficient. For books that are genuinely well written but seem to focus more on what the author wants than (on what I think) their likely audience wants, well, I won’t have a snappy term, I’ll just have to explain it like that.

Footnotes

[1] A disclaimer: the work of a critic is always easier than that of a creator. I’m going to be criticizing writing that’s better than my own here, which is always a risk. Think of me not as someone criticizing from on high, but frantically taking notes right before a test I hope to barely pass. ^

[2] I want to separate the Sad Puppies, who I view as people sad that action-packed books were being passed over in favour of more literary ones from the Rabid Puppies, who just wanted to burn everything to the ground. I’m not going to make any excuses for the Rabid Puppies. ^

[3] As much as I can find some science fiction and fantasy too full of violence for my tastes, I’ve also had little to complain about in the past, because my favourite author, Lois McMaster Bujold, has been reliably winning Hugo awards since before I was born. I’m not sure why there was never a backlash around her books. Perhaps it’s because they’re still reliably space opera, so class distinctions around how “literary” a work is don’t come up when Bujold wins. ^

Falsifiable, Physics, Politics

The (Nuclear) International Monitoring System

Under the Partial Test Ban Treaty (PTBT), all nuclear tests except for those underground are banned. Under the Non-Proliferation Treaty (NPT), only the permanent members of the UN Security Council are legally allowed to possess nuclear weapons. Given the public outcry over fallout that led to the PTBT and the worries over widespread nuclear proliferation that led to the NPT, it’s clear that we require something beyond pinky promises to verify that countries are meeting the terms of these treaties.

But how do we do so? How can you tell when a country tests an atomic bomb? How can you tell who did it? And how can one differentiate a bomb on the surface from a bomb in the atmosphere from a bomb in space from a bomb underwater from a bomb underground?

I’m going to focus on two efforts to monitor nuclear weapons: the national security apparatus of the United States and the Comprehensive Test Ban Treaty Organization (CTBTO) Preparatory Commission’s International Monitoring System (IMS). Monitoring falls into five categories: Atmospheric Radionuclide Monitoring, Seismic Monitoring, Space-based Monitoring, Hydroacoustic Monitoring, and Infrasound Monitoring.

Atmospheric Radionuclide Monitoring

Nuclear explosions generate radionuclides, either by dispersing unreacted fuel, as direct products of fission, or by interactions between neutrons and particles in the air or ground. These radionuclides are widely dispersed from any surface testing, while only a few fission products (mainly various radionuclides of the noble gas xenon) can escape from properly conducted underground tests.

For the purposes of minimizing fallout, underground tests are obviously preferred. But because they only emit small amounts of a few xenon radionuclides, they are much harder for radionuclide monitoring to detect.

Detecting physical particles is relatively easy. There are 80 IMS stations scattered around the world. Each is equipped with an air intake and a filter. Every day, the filter is changed and then prepared for analysis. Analysis involves waiting a day (for irrelevant radionuclides to decay), then reading decay events from the filter for a further day. This gives scientists an idea of what radioactive elements are present.

Any deviations from the baseline at a certain station can be indicative of a nuclear weapon test, a nuclear accident, or changing wind patterns bringing known radionuclides (e.g. from a commercial reactor) to a station where they normally aren’t present. Wind analysis and cross validation with other methods are used to corroborate any suspicious events.

Half of the IMS stations are set up to do the more difficult xenon monitoring. Here air is pumped through a material with a reasonably high affinity for xenon. Apparently activated charcoal will work, but more sophisticated alternatives are being developed. The material is then induced to release the xenon (with activated charcoal, this is accomplished via heating). This process is repeated several times, with the output of each step pumped to a fresh piece of activated charcoal. Multiple cycles ensure that only relatively pure xenon gets through to analysis.

Once xenon is collected, isotope analysis must be done to determine which (if any) radionuclides of xenon are present. This is accomplished either by comparing the beta decay of the captured xenon with its gamma decay, or looking directly at gamma decay with very precise gamma ray measuring devices. Each isotope of xenon has a unique half-life (which affects the frequency with which it emits beta- and gamma-rays) and a unique method of decay (which determines if the decay products are primarily alpha-, beta-, or gamma-rays). Comparing the observed decay events to these “fingerprints” allows for the relative abundance of xenon nuclides to be estimated.
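To make the fingerprinting idea concrete, here’s a toy sketch: treat the measured activity as a sum of exponential decays with known half-lives and solve for how much of each isotope must have been present. Real stations work with beta-gamma coincidence spectra rather than a simple curve fit; the half-lives below are approximate and the “measurements” are synthetic.

```python
# Toy version of isotope fingerprinting: model observed activity as a sum of
# exponential decays with known half-lives, then solve for the initial
# activity of each xenon radionuclide. Data here is synthetic.
import numpy as np

half_lives_days = {"Xe-133": 5.25, "Xe-135": 0.38}   # approximate half-lives
decay_constants = {k: np.log(2) / v for k, v in half_lives_days.items()}

t = np.linspace(0, 3, 50)   # measurement times, in days
# Design matrix: one column per isotope, holding its normalized decay curve.
A = np.column_stack([np.exp(-lam * t) for lam in decay_constants.values()])

# Synthetic "measured" activity: 100 units of Xe-133, 40 of Xe-135, plus noise.
observed = 100 * A[:, 0] + 40 * A[:, 1]
observed += np.random.default_rng(0).normal(0, 1, t.size)

estimates, *_ = np.linalg.lstsq(A, observed, rcond=None)
for isotope, activity in zip(decay_constants, estimates):
    print(f"{isotope}: initial activity ≈ {activity:.0f}")
```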

There are some background xenon radionuclides from nuclear reactors and even more from medical isotope production (where we create unstable nuclides in nuclear reactors for use in medical procedures). Looking at global background data you can see the medical isotope production in Ontario, Europe, Argentina, Australia and South Africa. I wonder if this background effect makes world powers cautious about new medical isotope production facilities in countries that are at risk of pursuing nuclear weapons. Could Iran’s planned medical isotope complex have been used to mask nuclear tests?

Not content merely to host several monitoring stations and be party to the data of the whole global network of IMS stations, the United States also has the WC-135 “Constant Phoenix” plane, a Boeing C-135 equipped with mobile versions of particulate and xenon detectors. The two WC-135s can be scrambled anywhere a nuclear explosion is suspected to look for evidence. A WC-135 gave us the first confirmation that the blast from the 2006 North Korean nuclear test was indeed nuclear, several days before the IMS station in Yellowknife, Canada confirmed a spike in radioactive xenon and wind modelling pinpointed the probable location as inside North Korea.

Seismic Monitoring

Given that fewer monitoring stations are equipped with xenon radionuclide detectors and that the background “noise” from isotope production can make radioactive xenon from nuclear tests hard to positively identify, it might seem like nuclear tests are easy to hide underground.

That isn’t the case.

A global network of seismometers ensures that any underground nuclear explosion is promptly detected. These are the same seismometers that organizations like the USGS (United States Geological Survey) use to detect and pinpoint earthquakes. In fact, the USGS provides some of the 120 auxiliary stations that the CTBTO can call on to supplement its fifty seismic monitoring stations.

Seismometers are always on, looking for seismic disturbances. Substantial underground nuclear tests produce shockwaves that are well within the detection limit of modern seismometers. The sub-kiloton North Korean nuclear test in 2006 appears to have registered as equivalent to a magnitude 4.1 earthquake. A quick survey of ongoing earthquakes would probably show you dozens of detected events less powerful than even that small North Korean test.

This probably leads you to the same question I found myself asking, namely: “if earthquakes are so common and these detectors are so sensitive, how can they ever tell nuclear detonations from earthquakes?”

It turns out that underground nuclear explosions might rattle seismometers like earthquakes do, but they do so with characteristics very different from most earthquakes.

First, the waveform is different. Imagine you’re holding a slinky and a friend is holding the other end. There are two main ways you can create waves. The first is by shaking it from side to side or up and down. Either way, there’s a perspective from which these waves will look like the letter “s”.

The second type of wave can be made by moving your arm forward and backwards, like you’re throwing and catching a ball. These waves will cause moving regions where the slinky is bunched more tightly together and other regions where it is more loosely packed.

These are analogous to the two main types of body waves in seismology. The first (the s-shaped one) is called an S-wave (although the “S” here stands for “shear” or “secondary” and only indicates the shape by coincidence), while the second is called a P-wave (for “pressure” or “primary”).

I couldn’t find a good free version of this, so I had to make it myself. Licensed (like everything I create for my blog) CC-BY-NC-SA v4.0.

 

Earthquakes normally have a mix of P-waves and S-waves, as well as surface waves created by interference between the two. This is because earthquakes are caused by slipping tectonic plates. This slipping gives some lateral motion to the resulting waves. Nuclear explosions lack this side to side motion. The single, sharp impact from them on the surrounding rocks is equivalent to the wave you’d get if you thrust your arm forward while holding a slinky. It’s almost all P-wave and almost no S-wave. This is very distinctive against a background of earthquakes. The CTBTO is kind enough to show what this difference looks like; in this image, the top event is a nuclear test and the bottom event is an earthquake of a similar magnitude in a similar location (I apologize for making you click through to see the image, but I don’t host copyrighted images here).

There’s one further way that the waves from nuclear explosions stand out. They’re caused by a single point source, rather than kilometers of rock. This means that when many seismic stations work together to find the cause of a particular wave, they’re actually able to pinpoint the source of any explosion, rather than finding a broad front like they would for an earthquake.

The fifty IMS stations automatically provide a continuous stream of data to the CTBTO, which sifts through this data for any events that are overwhelmingly P-Waves and have a point source. Further confirmation then comes from the 120 auxiliary stations, which provide data on request. Various national and university seismometer programs get in on this too (probably because it’s good for public relations and therefore helps to justify their budgets), which is why it’s not uncommon to see several estimates of yield soon after seismographs pick up on nuclear tests.
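As a cartoon of that screening step (the real pipeline uses amplitude ratios across many stations and frequency bands; the threshold here is made up):

```python
# Cartoon of the discrimination step: flag events whose P-wave amplitude
# dwarfs their S-wave amplitude. Real discrimination uses amplitude ratios
# across many stations and frequency bands; this threshold is illustrative.
def looks_like_explosion(p_amplitude: float, s_amplitude: float,
                         ratio_threshold: float = 10.0) -> bool:
    """Return True if the event is overwhelmingly P-wave."""
    if s_amplitude == 0:
        return True
    return p_amplitude / s_amplitude >= ratio_threshold

events = [
    {"id": "evt-1", "p": 8.0, "s": 9.5},   # earthquake-like: plenty of S-wave
    {"id": "evt-2", "p": 7.0, "s": 0.4},   # explosion-like: almost all P-wave
]
print([e["id"] for e in events if looks_like_explosion(e["p"], e["s"])])   # ['evt-2']
```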

Space Based Monitoring

This is the only type of monitoring that isn’t done by the CTBTO Preparatory Commission, which means that it is handled by state actors – whose interests necessarily veer more towards intelligence gathering than monitoring treaty obligations per se.

The United States began its space-based monitoring program in response to the Partial Test Ban Treaty (also known as the Limited Test Ban Treaty), which left verification explicitly to the major parties involved. The CTBTO Preparatory Commission was actually formed in response to a different treaty, the Comprehensive Test Ban Treaty, which is not yet fully in force (hence why the organization ensuring compliance with it is called the “Preparatory Commission”).

The United States first fulfilled its verification obligations with the Vela satellites, which were equipped with gamma-ray detectors, x-ray detectors, electromagnetic pulse detectors (which can detect the electro-magnetic pulse from high-altitude nuclear detonations) and an optical sensor called a bhangmeter.

Bhangmeters (the name is a reference to a strain of marijuana, with the implied subtext that you’d have to be high to believe they would work) are composed of a photodiode (a device that produces current when illuminated), a timer, and some filtering components. Bhangmeters are set up to look for the distinctive nuclear “double flash”, caused when the air compressed by a nuclear blast briefly obscures the central fireball.

The bigger a nuclear explosion, the larger the compression and the longer the central fireball is obscured. The timer picks up on this, estimating nuclear yield from the delay between the initial light and its return.
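Published yield-timing relations take a power-law form; here’s the idea as a sketch, with placeholder constants I made up rather than any real calibration:

```python
# Published yield-timing fits take a power-law form: t_minimum ≈ a * yield**b.
# The constants below are placeholders for illustration, not a real calibration.
A_SECONDS = 0.0025    # placeholder scale factor
B_EXPONENT = 0.45     # placeholder exponent

def yield_from_minimum_time(t_min_seconds: float) -> float:
    """Invert the toy power law to get an estimated yield in kilotons."""
    return (t_min_seconds / A_SECONDS) ** (1.0 / B_EXPONENT)

print(f"{yield_from_minimum_time(0.030):.0f} kt")   # a ~30 ms dip maps to ~250 kt under these toy constants
```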

The bhangmeter works because very few natural (or human) phenomena produce flashes that are as bright or distinctive as nuclear detonations. A properly calibrated bhangmeter will filter out continuous phenomena like lightning (or will find them too faint to detect). Other very bright events, like comets breaking up in the upper atmosphere, only provide a single flash.

There’s only been one possible false positive since the bhangmeters went live in 1967; a double flash was detected in the Southern Indian Ocean, but repeated sorties by the WC-135s detected no radionuclides. The event has never been conclusively proved to be nuclear or non-nuclear in origin and remains one of the great unsolved mysteries of the age of widespread atomic testing.

By the time of this (possible) false positive, the bhangmeters had also detected 41 genuine nuclear tests.

The Vela satellites are no longer in service, but the key technology they carried (bhangmeters, x-ray detectors, and EMP detectors) lives on in the US GPS satellite constellation, which does double duty as its space-based nuclear sentinels.

One last historical note: when looking into unexplained gamma-ray readings produced by the Vela satellites, US scientists discovered gamma-ray bursts, an energetic astronomical phenomenon associated with supernovae and merging neutron stars.

Hydroacoustic Monitoring

Undersea explosions don’t have a double flash, because steam and turbulence quickly obscure the central fireball and don’t clear until well after the fireball has subsided. It’s true that radionuclide detection should eventually turn up evidence of any undersea nuclear tests, but it’s still useful to have a more immediate detection mechanism. That’s where hydroacoustic monitoring comes in.

There are actually two types of hydroacoustic monitoring. There are six stations that use true underwater monitoring with triplets of hydrophones (so that signal direction can be determined via triangulation), which are very sensitive, but also very expensive (as hydrophones must be installed at a depth of approximately one kilometer, where sound transmission is best). There are also five land-based stations, which use seismographs on steeply sloped islands to detect the seismic waves underwater sounds make when they hit land. Land-based monitoring is less accurate, but requires little in the way of specialized hardware, making it much cheaper.
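Here’s a minimal sketch of why a triplet gives you a bearing: for a plane wave, the arrival-time difference across each pair of hydrophones depends on the angle between that baseline and the direction the sound is travelling, and two baselines pin that direction down. The geometry and numbers below are made up.

```python
# Bearing estimation from arrival-time differences across a hydrophone triplet.
# Positions and the simulated source bearing are made up for illustration.
import numpy as np

SOUND_SPEED = 1480.0                      # m/s, rough speed of sound in seawater
sensors = np.array([[0.0, 0.0],           # hydrophone positions in metres
                    [2000.0, 0.0],
                    [0.0, 2000.0]])

# Simulate a plane wave travelling along a 40 degree bearing.
true_direction = np.array([np.cos(np.radians(40.0)), np.sin(np.radians(40.0))])
baselines = sensors[1:] - sensors[0]                  # two independent sensor pairs
delays = baselines @ true_direction / SOUND_SPEED     # arrival-time differences, seconds

# Each baseline gives one linear equation: baseline . direction = speed * delay.
direction = np.linalg.solve(baselines, SOUND_SPEED * delays)
direction /= np.linalg.norm(direction)
bearing = np.degrees(np.arctan2(direction[1], direction[0]))
print(f"estimated bearing: {bearing:.1f} degrees")    # 40.0
```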

In either case, data is streamed directly to CTBTO headquarters in Vienna, where it is analyzed and forwarded to states that are party to the CTBT. At the CTBTO, the signal is split into different channels based on a known library of undersea sounds, and explosions are separated from natural phenomena (like volcanos, tsunamis, and whales) and man-made noises (like gas exploration, commercial shipping, and military drills). Signal processing and analysis – especially of hydrophone data – is a very mature field, so the CTBTO doesn’t lack for techniques to refine its estimates of events.

Infrasound Monitoring

Infrasound monitoring stations are the last part of the global monitoring system and represent the best way for the CTBTO (rather than national governments with the resources to launch satellites) to detect atmospheric nuclear tests. Infrasound stations try to pick up the very low frequency sound waves created by nuclear explosions – and a host of other things, like volcanos, planes, and mining.

A key consideration with infrasound stations is reducing background noise. For this, being far away from human habitation and blocked from the wind is ideal. Whenever this cannot be accomplished (e.g. there’s very little cover from the wind in Antarctica, where several of the sixty stations are), more infrasound arrays are needed.

The components of the infrasound arrays look very weird.

Specifically, they look like a bunker that tried to eat four Ferris wheels. Each array actually contains three to eight of these monstrosities. From the CTBTO via Wikimedia Commons.

 

 

What you see here are a bunch of pipes that all feed through to a central microbarometer, which is what actually measures the infrasound by detecting slight changes in air pressure. This setup filters out a lot of the wind noise and mostly just lets infrasound through.

Like the hydroacoustic monitoring system, data is sent to the CTBTO in real time and analyzed there, presumably drawing on a similar library of recorded nuclear test detonations and employing many of the same signal processing techniques.

Ongoing research into wind noise reduction might eventually make the whole set of stations much more sensitive than it is now. Still, even the current iteration of infrasound monitoring should be enough to detect any nuclear tests in the lower atmosphere.


The CTBTO has a truly great website that really helped me put together this blog post. They provide a basic overview of the four international monitoring systems I described here (they don’t cover space-based monitoring because it’s outside of their remit), as well as pictures, a glossary, and a primer on the analysis they do. If you’d like to read more about how the international monitoring system works and how it came into being, I recommend visiting their website.

This post, like many of the posts in my nuclear weapon series, came about because someone asked me a question about nuclear weapons and I found I couldn’t answer quite as authoritatively as I would have liked. Consequently, I’d like to thank Cody Wild and Tessa Alexanian for giving me the impetus to write this.

This post is part of a series on special topics in nuclear weapons. The index for all of my writing on nuclear weapons can be found here. Previous special topics posts include laser enrichment and the North Korean nuclear program.

History, Quick Fix

Against Historical Narratives

There is perhaps no temptation greater to the amateur (or professional) historian than to take a set of historical facts and draw from them a grand narrative. This tradition has existed at least since Gibbon wrote The History of the Decline and Fall of the Roman Empire, with its focus on declining civic virtue and the rise of Christianity.

Obviously, it is true that things in history happen for a reason. But I think the case is much less clear that these reasons can be marshalled like soldiers and made to march in neat lines across the centuries. What is true in one time and place may not necessarily be true in another. When you fall under the sway of a grand narrative, when you believe that everything happens for a reason, you may become tempted to ignore all of the evidence to the contrary.

Instead of praying at the altar of grand narratives, I’d like to suggest that you embrace the ambiguity of history, an ambiguity that exists because…

Context Is Tricky

Here are six sentences someone could tell you about their interaction with the sharing economy:

  • I stayed at an Uber last night
  • I took an AirBnB to the mall
  • I deliberately took an Uber
  • I deliberately took a Lyft
  • I deliberately took a taxi
  • I can’t remember which ride-hailing app I used

Each of these sentences has an overt meaning. They describe how someone spent a night or got from place A to place B. They also have a deeper meaning, a meaning that only makes sense in the current context. Imagine your friend told you that they deliberately took an Uber. What does it say about them that they deliberately took a ride in the most embattled and controversial ridesharing platform? How would you expect their political views to differ from someone who told you they deliberately took a taxi?

Even simple statements carry a lot of hidden context, context that is necessary for full understanding.

Do you know what the equivalent statements to the six I listed would be in China? How about in Saudi Arabia? I can tell you that I don’t know either. Of course, it isn’t particularly hard to find these out for China (or Saudi Arabia). You may not find a key written down anywhere (especially if you can only read English), but all you have to do is ask someone from either country and they could quickly give you a set of contextual equivalents.

Luckily historians can do the same… oh. Oh damn.

When you’re dealing with the history of a civilization that “ended” hundreds or thousands of years ago, you’re going to be dealing with cultural context that you don’t fully understand. Sometimes people are helpful enough to write down “Uber=kind of evil” and “supporting taxis = very left wing, probably vegan & goes to protests”. A lot of the time they don’t though, because that’s all obvious cultural context that anyone they’re writing to would obviously have.

And sometimes they do write down even the obvious stuff, only for it all to get burned when barbarians sack their city, leaving us with no real way to understand if a sentence like “the opposing orator wore red” has any sort of meaning beyond a statement of sartorial critique or not.

All of this is to say that context can make or break narratives. Look at the play “Hamilton”. It’s a play aimed at urban progressives. The titular character’s strong anti-slavery views are supposed to code to a modern audience that he’s on the same political team as them. But if you look at American history, it turns out that support for abolishing slavery (and later, abolishing segregation) and support for big corporations over the “little guy” were correlated until very recently. In the 1960s through 1990s, there was a shift such that the Democrats came to stand for both civil rights and supporting poorer Americans, instead of just the latter. Before this shift, Democrats were the party of segregation, not that you’d know it to see them today.

Trying to tie Hamilton into a grander narrative of (eventual) progressive triumph erases the fact that most of the modern audience would strenuously disagree with his economic views (aside from urban neo-liberals, who are very much in Hamilton’s mold). Audiences end up leaving the play with a story about their own intellectual lineage that is far from correct, a story that may cause them to feel smugly superior to people of other political stripes.

History optimized for this sort of team or political effect turns many modern historians or history writers into…

Unreliable Narrators

Gaps in context – modern readers missing the true significance of gestures, words, and acts steeped in a particular extinct culture – combined with the fact that it is often impossible to really know why someone in the past did something, mean that some of history will always be filled in with our best guesses.

Professor Mary Beard really drove this point home for me in her book SPQR. She showed me how history that I thought was solid was often made up of myths, exaggerations, and wishful thinking on the parts of modern authors. We know much less about Rome than many historians had made clear to me, probably because any nuance or alternative explanation would ruin their grand theories.

When it comes to so much of the past, we genuinely don’t know why things happened.

I recently heard two colleagues arguing about The Great Divergence – the unexplained difference in growth rates between Europe and the rest of the world that became apparent in the 1700s and 1800s. One was very confident that it could be explained by access to coal. The other was just as confident that it could be explained by differences in property rights.

I waded in and pointed out that Wikipedia lists fifteen possible explanations, all of which or none of which could be true. Confidence about the cause of the great divergence seems to me a very silly thing. We cannot reproduce it, so all theories must be definitionally unfalsifiable.

But both of my colleagues had read narrative accounts of history. And these narrative accounts had agendas. One wished to show that all peoples had the same inherent abilities and so cast The Great Divergence as chance. The other wanted to show how important property rights are and so made those the central factor in it. Neither gave much time to the other explanation, or any of the thirteen others that a well trafficked and heavily edited Wikipedia article finds equally credible.

Neither agenda was bad here. I am in fact broadly in favour of both. Yet their effect was to give two otherwise intelligent and well-read people a myopic view of history.

So much of narrative history is like this! Authors take the possibilities they like best, or that support their political beliefs the best, or think will sell the best, and write them down as if they are the only possibilities. Anyone who is unlucky enough to read such an account will be left with a false sense of certainty – and in ignorance of all the other options.


Of course, I have an agenda too. We all do. It’s just that my agenda is literally “the truth resists simplicity“. I like the messiness of history. It fits my aesthetic sense well. It’s because of this sense that I’d like to encourage everyone to make their next foray into history free of narratives. Use Wikipedia or a textbook instead of a bestselling book. Read something by Mary Beard, who writes as much about historiography as she writes about history. Whatever you do, avoid books with blurbs praising the author for their “controversial” or “insightful” new theory.

Leave, just once, behind those famous narrative works like “Guns, Germs, and Steel” or “The History of the Decline and Fall of the Roman Empire” and pick up something that embraces ambiguity and doesn’t bury messiness behind a simple agenda.

Economics, Politics

When To Worry About Public Debt

I watch a lot of political debates with my friends. A couple of them have turned to me after watching heated arguments about public debt and (because I have a well-known habit of reading monetary policy blogs) asked me who is right. I hear questions like:

Is it true that public debt represents an unfair burden on our hypothetical grandchildren? Is all this talk about fiscal discipline and balanced budgets pointless? Is it really bad when public debt gets over 100% of a country’s GDP? How can the threat of defaulting on loans lead to inflation and ruin?

And what does all this mean for Ontario? Is Doug Ford right about the deficit?

This is my attempt to sort this all out in a public and durable form. Now when I’ve taken a political debate drinking game too far, I’ll still be able to point people towards the answers to their questions.

(Disclaimer: I’m not an economist. Despite the research I did for it and the care with which I edited, this post may contain errors, oversimplifications, or misunderstandings.)

Is Public Debt A Burden On Future Generations?

Among politicians of a certain stripe, it’s common to compare the budget of a country to the budget of a family. When a family is budgeting, any shortfall must be paid for via loans. Left unspoken is the fact that many families find themselves in a rather large amount of debt early on – because they need a mortgage to buy their dwelling. The only way a family can ever get out of debt is by maintaining a monthly surplus until their mortgage is paid off, then being careful to avoid taking on too much new debt.

Becoming debt free is desirable to individuals for two reasons. First, it makes their retirement (feel) much more secure. Given that retirement generally means switching to a fixed income or living off savings, it can be impossible to pay off the principal of a debt after someone makes the decision to retire.

Second, parents often desire to leave something behind for their children. This is only possible if their assets outweigh their debts.

Countries have to grapple with neither of these responsibilities. While it is true that the average age in many countries is steadily increasing, countries that have relatively open immigration policies and are attractive to immigrants largely avoid this problem. Look at how Canada and the United States compare to Italy and Japan in working age population percentage, for example.

Graph showing % of working age population in 4 OECD countries: Japan, Canada, USA, Italy.
After seeing this graph, I realized how hyperbolic it was to talk about Japan’s aging population. Source: OECD.

 

Even in Japan, where this is “dire”, the percentage of the population that is working age is equivalent to the percentage of the population that was working age in Canada or America in 1970. As lifespans increase, we may have to expand our definition of working age. But some combination of immigration, better support for parents, and better support for older citizens who wish to keep working will prevent us from ever getting to a point where it’s sensible to talk about a country “retiring”.

Since countries don’t “retire”, they don’t have to cope with the worry of “needing to work later to pay off that debt”. Since countries don’t have children, they don’t have to worry about having something to pass on. Countries don’t ever actually have to pay back all of their debt. They can continue to roll it over indefinitely, as long as someone is willing to continue to loan them money at a rate they’re willing to pay.

What I mean by “rolling over”, is that countries can just get a new loan for the same amount as their last one, as soon as the previous loan comes due. If interest rates have risen (either in general, or because the country is a greater risk) since their last loan, the new loan will be more expensive. If they’ve fallen, it will be cheaper. Rolling over loans changes the interest rate a country is paying, but doesn’t change the amount it owes.
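As a quick sketch with made-up numbers:

```python
# Rolling a loan over: the principal owed doesn't change, only the yearly
# interest bill does, depending on the rate available at rollover.
# All numbers are illustrative.
principal = 100_000_000_000    # $100 billion of maturing debt

old_rate = 0.020               # rate on the maturing loan
new_rate = 0.025               # rate offered on the replacement loan

print(f"principal owed before and after: ${principal:,}")
print(f"yearly interest before: ${principal * old_rate:,.0f}")
print(f"yearly interest after:  ${principal * new_rate:,.0f}")
```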

Is Talk Of Discipline Pointless?

No.

Even if countries don’t really ever have to pay back the principal on their loans, they do have to make interest payments (borrowing to pay these is possible, but it isn’t a good look and can pretty quickly lead to dangerous levels of debt). The effect of these payments ranges from “it’s mildly annoying that we can’t spend that money on something better” to “we’re destroying our ecosystem growing bananas so that we have something to sell for cash to make our interest payments”. Lack of discipline and excessive debt levels can move a country closer to the second case.

In a well-integrated and otherwise successful economy with ample room in its governmental budget, interest payments are well worth the advantage of getting money early. When this money is used to create economic benefits that accrue faster than the interest payments, countries are net beneficiaries. If you take out a loan that charges 1-2% interest a year and use it to build a bridge that drives 4% economic growth for the next forty years, you’re ahead by 2-3% year on year. This is a good deal.
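Here’s that arithmetic as a deliberately crude compounding sketch, assuming the bridge really does return 4% a year on the borrowed amount while the loan costs 2% (all figures are illustrative):

```python
# Crude compounding comparison: borrow at 2%, get a 4% yearly return on the
# borrowed amount for forty years. All figures are made up for illustration.
loan = 1_000_000_000    # a $1B bridge
borrow_rate = 0.02
return_rate = 0.04
years = 40

interest_cost = loan * ((1 + borrow_rate) ** years - 1)
economic_benefit = loan * ((1 + return_rate) ** years - 1)
print(f"cumulative interest cost:    ${interest_cost:,.0f}")
print(f"cumulative economic benefit: ${economic_benefit:,.0f}")
print(f"net gain:                    ${economic_benefit - interest_cost:,.0f}")
```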

Unlike most talk about interest rates, which tends to be entirely hypothetical, I really do mean that 1-2% figure. That’s actually higher than the average rate the US government has been paying to borrow over the last decade (Germany had it even better; they briefly paid negative interest rates). Governments – at least those with a relatively good track record around money – really do have a superpower in how cheaply they can get money, so if nothing else, it’s worth keeping debt relatively low so that they don’t lose their reputation for responsibility and continue to have access to cheap money for when they really need it.

That’s the case in a moderately disciplined developed nation with adequate foreign reserves, at least. In a cash-poor or underdeveloped economy where a decent portion of any loan is lost to cronyism and waste, the case for loans being positive is much more… mixed. For these countries, discipline means “taking no loans at all”.

When discipline falls apart and debt levels rise too high, very bad things start to happen.

Is 100% of GDP The Line Beyond Which Debt Shouldn’t Rise?

There is nothing special about 100% of GDP, except that people think it is special.

Sometimes, people talk about markets like they’re these big impersonal systems that have no human input. This feels true because the scale of the global financial system is such that from the perspective of pretty much any individual person, they’re impersonal and impossible to really influence. But ultimately, other than a few high frequency trading platforms, all decisions in a market have to be made by humans.

Humans have decided that in certain cases, it’s bad when a country has more than 100% of its GDP in debt. This means that it becomes much more expensive to get new loans (and because of the constant rollover, even old loans eventually become new loans) when a country crosses this Rubicon, which in turn makes them much more likely to default. There’s some element of self-fulfilling prophecy here!

(Obviously there does have to be some point where a country really is at risk from its debt load and obviously this needs to be scaled to country size and wealth to not be useless. I think people have chosen 100% of GDP more because it’s a nice round number and it’s simple to calculate, not because it has particularly great inherent predictive power, absent the power it has as a self-fulfilling prophecy. Maybe the “objectively correct” number is in fact 132.7% of the value of all exports, or 198% of 5-year average government revenues… In either case, we’ve kind of lost our chance; any number calculated now would be heavily biased by the crisis of confidence that can happen when debt reaches 100% of GDP.)

That said, comparing a country’s debt load to its GDP without making adjustments is a recipe for confusion. While everyone was fretting about Greece having ~125% of its GDP in debt, Japan was carrying 238% of its GDP in debt.

There are two reasons that Japan’s debt is much less worrying than Greece’s.

First, there’s the issue of who’s holding that debt. A very large portion of Japanese debt is held by its own central bank. By my calculations (based off the most recent BOJ numbers), the Bank of Japan is holding approximately 44% of the Japanese government’s debt. Given that the Bank of Japan is an organ of the Japanese Government (albeit an arm’s length one), this debt is kind of owed by the government of Japan, to the government of Japan. When 44% of every loan payment might ultimately find its way back to you, your loan payments become less scary.
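The calculation itself is just a ratio; to redo it, substitute the current Bank of Japan and Ministry of Finance figures into something like this (the values below are placeholders, not the numbers I used):

```python
# The 44% figure is just BOJ holdings of government debt divided by total
# government debt outstanding. The values below are placeholders; substitute
# the current Bank of Japan and Ministry of Finance numbers.
boj_holdings_trillion_yen = 450
total_government_debt_trillion_yen = 1030

share = boj_holdings_trillion_yen / total_government_debt_trillion_yen
print(f"BOJ holds ~{share:.0%} of Japanese government debt")
```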

Second, there’s the issue of denomination. Greek public debts are denominated in Euros, a currency that Greece doesn’t control. If Greece wants €100, it must collect €100 in taxes from its citizens. Greece cannot just create Euros.

Japanese debt is denominated in Yen. Because Japan controls the yen, it has two options for repaying ¥100 of debt. It can collect ¥100 in taxes – representing ¥100 worth of valuable work. Or it can print ¥100. There are obvious consequences to printing money, namely inflation. But given that Japan has struggled with chronic deflation and has consistently underperformed the inflation targets economists think it needs to meet, it’s clear that a bit of inflation isn’t the worst thing that could happen to it.

When evaluating whether a debt burden is a problem, you should always consider the denomination of the debt, who the debtholders are, and how much inflation a country can tolerate. It is always worse to hold debt in a denomination that you don’t control. It’s always worse to owe money to people who aren’t you (especially people more powerful than you), and it’s always easier to answer debt with inflation when your economy needs more inflation anyways.

This also suggests that government debt is much more troubling when it’s held by a sub-national institution than by a national institution (with the exception of Europe, where even nations don’t individually control the currency). In this case, monetary policy options are normally off the table and there’s normally someone who’s able to force you to pay your debt, no matter what that does to your region.

Developing countries very rarely issue debt in their own currency, mainly because no one is interested in buying it. This, combined with low foreign cash reserves puts them at a much higher risk of failing to make scheduled debt payments – i.e. experiencing an actual default.

What Happens If A Country Defaults?

No two defaults are exactly alike, so the consequences vary. That said, there do tend to be two common features: austerity and inflation.

Austerity happens for a variety of reasons. Perhaps spending levels were predicated on access to credit. Without that access, they can’t be maintained. Or perhaps a higher body mandated it; see for example Germany (well, officially, the EU) mandating austerity in Greece, or Michigan mandating austerity in Detroit.

Inflation also occurs for a variety of reasons. Perhaps the government tries to fill a budgetary shortfall and avoid austerity by printing bills. This flood of money bids up prices, ruins savings and causes real wages to decline. Perhaps it becomes hard to convince foreigners to accept the local currency in exchange for goods, so anything imported becomes very expensive. When many goods are imported, this can lead to very rapid inflation. Perhaps people in general lose faith in money (and so it becomes nearly worthless), maybe in conjunction with the debt crisis expanding to the financial sector and banks subsequently failing. Most likely, it will be some combination of these three, as well as others I haven’t thought to mention.

During a default, it’s common to see standards of living plummet, life savings disappear, and money flee into foreign denominations – promptly followed by currency controls, which prohibit sending cash outside of the country. Currency controls make leaving the country virtually impossible and make any necessary imports a bureaucratic headache. This is fine when the imports in question are water slides, but very bad when they’re chemotherapy drugs or rice.

On the kind-of-bright side, defaults also tend to lead to mass unemployment, which gives countries experiencing them a comparative advantage in labour-intensive industries. People commonly say “wages are low, so manufacturing moves there”, but that isn’t quite how international trade works. It’s not so much low wages that basic manufacturing jobs go in search of, but a workforce that can’t do anything more productive and less labour intensive. This looks the same, but has the correlation flipped. In either case, this influx of manufacturing jobs can contain within it the seed of later recovery.

If a country has sound economic management (like Argentina did in 2001), a default isn’t the end of the world. It can negotiate a “haircut” of its loans, giving its creditors something less than the full amount, but more than nothing. It might even be able to borrow again in a few years, although the rates that it will have to offer will start out in credit card territory and only slowly recover towards auto-loan territory.

When these trends aren’t managed by competent leadership, or when the same leaders (or leadership culture) that got a country into a mess are allowed to continue, the recovery tends to be moribund and the crises continual. See, for example, how Greece has limped along, never really recovering over the past decade.

Where Does Ontario Fit In?

My own home province of Ontario is currently in the midst of an election and one candidate, Doug Ford, has made the ballooning public debt the centrepiece of his campaign. Evaluating his claims gives us a practical example of how to evaluate claims of this sort in general.

First, Ontario doesn’t control the currency that its debt is issued in, which is an immediate risk factor for serious debt problems. Ontario also isn’t dominant enough within Canada to dictate monetary policy to the Federal Government. Inflation for the sake of saving Ontario would doom any sitting Federal government in every other province, so we can’t expect any help from the central bank.

Debt relief from the Federal government is possible, but it wouldn't come without strings attached. We'd definitely lose some of our budgetary authority, certainly face austerity, and even then, it might be too politically unpalatable to the rest of the country.

However, the sky is not currently falling. While debt rating services have lost some confidence in our willingness, if not our ability, to get spending under control – and our borrowing costs have consequently risen – we're not yet in a vicious downwards spiral. Our debt sits at a not actively unhealthy 39% of GDP and the interest rate on it is a non-usurious 4%.

That said, it’s increased more quickly than the economy has grown over the past decade. Another decade going on like we currently are certainly would put us at risk of a vicious cycle of increased interest rates and crippling debt.

Doug Ford’s emotional appeals about mortgaging our grandchildren’s future are exaggerated and false. I’ve already explained how countries don’t work like families. But there is a more pragmatic concern here. If we don’t control our spending now, on our terms, someone else – be it lenders in a default or the federal government in a bailout – will do it for us.

Imagine the courts forcing Ontario to service its debt before paying for social services and schools. Imagine the debt eating up a full quarter of the budget, with costs rising every time a loan is rolled over. Imagine our public services cut to the bone and our government paralyzed without workers. Things would get bad and the people who most need a helping hand from the government would be hit the hardest.

I plan to take this threat seriously and vote for a party with a credible plan to balance our budget in the short term.

If one even exists. Contrary to his protestations, Doug Ford isn't leading a party committed to reducing the deficit. He's publicly pledged himself to scrapping the carbon tax. Without the carbon tax, but with the rest of his platform intact, the deficit spending is going to continue (during a period of sustained growth, no less!). Doug Ford is either lying about what he's going to cut, or he's lying about ending the debt. That's not a gamble I particularly want to take.

I do hope that someone campaigns on a fully costed plan to restore fiscal order to Ontario. Because we are currently on the path to looking a lot like Greece.

Model, Politics, Quick Fix

The Awkward Dynamics of the Conservative Leadership Debates

Tanya Granic Allen is the most idealistic candidate I’ve ever seen take the stage in a Canadian political debate. This presents some awkward challenges for the candidates facing her, especially Mulroney and Elliot.

First, there’s the simple fact of her idealism. I think Granic Allen genuinely believes everything she says. For her, knowing what’s right and what’s wrong is simple. There isn’t a whole lot of grey. She even (bless her) probably believes that this will be an advantage come election time. People overwhelming don’t like the equivocation of politicians, so Granic Allen must assume her unequivocal moral stances will be a welcome change

For many people, it must be. Even for those who find it grating, it seems almost vulgar to attack her. It’s clear that she isn’t in this for herself and doesn’t really care about personal power. Whether she could maintain that innocence in the face of the very real need to make political compromises remains an open question, but for now she does represent a certain vein of ideological conservatism in a form that is unsullied by concerns around electability.

The problem here is that the stuff Granic Allen is pushing – “conscience rights” and “parental choice” – is exactly the sort of thing that can mobilize opposition to the PC party. Fighting against sex-ed and abortion might play well with the base, but Elliot and Mulroney know that unbridled social conservatism is one of the few things that can force the province’s small-l liberals to hold their noses and vote for the big-L Liberal Party. In an election where we can expect embarrassingly low turnout (it was 52% in 2014), this can play a major role.

A less idealistic candidate would temper themselves to help the party in the election. Granic Allen has no interest in doing this, which basically forces the pragmatists to navigate the tricky act of distancing themselves from her popular (with the base) proposals so that they might carry the general election.

Second, there’s the difficult interaction between the anti-rational and anti-empirical “common sense” conservatism pushed by Granic Allen and Ford and the pragmatic, informed conservatism of Elliot and Mulroney.

For Ford and Granic Allen, there’s a moral nature to truth. They live in a just world where something being good is enough to make it true. Mulroney and Elliot know that reality has an anti-partisan bias.

Take clean energy contracts. Elliot quite correctly pointed out that ripping up contracts willy-nilly will lead to a terrible business climate in Ontario. This is the sort of suggestion we normally see from the hard left (and have seen in practice in places the hard left idolizes, like Venezuela). But Granic Allen is committed to a certain vision of the world and in her vision of the world, government getting out of the way can’t help but be good.

Christine Elliot has (and this is a credit to her) shown that she’s not very ideological, in that she can learn how the world really works and subordinate ideology to truth, even when inconvenient. This would make her a more effective premier than either Granic Allen or Ford, but might hurt her in the leadership race. I’ve seen her freeze a couple times when she’s faced with defending how the world really works to an audience that is ideologically prevented from acknowledging the truth.

(See, for example, the look on her face when she was forced to defend her vote to ban conversion therapy. Elliot's real defence of that bill probably involves phrases like "stuck in the past", "ignorant quacks" and "vulnerable children who need to be protected from people like you". But she knew that a full-throated defence of gender dysphoria as a legitimate problem wouldn't win her any votes in this race.)

As Joseph Heath has pointed out, this tension between reality and ideology is responsible for the underrepresentation of modern conservatives among academics. Since the purpose of the academy is (broadly) truth-seeking, we shouldn’t be surprised to see it select against an ideology that explicitly rejects not only the veracity of much of the products of this truth seeking (see, for example, Granic Allen’s inability to clearly state that humans are causing climate change) but the worthwhileness of the whole endeavour of truth seeking.

When everything is trivially knowable via the proper application of “common-sense”, there’s no point in thinking deeply. There’s no point in experts. You just figure out what’s right and you do it. Anything else just confuses the matter and leaves the “little guy” to get shafted by the elites.

Third, the carbon tax has produced a stark, unvoiced split between the candidates. On paper, all are opposing it. In reality, only Ford and Granic Allen seriously believe they have any chance at stopping it. I’m fairly sure that Elliot and Mulroney plan to mount a token opposition, then quickly fold when they’re reminded that raising taxes and giving money to provinces is a thing the Federal Government is allowed to do. This means that they’re counting on money from the carbon tax to balance their budget proposals. They can’t say this, because Ford and Granic Allen are forcing them to the right here, but I would bet that they’re privately using it to reassure fiscally conservative donors about the deficit.

Being unable to discuss what is actually the centrepiece of their financial plans leaves Elliot and Mulroney unable to give very good information about how they plan to balance the budget. They have to fall back on empty phrases like "line-by-line audit" and "efficiencies", because anything else feels like political suicide.

This shows just how effective Granic Allen has been at being a voice for the grassroots. By staking out positions that resonate with the base, she’s forcing other leadership contestants to endorse them or risk losing to her. Note especially how she’s been extracting promises from Elliot and Mulroney whenever possible – normally around things she knows they don’t want to agree to but that play well with the base. By doing this, she hopes to remove much of their room to maneuver in the general election and prevent any big pivot to centre.

Whether this will work really depends on how costly politicians find breaking promises. Conventional wisdom holds that they aren't particularly bothered by it. I wonder if Granic Allen's idealism blinds her to this fact; I'm certain she wouldn't break a promise except under the greatest duress.

On the left, it’s very common to see a view of politics that emphasizes pure and moral people. The problem with the system, says the communist, is that we let greedy people run it. If we just replaced them all with better people, we’d get a fair society. Granic Allen is certainly no communist. But she does seem to believe in the “just need good people” theory of government – and whether she wins or loses, she’s determined to bring all the other candidates with her.

This isn’t an incrementalist approach, which is why it feels so foreign to people like me. Granic Allen seems to be making the decision that she’d rather the Conservatives lose (again!) to the Liberals than that they win without a firm commitment to do things differently.

The conflict in the Ontario Conservative party – the conflict that surfaced when Patrick Brown was torpedoed by his rivals – is around how far the party is willing to go to win. The Ontario Conservatives aren't the first party to go through this. When UK Labour members picked Jeremy Corbyn, they clearly put ideological purity ahead of electability.

In the Ontario PC party, Granic Allen and Ford have clearly staked out a position emphasizing purity. Mulroney and Elliot have just as clearly chosen to emphasize electability. Now it's up to the members. I'm very interested to see what they decide.

Economics, Model, Quick Fix

Not Just Zoning: Housing Prices Driven By Beauty Contests

No, this isn’t a post about very pretty houses or positional goods. It’s about the type of beauty contest described by John Maynard Keynes.

Imagine a newspaper that publishes one hundred pictures of strapping young men. It asks everyone to send in the names of the five that they think are most attractive. They offer a prize: if your selection matches the five men most often appearing in everyone else’s selections, you’ll win $500.

You could just do what the newspaper asked and send in the names of those men that you think are especially good looking. But that’s not very likely to give you the win. Everyone’s tastes are different and the people you find attractive might not be very attractive to anyone else. If you’re playing the game a bit smarter, you’ll instead pick the five people that you think have the broadest appeal.

You could go even deeper and realize that many other people will be trying to win and so will also be trying to pick the most broadly appealing people. Therefore, you should pick people that you think most people will view as broadly appealing (which differs from picking broadly appealing people if you know something about what most people find attractive that isn't widely known). This can go on indefinitely (although Yudkowsky's Law of Ultrafinite Recursion states that "in practice, infinite recursions are at most three levels deep", which gives me a convenient excuse to stop before this devolves into "I know you know I know that you know that…" ad infinitum).
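
To see the logic of the contest play out, here's a small simulation sketch (in Python; the "broad appeal" scores, the noise level, and the player counts are all made-up parameters) comparing players who vote their own taste against players who vote for what they think everyone else finds appealing:

```python
# Toy simulation of the newspaper contest. The "broad appeal" scores and
# noise level are made up; the point is the comparison, not the numbers.
import random
from collections import Counter

random.seed(0)
N_FACES, N_PLAYERS, PICKS = 100, 1000, 5

# Each face has some underlying broad appeal; a player's personal taste
# is that appeal plus their own idiosyncratic noise.
broad_appeal = [random.random() for _ in range(N_FACES)]

def top_picks(scores):
    """Return the indices of the five highest-scoring faces."""
    return set(sorted(range(N_FACES), key=lambda i: scores[i], reverse=True)[:PICKS])

def personal_taste():
    return [appeal + random.gauss(0, 0.5) for appeal in broad_appeal]

# Half the players vote their own taste; half try to predict the consensus.
naive = [top_picks(personal_taste()) for _ in range(N_PLAYERS // 2)]
strategic = [top_picks(broad_appeal) for _ in range(N_PLAYERS // 2)]

votes = Counter(face for picks in naive + strategic for face in picks)
winners = top_picks(votes)  # the five most-picked faces

def average_matches(groups):
    return sum(len(picks & winners) for picks in groups) / len(groups)

print("naive players match", average_matches(naive), "winners on average")
print("strategic players match", average_matches(strategic), "winners on average")
```

In this setup the strategic players, who ignore their own preferences entirely, reliably match more of the winning faces than the players voting their honest taste – which is the whole point of Keynes's parable.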

This thought experiment was relevant to an economist because many assets work like this. Take gold: its value cannot be fully explained by its prettiness or industrial usefulness; some of its value comes from the belief that someone else will want it in the future and be willing to pay more for it than they would for a similarly useful or pretty metal. For whatever reason, we have a collective delusion that gold is especially valuable. Because this delusion is collective enough, it almost stops being a delusion. The delusion gives gold some of its value.

When it comes to houses, beauty contests are especially relevant in Toronto and Vancouver. Faced with many years of steadily rising house prices, people are willing to pay a lot for a house because they believe that they can unload it on someone else in a few years or decades for even more.

When talking about highly speculative assets (like Bitcoin), it’s easy to point out the limited intrinsic value they hold. Bitcoin is an almost pure Keynesian Beauty Contest asset, with most of its price coming from an expectation that someone else will want it at a comparable or better price in the future. Houses are obviously fairly intrinsically valuable, especially in very desirable cities. But the fact that they hold some intrinsic value cannot by itself prove that none of their value comes from beliefs about how much they can be unloaded for in the future – see again gold, which has value both as an article of commerce and as a beauty contest asset.

There’s obviously an element of self-fulfilling prophecy here, with steadily increasing house prices needed to sustain this myth. Unfortunately, the housing market seems especially vulnerable to this sort of collective mania, because the sunk cost fallacy makes many people unwilling to sell their houses at a price below what they paid for it. Any softening of the market removes sellers, which immediately drives up prices again. Only a massive liquidation event, like we saw in 2007-2009 can push enough supply into the market to make prices truly fall.

But this isn’t just a self-fulfilling prophecy. There’s deliberateness here as well. To some extent, public policy is used to guarantee that house prices continue to rise. NIMBY residents and their allies in city councils deliberately stall projects that might affect property values. Governments provide tax credits or access to tax-advantaged savings accounts for homes. In America, mortgage payments provide a tax credit!

All of these programs ultimately make housing more expensive wherever supply cannot expand to meet the artificially increased demand – which describes basically any dense urban centre. These home buying programs therefore fail at their stated goal of making housing more affordable, but do serve to guarantee that housing prices continue to go up. Ultimately, they represent a transfer of wealth from taxpayers in general to the specific people who own homes.
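
To see the mechanism, here's a toy supply-and-demand sketch (entirely made-up numbers, a hypothetical $20,000 buyer credit, and supply held fixed to stand in for a dense city in the short run) of how a demand-side subsidy gets absorbed into the market-clearing price:

```python
# Toy model of a demand subsidy hitting a fixed housing supply.
# The demand curve, supply, and credit amount are all invented.
def clearing_price(quantity_demanded, supply, step=1_000):
    """Walk the price up until quantity demanded no longer exceeds supply."""
    price = 0
    while quantity_demanded(price) > supply:
        price += step
    return price

# Hypothetical linear demand: every $100 of price removes one would-be buyer.
demand = lambda price: 10_000 - price // 100
SUPPLY = 6_000  # housing units; fixed in the short run in a dense city

base_price = clearing_price(demand, SUPPLY)

# A $20,000 buyer credit lets every buyer behave as if prices were $20,000 lower.
subsidized_price = clearing_price(lambda price: demand(price - 20_000), SUPPLY)

print(base_price, subsidized_price, subsidized_price - base_price)
# 400000 420000 20000: the credit is absorbed entirely by the price.
```

If supply were free to expand, some of the credit would instead show up as new construction; capitalization into prices is what a fixed supply buys you.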

Unfortunately, programs like this are very sticky. Once people buy into the collective delusion that home prices must always go up, they’re willing to heavily leverage themselves to buy a home. Any dip in the price of homes can wipe out the value of this asset, making it worth less than the money owed on it. Since this tends to make voters very angry (and also lead to many people with no money) governments of all stripes are very motivated to avoid it.

This might imply that the smart thing is to buy into the collective notion that home prices always go up. There are so many people invested in this belief at all levels of society (banks, governments, and citizens) that it can feel like home prices are too important to fall.

Which would be entirely convincing, except, I’m pretty sure people believed that in 2007 and we all know how that ended. Unfortunately, it looks like there’s no safe answer here. Maybe the collective mania will abate and home prices will stop being buoyed ever upwards. Or maybe they won’t and the prices we currently see in Toronto and Vancouver will be reckoned cheap in twenty years.

Better zoning laws can help make houses cheaper. But it really isn’t just zoning. The beauty contest is an important aspect of the current unaffordability.

Biology, Ethics, Literature, Philosophy

Book Review: The Righteous Mind

I – Summary

The Righteous Mind follows an argument structure I learned in high school debate club. It tells you what it’s going to tell you, it tells you it, then it reminds you what it told you. This made it a really easy read and a welcome break from The Origins of Totalitarianism, the other book I’ve been reading. Practically the very first part of The Righteous Mind proper (after the foreword) is an introduction to its first metaphor.

Imagine an elephant and a rider. They have travelled together since their birth and move as one. The elephant doesn’t say much (it’s an elephant), but the rider is very vocal – for example, she’s quick to apologize and explain away any damage the elephant might do. A casual observer might think the rider is in charge, because she is so much cleverer and more talkative, but that casual observer would be wrong. The rider is the press secretary for the elephant. She explains its action, but it is much bigger and stronger than her. It’s the one who is ultimately calling the shots. Sometimes she might convince it one way or the other, but in general, she’s buffeted along by it, stuck riding wherever it goes.

She wouldn’t agree with that last part though. She doesn’t want to admit that she’s not in charge, so she hides the fact that she’s mainly a press secretary even from herself. As soon as the elephant begins to move, she is already inventing a reason why it was her idea all along.

This is how Haidt views human cognition and decision making. In common terms, the elephant is our unconscious mind and the rider our consciousness. In Kahneman’s terms, the elephant is our System 1 and the rider our System 2. We may make some decisions consciously, but many of them are made below the level of our thinking.

Haidt illustrates this with an amusing anecdote. His wife asks him why he didn’t finish some dishes he’d been doing and he immediately weaves a story of their crying baby and barking incontinent dog preventing him. Only because he had his book draft open on his computer did he realize that these were lies… or rather, a creative and overly flattering version of the truth.

The baby did indeed cry and the dog did indeed bark, but neither of these prevented him from doing the dishes. The cacophony happened well before that. He'd been distracted by something else, something less sympathetic. But his rider, his "internal press secretary", immediately came up with an excuse and told it, without any conscious input or intent to deceive.

We all tell these sorts of flattering lies reflexively. They take the form of slight, harmless embellishments to make our stories more flattering or interesting, or our apologies more sympathetic.

The key insight here isn't that we're all compulsive liars. It's that the "I" we like to think is running our life isn't, really. Sometimes we make decisions consciously, especially ones the elephant doesn't think it can handle (high stakes apologies, anyone?), but normally decisions happen before we even think about them. From Haidt's perspective, "I" is really "we": the elephant and its rider. And we need to be careful to give the elephant its due, even though it's quiet.

Haidt devotes a lot of pages to an impassioned criticism of moral rationalism, the belief that morality is best understood and attained by thinking very hard about it. He explicitly mentions that to make this more engaging, he wraps it up in his own story of entering the field of moral psychology.

He starts his journey with Kohlberg, who published a famous account of the stages of moral reasoning – stages that culminate in rationally building a model of justice. This paradigm took the world of moral psychology by storm and reinforced the view (dating in Western civilization to the ancient Greeks) that right thought had to precede right action.

Haidt was initially enamoured with Kohlberg's taxonomy. But reading ethnographies and doing research in other countries began to make him suspect things weren't as simple as Kohlberg thought. Haidt and others found that moral intuitions and responses to dilemmas differed by country. In particular, WEIRD people (people from countries that were Western, Educated, Industrialized, Rich, and Democratic – and most especially the most educated people in those countries) were very much able to tamp down feelings of disgust in moral problems, in a way that seemed far from universal.

For example, if asked if it was wrong for a family to eat their dog if it was killed by a car (and the alternative was burying it), students would say something along the lines of “well, I wouldn’t, but it’s gross, not wrong”. Participants recruited at a nearby McDonalds gave a rather different answer: “of course it’s wrong, why are you even asking”. WEIRD students at prestigious universities may have been working towards a rational, justice-focused explanation for morality, but Haidt found no evidence that this process (or even a focus on “justice”) was as universal as Kohlberg claimed.

That’s not to say that WEIRD students had no disgust response. In fact, trying to activate it gave even more interesting results. When asked to justify answers where disgust overpowered students sense of “well as long as no one was hurt” (e.g. consensual adult sibling incest with no chance of children), Haidt observed that people would throw up a variety of weak excuses, often before they had a chance to think the problem through. When confronted by the weakness of their arguments, they’d go speechless.

This made Haidt suspect that two entirely separate processes were going on: a fast one for deciding and a slower one for explaining. Furthermore, the slower process was often left holding the bag for the faster one. Intuitions would provide an answer, then the subject would have to explain it, no matter how logically indefensible it was.

Haidt began to believe that Kohlberg had only keyed in on the second, slower process, “the talking of the rider” in metaphor-speak. From this point of view, Kohlberg wasn’t measuring moral sophistication. He was instead measuring how fluidly people could explain their often less than logical moral intuitions.

There were two final nails in the coffin of ethical rationalism for Haidt. First, he learned of a type of brain injury that separated people from their moral intuitions (or, as the rationalists might call them, "passions"). Contrary to the rationalist expectation, these people's lives went to hell: they alienated everyone they knew, got fired from their jobs, and in general proved the unsuitability of pure reason for making many types of decisions.

Second, he saw research that suggested that in practical measures (like missing library books), moral philosophers were no more moral than other philosophy professors.

Abandoning rationalism brought Haidt to a sentimentalist approach to ethics. In this view, ethics stemmed from feelings about how the world ought to be. These feelings are innate, but not immutable. Haidt describes people as “prewired”, not “hardwired”. You might be “prewired” to have a strong loyalty foundation, but a series of betrayals and let downs early in life might convince you that loyalty is just a lie, told to control idealists.

Haidt also believes that our elephants are uniquely susceptible to being convinced by other people in face to face discussion. He views the mechanism here as empathy at least as much as logic. People that we trust and respect can point out our weak arguments, with our respect for them and positive feelings towards them being the main motive force for us listening to these criticisms. The metaphor with elephants kind of breaks down here, but this does seem to better describe the world as it is, so I’ll allow it.

Because of this, Haidt would admit that rationalism does have some purpose in moral reasoning, but he thinks it is ancillary and mainly used to convince other people. I’m not sure how testable making evolutionary conclusions about this is, but it does seem plausible for there to be selection pressure to make us really good at explaining ourselves and convincing others of our point of view.

As Haidt took this into account and began to survey peoples’ moral instincts, he saw that the ways in which responses differed by country and class were actually highly repeatable and seemed to gesture at underlying categories of people. After analyzing many, many survey responses, he and his collaborators came up with five (later six) moral “modules” that people have. Each moral module looks for violations of a specific class of ethical rules.

Haidt likens these modules to our taste-buds. The six moral tastes are the central metaphor of the second section of the book.

Not everyone has these taste-buds/modules in equal proportion. Looking at commonalities among respondents, Haidt found that the WEIRDer someone was, the less likely they were to have certain modules. Conservatives tended to have all modules in a fairly equal proportion, liberals tended to be lacking three. Libertarians were lacking a whopping four, which might explain why everyone tends to believe they’re the worst.

The six moral foundations are:

Care/Harm

This is the moral foundation that makes us care about suffering and pain in others. Haidt speculates that it originally evolved in order to ensure that children (which are an enormous investment of resources for mammals and doubly so for us) got properly cared for. It was originally triggered only by the suffering or distress of our own children, but can now be triggered by anyone being hurt, as well as cute cat videos or baby seals.

An expanding set of triggers seems to be a common theme for these. I’ve personally speculated that this would perhaps be observed if the brain was wired for minimizing negative predictive error (i.e. not mistaking a scene in which there is a lion for a scene without a lion), rather than positive predictive error (i.e. not mistaking a scene without a lion for a scene with a lion). If you minimize positive predictive error, you’ll never be frightened by a shadow, but you might get eaten by a lion.
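
To make that speculation concrete, here's a toy expected-cost calculation (in Python; the prior and the two cost numbers are made up, and only their asymmetry matters) showing why a system that weighs a missed lion far more heavily than a false alarm ends up triggering on lots of harmless things:

```python
# Toy expected-cost comparison. The prior and the two costs are invented;
# only the asymmetry between them matters.
P_LION = 0.01                # chance an ambiguous rustle is actually a lion
COST_MISSED_LION = 1000      # false negative: ignoring a real lion
COST_FALSE_ALARM = 1         # false positive: fleeing from a shadow

def expected_cost(p_flee):
    """Expected cost of fleeing from an ambiguous rustle with probability p_flee."""
    missed = P_LION * (1 - p_flee) * COST_MISSED_LION
    false_alarm = (1 - P_LION) * p_flee * COST_FALSE_ALARM
    return missed + false_alarm

print(expected_cost(0.0))  # never flee: 10.0 (occasionally eaten)
print(expected_cost(1.0))  # always flee: 0.99 (jumpy, but alive)
```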

Fairness/Cheating

This is the moral foundation that makes us want everyone to do their fair share and makes us want to punish tax evaders or welfare cheats (depending on our political orientation). The evolutionary story given for this one is that it evolved to allow us to reap the benefits of two-way partnerships; it was an incentive against defecting.

Loyalty/Betrayal

This is the foundation that makes us rally around our politicians, community leaders, and sports teams, as well as the foundation that makes some people care more about people from their country than people in general. Haidt’s evolutionary explanation for this one is that it was supposed to ensure coherent groups.

Authority/Subversion

This is the moral foundation that makes people obey their boss without talking back, or avoid calling their parents by their first names. It supposedly evolved to allow us to forge beneficial relationships within hierarchies. Basically, it may once have been very useful for people to believe and obey their elders without question (e.g. when the elders say "don't drink that water, it's poisoned", no one does, and this warning can be passed down and keep people safe without someone having to die every few years to prove that the water is indeed poisoned).

Sanctity/Degradation

This is the moral foundation that makes people on the right leery of pre-marital sex and people on the left leery of “chemicals”. It shows up whenever we view our bodies as more than just our bodies and the world as more than just a collection of things, as well as whenever we feel that something makes us “spiritually” dirty.

The very plausible explanation for this one is that it evolved in response to the omnivore’s dilemma: how do we balance the desire for novel food sources with the risk they might poison us? We do it by avoiding anything that looks diseased or rotted. This became a moral foundation as we slowly began applying it to stuff beyond food – like other people. Historically, the sanctity moral framework was probably responsible for the despised status of lepers.

Liberty/Oppression

This moral foundation is always in tension with Authority/Subversion. It’s the foundation that makes us want to band together against and cast down anyone who is aggrandizing themselves or using their power to mistreat another.

Haidt suggests that this evolved to allow us to band together against “alpha males” and check their power. In his original surveys, it was part of Fairness/Cheating, but he found that separating it gave him much more resolving power between liberals and conservatives.

Of these six foundations, Haidt found that libertarians only had an appreciable amount of Liberty/Oppression and Fairness/Cheating and of these two, Liberty/Oppression was by far the stronger. While the other foundations did exist, they were mostly inactive and only showed up under extreme duress. For liberals, he found that they had Care/Harm, Liberty/Oppression, and Fairness/Cheating (in that order).

Conservatives in Haidt’s survey had all six moral foundations, like I said above. Care/Harm was their strongest foundation, but by having appreciable amounts of Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation, they would occasionally overrule Care/Harm in favour of one or another of these foundations.

Haidt uses these moral foundations to give an account of the “improbable” coalition between libertarians and social conservatives that closely matches the best ones to come out of political science. Basically, liberals and libertarians are descended (ideologically, if not filially) from those who embraced the enlightenment and the liberty it brought. About a hundred years ago (depending on the chronology and the country), the descendants of the enlightenment had a great schism, with some continuing to view the government as the most important threat to liberty (libertarians) and others viewing corporations as the more pressing threat (liberals). Liberals took over many auspices of the government and have been trying to use it to guarantee their version of liberty (with mixed results and many reversals) ever since.

Conservatives do not support this project of remaking society from the top down via the government. They believe that liberals want to change too many things, too quickly. Conservatives aren’t opposed to the government qua government. In fact, they’d be very congenial to a government that shared their values. But they are very hostile to a liberal, activist government (which is rightly or wrongly how conservatives view the governments of most western nations) and so team up with libertarians in the hopes of dismantling it.

This section, which characterized certain political views as stemming from "deficiencies" in certain "moral modules" – in a way that is probably hereditary – made me pause and wonder if this is a dangerous book. I'm reminded of Hannah Arendt talking about "tolerance" for Jews committing treason in The Origins of Totalitarianism.

It is an attraction to murder and treason which hides behind such perverted tolerance, for in a moment it can switch to a decision to liquidate not only all actual criminals but all who are “racially” predestined to commit certain crimes. Such changes take place whenever the legal and political machine is not separated from society so that social standards can penetrate into it and become political and legal rules. The seeming broad-mindedness that equates crime and vice, if allowed to establish its own code of law, will invariably prove more cruel and inhuman than laws, no matter how severe, which respect and recognize man’s independent responsibility for his behavior.

That said, it is possible for inconvenient or dangerous things to be true and their inconvenience or danger has no bearing on their truth. If Haidt saw his writings being used to justify or promote violence, he’d have a moral responsibility to decry the perpetrators. Accepting that sort of moral responsibility is, I believe, part of the responsibility that scientists who deal with sensitive topics must accept. I do not believe that this responsibility precludes publishing. I firmly believe that only right information can lead to right action, so I am on the whole grateful for Haidt’s taxonomy.

The similarities between liberals and libertarians extend beyond ethics. Both have more openness to experience and less of a threat response than conservatives. This explains why socially, liberals and libertarians have much more in common than liberals and conservatives.

Moral foundation theory gave me a vocabulary for some of the political writing I was doing last year. After the Conservative (Party of Canada) Leadership Convention, I talked about social conservative legislation as a way to help bind people to collective morality. I also talked about how holding other values very strongly and your values not at all can make people look diametrically opposed to you.

The third and final section of The Righteous Mind further focuses on political tribes. Its central metaphor is that humans are "90% chimp, 10% bee". Its central purpose is an attempt to show how humans might have been subject to group selection and how our groupishness is important to our morality.

Haidt claims that group selection is heresy in evolutionary biology (beyond hive insects). I don’t have the evolutionary biology background to say if this is true or not, although this does match how I’ve seen it talked about online among scientifically literate authors, so I’m inclined to believe him.

Haidt walks through the arguments against group selection and shows that they are largely sensible. It is indeed ridiculous to believe that genes for altruism could be preserved in most cases. Imagine a gene that would make a deer more likely to sacrifice itself for the good of the herd if that seemed to be the only way to protect the herd's young. This gene might help more deer in the herd reach adulthood, but it would also lead to any deer carrying it having fewer children. There's certainly an advantage to the herd if some members have this gene, but there's no advantage to the carriers and a lot of advantage to every deer in the herd who doesn't carry it. Free-riders will outcompete sacrificers and the selfless gene will get culled from the herd.

But humans aren’t deer. We can be selfish, yes, but we often aren’t and the ways we aren’t can’t be simply explained by greedy reciprocal altruism. If you’ve ever taken some time out of your day to help a lost tourist, congratulations, you’ve been altruistic without expecting anything in return. That people regularly do take time out of their days to help lost tourists suggests there might be something going on beyond reciprocal altruism.

Humans, unlike deer, have the resources and ability to punish free riders. We expect everyone to pitch in and might exile anyone who doesn't. When humans began to form larger and larger societies, it makes sense that the societies that could better coordinate selfless behaviour would do better than those that couldn't. And this isn't just in terms of military cohesion (as the evolutionary biologist Lesley Newson had to point out to Haidt). A whole bunch of little selfless acts – sharing food, babysitting, teaching – can make a society more efficient than its neighbours at "turning resources into offspring".

A human within the framework of society is much more capable than a human outside of it. I am only able to write this and share it widely because a whole bunch of people did the grunt work of making the laptop I’m typing it on, growing the food I eat, maintaining our communication lines, etc. If I was stuck with only my own resources, I’d be carving this into the sand (or more likely, already eaten by wolves).

Therefore, it isn’t unreasonable to expect that the more successful and interdependent a society could become, the more it would be able to outcompete, whether directly or indirectly its nearby rivals and so increase the proportion of its conditionally selfless genes in the human gene pool.

Conditional selflessness is a better description of the sorts of altruism we see in humans. It’s not purely reciprocal as Dawkins might claim, but it isn’t boundless either. It’s mostly reserved for people we view as similar to us. This doesn’t need to mean racially or religiously. In my experience, a bond as simple as doing the same sport is enough to get people to readily volunteer their time for projects like digging out and repairing a cracked foundation.

The switch from selfishness to selflessly helping out our teams is called “the hive switch” by Haidt. He devotes a lot of time to exploring how we can flip it and the benefits of flipping it. I agree with him that many of the happiest and most profound moments of anyone’s life come when the switch has been activated and they’re working as part of a team.

The last few chapters are an exploration of how individualism can undermine the hive switch, along with several mistakes liberals make in their zeal to overturn all hierarchies. Haidt believes that societies have both social capital (the bonds of trust between people) and moral capital (the society's ability to bind people to collective values) and worries that liberal individualism can undermine these to the point where people will be overall worse off. I'll talk more about moral capital later in the review.

II – On Shaky Foundations

Anyone who reads The Righteous Mind might quickly realize that I left a lot of the book out of my review. There was a whole bunch of supporting evidence about how liberals and conservatives “really are” or how they differ that I have deliberately omitted.

You may have heard that psychology is currently in the midst of a "replication crisis". Much (I'd crudely estimate somewhere between 25% and 50%) of the supporting evidence in this book has been a victim of this crisis.

Here’s what the summary of Chapter 3 looks like with the offending evidence removed:

Pictured: Page 82 of my edition of The Righteous Mind, after some "minor" corrections. Text is © 2012 Jonathan Haidt. Used here for purposes of commentary and criticism.


Here’s an incomplete list of claims that didn’t replicate:

  • IAT tests show that we can have unconscious prejudices that affect how we make social and political judgements (1, 2, 3 critiques/failed replications). Used to buttress the elephant/rider theory of moral decisions.
  • Disgusting smells can make us more judgemental (failed replication source). Used as evidence that moral reasoning can sometimes be explained by external factors and is much less rational than we'd like to believe.
  • Babies prefer a nice puppet over a mean one, even when pre-verbal and probably lacking the context to understand what is going on (failed replication source). Used as further proof for how we are “prewired” for certain moral instincts.
  • People from Asian societies are better able to do relative geometry and less able to do absolute geometry than westerners (failed replication source). This was used to make the individualistic morality of westerners seem inherent.
  • The “Lady Macbeth Effect” showed a strong relationship between physical and moral feelings of “cleanliness” (failed replication source). Used to further strengthen the elephant/rider analogy.

The proper attitude with which to view psychology studies these days is extreme scepticism. There are a series of bad incentives (it's harder and less prestigious to publish negative findings; publishing is necessary to advance in your career) that have led scientists in psychology (and other fields) to inadvertently and advertently publish false results. In any field in which you expect true discoveries to be rare (and I think "interesting and counter-intuitive things about the human brain" fits that bill), you shouldn't allow any individual study to influence you very much. For a full breakdown of how this can happen even when scientists check for statistical significance, I recommend reading "Why Most Published Research Findings Are False" (Ioannidis 2005).
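
The core arithmetic behind that argument is worth seeing once. Here's a minimal sketch (standard true-positive/false-positive bookkeeping; the priors below are numbers I've chosen for illustration, not figures from Ioannidis) of how rare true effects plus a 5% false positive rate produce a surprisingly large share of false "findings":

```python
# Back-of-the-envelope positive predictive value of a "significant" result.
# The priors below are illustrative choices, not figures from Ioannidis.
def positive_predictive_value(prior, alpha=0.05, power=0.8):
    """Share of p < alpha findings that reflect a real effect."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# If only 1 in 10 tested hypotheses is actually true, roughly a third of
# significant findings are false, before any p-hacking or publication bias.
print(round(positive_predictive_value(prior=0.10), 2))  # ~0.64
print(round(positive_predictive_value(prior=0.50), 2))  # ~0.94
```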

Moral foundations theory appears to have escaped the replication crisis mostly unscathed (as has Tversky and Kahneman's work on heuristics, something that made me more comfortable including the elephant/rider analogy). I think this is because moral foundations theory is primarily a descriptive theory. It grew out of a large volume of survey responses and represents clusters in those responses. It makes little in the way of concrete predictions about the world. It's possible to quibble with the way Haidt and his collaborators drew the category boundaries. But given the sheer volume of responses they received – and the fact that they based their results not just on WEIRD individuals – it's hard to doubt that they've come up with a reasonable clustering of the possibility space of human values.

I will say that stripped of much of its ancillary evidence, Haidt’s attack on rationalism lost a lot of its lustre. It’s one thing to believe morality is mostly unconscious when you think that washing your hands or smelling trash can change how moral you act. It’s quite another when you know those studies were fatally flawed. The replication crisis fueled my inability to truly believe Haidt’s critique of rationality. This disbelief in turn became one of the two driving forces in my reaction to this book.

Haidt’s moral relativism around patriarchal cultures was the other.

III – Less and Less WEIRD

It’s good that Haidt looked at a variety of cultures. This is a thing few psychologists do. There’s historically been an alarming tendency to run studies on western undergraduate students, then declare “this is how people are”. This would be fine if western undergraduates were representative of people more generally, but I think that assumption was on shaky foundations even before moral foundation theory showed that morally, at least, it was entirely false.

Haidt even did some of this field work himself. He visited South America and India to run studies. In fact, he mentioned that this field work was one of the key things that made him question the validity of western individualistic morality and wary of morality that didn’t include the sanctity, loyalty, and authority foundations.

His willingness to get outside of his bubble and to learn from others is laudable.

But.

There is one key way in which Haidt never left his bubble, a way which makes me inherently suspicious of all of his defences of the sanctity, authority, and loyalty moral foundations. Here’s him recounting his trip to India. Can you spot the fatal omission?

I was told to be stricter with my servants, and to stop thanking them for serving me. I watched people bathe in and cook with visibly polluted water that was held to be sacred. In short, I was immersed in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine.

It only took a few weeks for my dissonance to disappear, not because I was a natural anthropologist but because the normal human capacity for empathy kicked in. I liked these people who were hosting me, helping me, and teaching me. Wherever I went, people were kind to me. And when you’re grateful to people, it’s easier to adopt their perspective. My elephant leaned toward them, which made my rider search for moral arguments in their defense. Rather than automatically rejecting the men as sexist oppressors and pitying the women, children, and servants as helpless victims, I began to see a moral world in which families, not individuals, are the basic unit of society, and the members of each extended family (including its servants) are intensely interdependent. In this world, equality and personal autonomy were not sacred values. Honoring elders, gods, and guests, protecting subordinates, and fulfilling one’s role-based duties were more important.

Haidt tried out other moral systems, sure, but he tried them out from the top. Lois McMaster Bujold once had a character quip: “egalitarians adjust to aristocracies just fine, as long as they get to be the aristocrats”. I would suggest that liberals likewise find the authority framework all fine and dandy, as long as they have the authority.

Would Haidt have been able to find anything worth salvaging in the authority framework if he’d instead been a female researcher, who found herself ignored, denigrated, and sexually harassed on her research trip abroad?

It’s frustrating when Haidt is lecturing liberals on their “deficient” moral framework while simultaneously failing to grapple with the fact that he is remarkably privileged. “Can’t you see how this other society knows some moral truths [like men holding authority over woman] that we’ve lost” is much less convincing when the author of the sentence stands to lose absolutely nothing in the bargain. It’s easy to lecture others on the hard sacrifices society “must” make – and far harder to look for sacrifices that will mainly affect you personally.

It is in this regard that I found myself wondering if this might have been a more interesting book if it had been written by a woman. If the hypothetical female author were to defend the authority framework, she’d actually have to defend it, instead of hand-waving the defence with a request that we respect and understand all ethical frameworks. And if this hypothetical author found it indefensible, we would have been treated to an exploration of what to do if one of our fundamental ethical frameworks was flawed and had to be discarded. That would be an interesting conversation!

Not only that, but perhaps a female author would have fully explored the observation that women and children's role in societal altruism was just as important as that of men (as child-rearing is a more reliable way to demonstrate and cash in on groupishness than battle), instead of relegating it to a brief note at the end of the chapter on group selection. This perspective is genuinely new to me and I wanted to see it developed further.

Ultimately, Haidt’s defences of Authority/Subversion, Loyalty/Betrayal, and Sanctity/Degradation fell flat in the face of my Care/Harm and Liberty/Oppression focused moral compass. Scott Alexander once wrote about the need for “a solution to the time-limitedness of enlightenment that works from within the temporal perspective”. By the same token, I think Haidt fails to deliver a defence of conservatism or anything it stands for that works from within the liberal Care/Harm perspective. Insofar as his book was meant to bridge inferential gaps and political divides, this makes it a failure.

That’s a shame, because arguments that bridge this divide do exist. I’ve read some of them.

IV – What if Liberals are Wrong?

There is a principle called "Chesterton's Fence", which comes from the famed Catholic conservative and author G.K. Chesterton. It goes like this: if you see a fence blocking the road and cannot see the reason for it to be there, should you remove it? Chesterton said "no!", resoundingly. He suggested you should first understand the purpose of the fence. Only then may you safely remove it.

There is a strain of careful conservatism that holds Chesterton’s fence as its dearest parable. Haidt makes brief mention of this strain of thought, but doesn’t expound on it successfully. I think it is this thought and this thought only that can offer Care/Harm focused liberals like myself a window into the redeeming features of the conservative moral frameworks.

Here’s what the argument looks like:

Many years ago, western nations had a unified moral framework. This framework supported people in making long-term decisions and acting in a pro-social manner. There are many people who want to act differently than they would if left to their own devices, and this framework helped them to do that.

Liberals began to dismantle this system in the sixties. They saw hierarchies and people being unable to do the things they wanted to do, so tried to take down the whole edifice without first checking if any of it was doing anything important.

This strand of conservatism would argue that it was. They point to the increasing number of children born to parents who aren’t married (although increasingly these parents aren’t teens, which is pretty great), increasing crime (although this has started to fall after we took lead out of gasoline), increasing atomisation, decreasing church attendance, and increasing rates of anxiety and depression (although it is unclear how much of this is just people feeling more comfortable getting treatment).

Here’s the thing. All of these trends affect well educated and well-off liberals the least. We’re safe from crime in good neighbourhoods. We overwhelming wait until stable partnerships to have children. We can afford therapists and pills to help us with any mental health issues we might have; rehab to help us kick any drug habits we pick up.

Throwing off the old moral matrix has been an unalloyed good for privileged white liberals. We get to have our cake and eat it too – we have fun and take risks, but know that we have a safety net waiting to catch us should we fall.

The conservative appeal to tradition points out that our good time might be coming at the expense of the poor. It asks us whether our hedonistic pleasures are worth a complete breakdown in stability for people with fewer advantages than us. It asks us to consider sacrificing some of these pleasures so that they might be better off. I know many liberals who might find the sacrifice of some of their freedom to be a moral necessity, if framed this way.

But even here, social conservatism has the seeds of its own undoing. I can agree that children do best when brought up by loving and committed parents who give them a lot of stability (moving around in childhood is inarguably bad for many kids). Given this, the social conservative opposition to gay marriage (despite all evidence that it doesn't mess kids up) is baffling. The sensible position would have been "how can we use this to make marriage cool again", not "how long can we delay this".

This is a running pattern with social conservatism. It conserves blindly, without giving thought to what is even worth preserving. If liberals have some things wrong, that doesn’t automatically mean that the opposite is correct. It’s disturbingly easy for people on both sides of an issue to be wrong.

I’m sure Haidt would point out that this is why we have the other frameworks. But because of who I am, I’m personally much more inclined to do things in the other direction – throw out most of the past, then re-implement whatever we find to be useful but now lacking.

V – What if Liberals Listened?

In Berkeley, California, its environs, and assorted corners of the Internet, there exists a community that calls themselves “Rationalists”. This moniker is despite the fact that they agree with Haidt as to the futility of rationalism. Epistemically, they tend to be empiricists. Ethically, non-cognitivist utilitarians. Because they are largely Americans, they tend to be politically disengaged, but if you held them at gunpoint and demanded they give you a political affiliation, they would probably either say “liberal” or “libertarian”.

The rationalist community has semi-public events that mimic many of the best parts of religious events, normally based around the solstices (although I also attended a secular Seder when I visited last year).

This secular simulacrum of a religion has been enough to fascinate at least one Catholic.

The rationalist community has managed to do the sort of thing Haidt despaired of: create a strong community with communal morality in a secular, non-authoritarian framework. There are communal norms (although they aren’t very normal; polyamory and vegetarianism or veganism are very common). People tend to think very hard before having children and take care ensuring that any children they have will have a good extended support structure. People live in group houses, which combats atomisation.

This is also a community that is very generous. Many of the early adherents of Effective Altruism were drawn from the rationalist community. It’s likely that rationalists donate to charity in amounts more similar to Mormons than atheists (with the added benefit of almost all of this money going to saving lives, rather than proselytizing).

No community is perfect. This is a community made up of people. It has its fair share of foibles and megalomanias, bad actors and jerks. But it represents something of a counterpoint to Haidt’s arguments about the “deficiency” of a limited framework morality.

Furthermore, its altruism isn’t limited in scope, the way Haidt believes all communal altruism must necessarily be. Rationalists encourage each other to give to causes like malaria eradication (which mainly helps people in Africa), or AI risk (which mainly helps future people). Because there are few cost effective local opportunities to do good (for North Americans), this global focus allows for more lives to be saved or improved per dollar spent.

All of this is, I think, the natural result of thoughtful people throwing away most cultural traditions and vestiges of traditionalist morality, then seeing what breaks and fixing those things in particular. It's an example of what I wished for at the end of the last section, applied to the real world.

VI – Is or Ought?

I hate to bring up the Hegelian dialectic, but I feel like this book fits neatly into it. We had the thesis: “morality stems from rationality” that was so popular in western political thought. Now we have the antithesis: “morality and rationality are separate horses, with rationality subordinate – and this is right and proper”.

I can’t wait for someone other than Haidt to a write a synthesis; a view that rejects rationalism as the basis of human morality but grapples with the fact that we yearn for perfection.

Haidt, in the words of Joseph Heath, thinks that moral discourse is “essentially confabulatory”, consisting only of made up stories that justify our moral impulses. There may be many ways in which this is true, but it doesn’t account for the fact that some people read Peter Singer’s essay “Famine, Affluence, and Morality” and go donate much of their money to the global poor. It doesn’t account for all those who have listened to the Sermon on the Mount and then abandoned their possessions to live a monastic life.

I don’t care whether you believe in The Absolute, or God, or Allah, or The Cycle of Rebirth, or the World Soul, or The Truth, or nothing at all. You probably have felt that very human yearning to be better. To do better. You’ve probably believed that there is a Good and it can perhaps be comprehended and reached. Maybe this is the last vestiges of my atrophied sanctity foundation talking, but there’s something base about believing that morality is solely a happy accident of how we evolved.

The is/ought fallacy occurs when we take what “is” and decide it is what “ought” to be. If you observe that murder is part of the natural order and conclude that it is therefore moral, you have committed this fallacy.

Haidt has observed the instincts that build towards human morality. His contributions to this field have helped make many things clear and make many conflicts more understandable. But in deciding that these natural tastes are the be-all and end-all of human morality, by putting them ahead of reason, religion, and every philosophical tradition, he has committed this fundamental error.

At the start of The Righteous Mind, Haidt approvingly mentions those scientists who once thought that ethics could be taken away from philosophers and studied instead by scientists alone.

But science can only ever tell us what is, never what ought to be. As a book about science, The Righteous Mind is a success. But as a work on ethics, as an expression of how we ought to behave, it is an abysmal failure.

In this area, the philosophers deserve to keep their monopoly a little longer.

Economics, Politics, Quick Fix

Cities Are Weird And Minimum Wages Can Help

[6-minute read]

I don’t understand why people choose to go bankrupt living in the most expensive cities, but I’m increasingly viewing this as a market failure and collective action problem to be fixed with intervention, not a failure of individual judgement.

There are many cities, like Brantford, Waterloo, or even Ottawa, where everything works properly. Rent isn’t really more expensive than in suburban or rural areas. There’s public transit, which means you don’t necessarily need a car, if you choose where you live with enough care. There are plenty of jobs. Stuff happens.

But cities like Toronto, Vancouver, and San Francisco confuse the hell out of me. The cost of living is through the roof, but wages don’t even come close to following (the difference in salary between Toronto and Waterloo for someone with my qualifications is $5,000, which in no way would cover the yearly difference in living expenses). This is odd when talking about well-off tech workers, but becomes heartbreaking when talking about low-wage workers.

Toronto Skyline
Not pictured: Selling your organs to afford a one-bedroom condo. Image Credit: Abi K on Flickr

If people were perfectly rational and only cared about money (the mythical homo economicus), fewer people would move to cities, which would bid up wages (to increase the supply of workers) or drive down prices (because fewer people would be competing for the same apartments), which would make cities more affordable. But people do care about things other than money and the network effects of cities are hard to beat (put simply: the bigger the city, the more options for a not-boring life you have). So, people move – in droves – to the most expensive and dynamic cities, and wages don’t go up (because the supply of workers never falls), and the cost of living does (because the number of people competing for housing does), and low-wage workers get ground up.

It’s not that I don’t understand the network effects. It’s that I don’t understand why people get ground up instead of moving.

But the purpose of good economics is to deal with people as they are, not as they can be most conveniently modeled. And given this, I’ve begun to think about high minimum wages in cities as an intervention that fixes a market failure and collective action problem.

That is to say: people are bad at reading the market signal that they shouldn’t move to cities they can’t afford. It’s the signal that’s supposed to say “here be scarce goods; you might get screwed”, but the siren song of cities seems to overpower it. This is a market failure in the technical sense: there exists a distribution of goods that would make people (economically) better off without making anyone worse off – fewer people living in big cities, with the rest moving to communities experiencing chronic labour shortages, where they’d be basically guaranteed jobs that pay the bills – and yet the market cannot seem to reach it.

(That’s not to say that this is all the fault of the market. Restrictive zoning makes housing expensive and rent control makes it scarce.)

It’s a collective action problem because if everyone could credibly threaten to move, then they wouldn’t have to; the threat would be enough to increase wages. Unfortunately, everyone knows that anyone who leaves the city will be quickly replaced. Everyone would be better off if they could coordinate and make all potential movers promise not to move in until wages increase, but there’s no benefit to being the first person to leave or the first person to avoid moving [1] and there currently seems to be no good way for everyone to coordinate in making a threat.

When faced with the steady grinding down of young people, low-wage workers, and everyone “just waiting for their big break”, we have two choices. We can tut-tut at their inability to be “rational” (aka leave their friends, family, jobs, and aspirations to move somewhere else [2]), or we can try to better their situation.

If everyone was acting “rationally”, wages would be bid up. But we can accomplish the same thing by simple fiat. Governments can set a minimum wage or offer wage subsidies, after all.

I do genuinely worry that in some places, large increases in the minimum wage will lead to unemployment (we’ll figure out whether this is true over the next decade or so). I’m certainly worried that a minimum wage pegged to inflation will lead to massive problems the next time we have a recession [3].

So, I think we should fix zoning, certainly. And I think we need to fix how Ontario’s minimum wage functions in a recession so that it doesn’t destroy our whole economy during the next one. But at the same time, I think we need to explore differential minimum wages for our largest cities and the rest of the province/country. I mean this even in a world where the current $14/hour minimum wage isn’t rolled back. Would even $15/hour cut it in Toronto and Vancouver [4]?

If we can’t make a minimum wage work without increased unemployment, then maybe we’ll have to turn to wage subsidies. This is actually the method that “conservative” economist Scott Sumner favours [5].

What’s clear to me is that what we’re currently doing isn’t working.

I do believe in a right to shelter. Like anyone who shares this belief, I understand that “shelter” is a broad word, encompassing everything from a tarp to a mansion. Where a certain housing situation falls on this spectrum is the source of many a debate. Writing this is a repudiation of my earlier view, that living in an especially desirable city was a luxury not dissimilar from a mansion.

A couple of things changed my mind. First, I paid more attention to the experiences of my friends who might be priced out of the cities they grew up in and have grown to love. Second, I read the Ecomodernist Manifesto, with its calls for densification as the solution to environmental degradation and climate change. Densification cannot happen if many people are priced out of cities, which means figuring this out is actually existentially important.

The final piece of the puzzle was the mental shift whereby I started to view wages in cities – especially for low-wage earners – as a collective action problem and a market failure. As anyone on the centre-left can tell you, it’s the government’s job to fix those – ideally in a redistributive way.

Footnotes

[1] This is inductive up to the point where you have a critical mass; there’s no benefit until you’re the (n+1)th person, where n is the number of people necessary to create a scarcity of workers sufficient to begin bidding up wages. And all of the people who moved will see little benefit for their hassle, unless they’re willing to move back. ^

[2] For us nomadic North Americans, this can be confusing: “The gospel of ‘just pick up and leave’ is extremely foreign to your typical European — be they Serbian, French or Irish. Ditto with a Sudanese, Afghan or Japanese national. In Israel, it’s the kind of suggestion that ruins dinner parties… We non-indigenous love to move. We don’t just see it as just good economic policy, but as a virtue. We glorify the immigrant, we hug them at the airport when they arrive and we inherently mistrust anyone who dares to pine for what they left behind”. ^

[3] Basically, wages should fall in a recession, but they largely don’t, which means inflation is necessary to get wages back to a level where employment can recover; pegging the minimum wage to inflation means this can’t happen. Worse, if the rest of the country were to adopt sane monetary policy during the next bad recession, Ontario’s minimum wage could rise to the point where it would swallow large swathes of the economy. This would really confuse price signals and make some work economically unviable (to do in Ontario; it would surely still be done elsewhere). ^

[4] I think we may have to subsidize some new construction or a portion of monthly rent so that all of the increased wages don’t get ploughed into increased rents. If you have more money chasing the same number of rental units and everything else remains constant, you’ll see all gains in wages erased by increases in rents. Rent control is a very imperfect solution, because it changes new construction into units that can be bought outright, at market rates. This helps people who have saved up a lot of money outside of the city and want to move there, but is very bad for the people already living there, grappling with rent so high that they can’t afford to save up a down payment. ^

[5] No seriously, this is what passes for conservative among economists these days; while we all stopped looking, they all became utilitarians who want to help impoverished people as much as possible. ^

Economics, Model

Against Job Lotteries

In simple economic theory, wages are supposed to act as signals. When wages increase in a sector, it should signal people that there’s lots of work to do there, incentivizing training that will be useful for that field, or causing people to change careers. On the flip side, when wages decrease, we should see a movement out of that sector.

This is all well and good. It explains why the United States has seen (over the past 45 years) little movement in the number of linguistics degrees, a precipitous falloff in library sciences degrees, some decrease in English degrees, and a large increase in engineering and business degrees [1].

This might be the engineer in me, but I find things that are working properly boring. What I’m really interested in is when wage signals break down and are replaced by a job lottery.

Job lotteries exist whenever there are two tiers to a career. On one hand, you’ll have people making poverty wages and enduring horrendous conditions. On the other, you’ll see people with cushy wages, good job security, and (comparatively) reasonable hours. Job lotteries exist in the “junior doctor” system of the United Kingdom, in the academic system of most western countries, and in teaching in Ontario (up until very recently). There’s probably a much less extreme version of this going on even in STEM jobs (in that many people go in thinking they’ll work for Google or the next big unicorn and end up building websites for the local chamber of commerce or writing internal tools for the company billing department [2]). A slightly different type of job lottery exists in industries where fame plays a big role: writing, acting, music, video games, and other creative endeavours.

Job lotteries are bad for two reasons. Compassionately, it’s really hard to see idealistic, bright, talented people endure terrible conditions, all in the hope of something better, something that might never materialize. Economically, it’s bad when people spend a lot of time unemployed or underemployed because they’re hopeful they might someday get their dream job. Both of these reasons argue for us to do everything we can to dismantle job lotteries.

I do want to make a distinction between the first type of job lottery (doctors in the UK, professors, teachers), which is a property of how institutions have happened to evolve, and the second, which seems much more inherent to human nature. “I’ll just go with what I enjoy” is a very common career strategy in creative fields, one that will tend to split artists (of all sorts) into a handful of mega-stars, a small group of people making a modest living, and a vast mass of hopefuls searching for their break. To fix this would require careful consideration and the building of many new institutions – projects I think we lack the political will and the know-how for.

The problems in the job market for professors, doctors, or teachers feel different. These professions don’t rely on tastemakers and network effects. There’s also no stark difference in skills that would imply discontinuous compensation. This doesn’t imply that skills are flat – just that they exist on a steady spectrum, which should imply that pay could reasonably follow a similar smooth distribution. In short, in all of these fields, we see problems that could be solved by tweaks to existing institutions.

I think institutional change is probably necessary because these job lotteries present a perfect storm of misdirection to our primate brains. That is to say (1) People are really bad at probability and (2) the price level for the highest earners suggests that lots of people should be entering the industry. Combined, this means that people will be fixated on the highest earners, without really understanding how unlikely that is to be them.

Two heuristics drive our inability to reason about probabilities: the representativeness heuristic (ignoring base rates and information about reliability in favour of what feels “representative”) and the availability heuristic (events that are easier to imagine or recall feel more likely). The combination of these heuristics means that people are uniquely sensitive to accounts of the luckiest members of a profession (especially if this is the social image the profession projects) and unable to correctly predict their own chances of reaching that desired outcome (because they can imagine how they will successfully persevere and make everything come out well).

Right now, you’re probably laughing to yourself, convinced that you would never make a mistake like this. Well, let’s try an example.

Imagine a scenario in which only ten percent of current Ph.D students will get tenure (basically true). Now, Ph.D students are quite bright and are incredibly aware of their long odds. Let’s say that if a student three years into a program makes a guess as to whether or not they’ll get a tenure track job offer, they’re correct 80% of the time. If a student tells you they think they’ll get a tenure track job offer, how likely do you think it is that they will? Stop reading right now and make a guess.

Seriously, make a guess.

This won’t work if you don’t try.

Okay, you can keep reading.

It is not 80%. It’s not even 50%. It’s 31%. This is probably best illustrated visually.

Craft Design Online has inadvertently created a great probability visualization tool.

There are four things that can happen here (I’m going to conflate tenure track job offers with tenure out of a desire to stop typing “tenure track job offers”).

Ten students will get tenure. Of these ten, eight (0.8 x 10) will correctly believe they will get it (1/green) and two (10 – 0.8 x 10) will incorrectly believe they won’t (2/yellow). Ninety students won’t get tenure. Of these 90, 18 (90 – 0.8 x 90) will incorrectly believe they will get tenure (3/orange) and 72 (0.8 x 90) will correctly believe they won’t (4/red). Twenty-six students – those coloured green (1) and orange (3) – believe they’ll get tenure. But we know that only eight of them really will, which works out to just below the 31% I gave above.
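
For anyone who would rather check the arithmetic than work through the colour-coding, here’s a minimal sketch of the same calculation in Python (the variable names are mine; the numbers are just the 10% base rate and 80% prediction accuracy from the example above):

    # Tenure example: 10% of students get tenure, and students' self-predictions
    # about their own chances are correct 80% of the time.
    base_rate = 0.10   # P(gets tenure)
    accuracy = 0.80    # P(a student's own prediction is correct)

    # Students who predict they'll get tenure come from two groups:
    will_and_says_so = base_rate * accuracy              # 0.08 (the green group)
    wont_but_says_so = (1 - base_rate) * (1 - accuracy)  # 0.18 (the orange group)

    # P(gets tenure | predicts tenure), by Bayes' rule:
    p = will_and_says_so / (will_and_says_so + wont_but_says_so)
    print(round(p, 3))  # 0.308, i.e. roughly the 31% quoted above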

Almost no one can do this kind of reasoning, especially if they aren’t primed for a trick. The stories we build in our heads about the future feel so solid that we ignore the base rate. We think that we’ll know if we’re going to make it. And even worse, we think that a feeling of “knowing” whether we’ll make it provides good information. We think that a relatively accurate predictor can overcome a small base rate. It clearly can’t. When the base rate is small (here 10%), the base rate is the single greatest predictor of your chances.

But this situation doesn’t even require small chances for us to make mistakes. Imagine you had two choices: a career that leaves you feeling fulfilled 100% of the time, but is so competitive that you only have an 80% chance of getting into it (assume that in the other 20%, you either starve or work a soul-crushing fast food job with negative fulfillment); or a career where you are 100% likely to get a job, but will only find it fulfilling 80% of the time.

Unless that last 20% of fulfillment is strongly super-linear [3][4], or you place no value at all on eating/avoiding McDrugery, it is better to take the guaranteed career. But many people looking at this probably rounded 80% up to 100% – another known flaw in human reasoning. You can very easily have a job lottery even when the majority of people in a career are in the “better” tier of the job, because many entrants to the field will treat “majority” as “all” and stick with it when they end up shafted.
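
To make the arithmetic behind that comparison concrete, here’s a quick sketch, assuming fulfillment simply adds up linearly and picking an arbitrary negative value for the fallback outcome (the -0.5 below is my choice, purely for illustration):

    # Risky career: 80% chance of a 100%-fulfilling job, 20% chance of the
    # soul-crushing fallback (given an arbitrary negative value here).
    fallback_value = -0.5

    risky_career = 0.80 * 1.00 + 0.20 * fallback_value  # = 0.70
    safe_career = 1.00 * 0.80                            # = 0.80

    print(risky_career, safe_career)
    # The guaranteed career wins for any fallback_value below zero; at exactly
    # zero the two tie, which is why the comparison only flips if that last 20%
    # of fulfillment is worth disproportionately more.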

Now, you might believe that these problems aren’t very serious, or that surely people making a decision as big as a college major or career would correct for them. But these fallacies date to the 70s! Many people still haven’t heard of them. And the studies that first identified them found them to be pretty much universal. Look, the CIA couldn’t even get people to do probability right. You think the average job seeker can? You think you can? Make a bunch of predictions for the next year and then talk with me when you know how calibrated (or uncalibrated) you are.

If we could believe that people would become better at probabilities, we could assume that job lotteries would take care of themselves automatically. But I think it is clear that we cannot rely on that, so we must try to dismantle them directly. Unfortunately, there’s a reason many are this way; many of them have come about because current workers have stacked the deck in their own favour. This is really great for them, but really bad for the next group of people entering the workforce. I can’t help but believe that some of the instability faced by millennials is a consequence of past generations entrenching their benefits at our expense [5]. Others have come about because of poorly planned policies, bad enrolment caps, etc.

There are two ways we can deal with a job lottery: we can limit supply indirectly (by making the job, or the perception of the job once you’ve “made it”, worse), or limit supply directly (by changing the credentials necessary for the job, or implementing other training caps). In many of the examples of job lotteries I’ve found, limiting supply directly might be a very effective way to deal with the problem.

I can make this claim because limiting supply directly has worked in the real world. Faced with a chronic 33% oversupply of teachers and soaring unemployment rates among teaching graduates, Ontario chose to cut in half the number of slots in teacher’s college and to double the length of teacher’s college programs. No doubt this was annoying for the colleges, which made good money off of those largely doomed extraneous pupils, but it did end the oversupply of teachers and tighten their job market, which was probably better for the economy than the counterfactual.

Why? Because having people who’ve completed four years of university do an extra year or two of schooling only to wait around and hope for a job is a real drag. They could be doing something productive with that time! The advantage of increasing gatekeeping around a job lottery, and increasing it as early as possible, is that you force people to go find something productive to do. It is much better for an economy to have hopeful proto-teachers who would otherwise end up as professional resume submitters go into insurance, or real estate, or tutoring, or anything at all productive and commensurate with their education and skills.

There’s a cost here, of course. When you’re gatekeeping (e.g. for teacher’s college or medical school), you’re going to be working with lossy proxies for the thing you actually care about, which is performance in the eventual job. The lossier the proxy, the more you are needlessly depressing the quality of the people who are allowed to do the job – which is a serious concern when you’re dealing with heart surgery, or with the people providing foundational education to your next generation.

You can also find cases where increasing selectiveness at an early stage doesn’t successfully force failed applicants to stop wasting their time and get on with their lives. I was very briefly enrolled in a Ph.D program for biomedical engineering a few years back. Several professors I interviewed with while considering graduate school wanted to make sure I had no aspirations of going to medical school – because they were tired of their graduate students abandoning research as soon as their Ph.D was complete. For these students who didn’t make it into medical school after undergrad, a Ph.D was a ticket to another shot at getting in [6]. Anecdotally, I’ve seen people who fail to get into medical school or optometry school get a master’s degree, then try again.

Banning extra education before medical school cuts against the idea that people should be able to better themselves, or persevere to get to their dreams. It would be institutionally difficult. But I think that it would, in this case, probably be a net good.

There are other fields where limiting supply is rather harmful. Graduate students are very necessary for science. If we punitively limited their number, we might find a lot of valuable scientific progress grinding to a standstill. We could try to replace graduate students with a class of professional scientific assistants, but as long as the lottery for professorship is so appealing (for those who are successful), I bet we’d see a strong preference for Ph.D programs over professional assistantships.

These costs sometimes make it worth going right to the source of the job lottery: the salaries and benefits of people already employed [7]. Of course, this has its own downsides. In the case of doctors, high salaries and benefits are useful for making really clever applicants choose to go into medicine rather than engineering or law. For other jobs, there are the problems of practicality and fairness.

First, it is very hard to get people to agree to wage or benefit cuts and it almost always results in lower morale – even if you have “sound macro-economic reasons” for it. In addition, many jobs with lotteries have them because of union action, not government action. There is no czar here to change everything. Second, people who got into those careers made those decisions based on the information they had at the time. It feels weird to say “we want people to behave more rationally in the job market, so by fiat we will change the salaries and benefits of people already there.” The economy sometimes accomplishes that on its own, but I do think that one of the roles of political economics is to decrease the capriciousness of the world, not increase it.

We can of course change the salaries and benefits only for new employees. But this somewhat confuses the signalling (for a long time, people’s principal examples of the profession will still come from the earlier cohort). It also rarely alleviates a job lottery, because in practice these arrangements give new employees reduced salaries and benefits only for a time. Once they get seniority, they’ll expect to enjoy all the perks of seniority.

Adjunct professorships feel like a failed attempt to remove the job lottery for full professorships. Unfortunately, they’ve only worsened it, by giving people a toe-hold that makes them feel like they might someday claw their way up to full professorship. I feel that when it comes to professors, the only tenable thing to do is greatly reduce salaries (making them closer to the salary progression of mechanical engineers, rather than doctors), hire far more professors, cap graduate students wherever there is high under- and unemployment, and have more professional assistants who do short 2-year college courses. Of course, this is easy to say and much harder to do.

If these problems feel intractable and all the solutions feel like they have significant downsides, welcome to the pernicious world of job lotteries. When I thought of writing about them, coming up with solutions felt like by far the hardest part. There’s a complicated trade-off between proportionality, fairness, and freedom here.

Old-fashioned economic theory held that the freer people were, the better off they would be. I think modern economists increasingly believe this is false. Is a world in which people are free to get very expensive training – despite very long odds of a job, and despite cognitive biases that make it hard to grasp just how punishing those odds are; training, in short, that they’d in expectation be better off without – a better one than a world where they can’t?

I increasingly believe that it isn’t. And I increasingly believe that rough encounters with reality early on and smooth salary gradients are important for preventing that world. Of course, this is easy for me to say. I’ve been very deliberate about taking my skin out of job lotteries. I dropped out of graduate school. I write often and would like to someday make money off of writing, but I viscerally understand the odds of that happening, so I’ve been very careful to have a day job that I’m happy with [8].

If you’re someone who has made the opposite trade, I’m very interested in hearing from you. What experiences do you have that I’m missing that allowed you to make that leap of faith?

Footnotes:

[1] I should mention that there’s a difference between economic value, normative/moral value, and social value and I am only talking about economic value here. I wouldn’t be writing a blog post if I didn’t think writing was important. I wouldn’t be learning French if I didn’t think learning other languages is a worthwhile endeavour. And I love libraries.

And yes, I know there are many career opportunities for people holding those degrees and no, I don’t think they’re useless. I simply think a long-term shift in labour market trends has made them relatively less attractive to people who view a degree as a path to prosperity. ^

[2] That’s not to knock these jobs. I found my time building internal tools for an insurance company to be actually quite enjoyable. But it isn’t the fame and fortune that some bright-eyed kids go into computer science seeking. ^

[3] That is to say, that you enjoy each additional percentage of fulfillment at a multiple (greater than one) of the previous one. ^

[4] This almost certainly isn’t true, given that the marginal happiness curve for basically everything is logarithmic (it’s certainly true for money and I would be very surprised if it wasn’t true for everything else); people may enjoy a 20% fulfilling career twice as much as a 10% fulfilling career, but they’ll probably enjoy a 90% fulfilling career very slightly more than an 80% fulfilling career. ^
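
To illustrate the previous footnote’s claim, here’s a toy sketch assuming a natural-log happiness curve (the exact curve and numbers are my assumptions; the point is only the diminishing marginal returns):

    import math

    # Marginal happiness from an extra ten points of fulfillment,
    # at the low end vs. the high end of the scale:
    low_end = math.log(0.20) - math.log(0.10)   # ~0.69
    high_end = math.log(0.90) - math.log(0.80)  # ~0.12

    print(round(low_end, 2), round(high_end, 2))
    # Going from 10% to 20% fulfilling buys far more extra happiness than
    # going from 80% to 90%, if the curve really is logarithmic.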

[5] It’s obvious that all of this applies especially to unions, which typically fight for seniority to matter quite a bit when it comes to job security and pay and do whatever they can to bid up wages, even if that hurts hiring. This is why young Canadians end up supporting unions in theory but avoiding them in practice. ^

[6] I really hope that this doesn’t catch on. If an increasing number of applicants to medical school already have graduate degrees, it will be increasingly hard for those with “merely” an undergraduate degree to get in to medical school. Suddenly we’ll be requiring students to do 11 years of potentially useless training, just so that they can start the multi-year training to be a doctor. This sort of arms race is the epitome of wasted time.

In many European countries, you can enter medical school right out of high school and this seems like the obviously correct thing to do vis a vis minimizing wasted time. ^

[7] The behaviour of Uber drivers shows job lotteries on a small scale. As Uber driver earnings rise, more people join and all drivers spend more time waiting around, doing nothing. In the long run (here meaning eight weeks), an increase in per-trip costs leads to no change whatsoever in take-home pay.

The taxi medallion system that Uber has largely supplanted prevented this. It moved the job lottery one step further back, with getting the medallion becoming the primary hurdle, forcing those who couldn’t get one to go work elsewhere, but allowing taxi drivers to largely avoid dead times.

Uber could restrict supply, but it doesn’t want to and its customers certainly don’t want it to. Uber’s chronic driver oversupply (relative to a counterfactual where drivers waited around very little) is what allows it to react quickly during peak hours and ensure there’s always an Uber relatively close to where anyone would want to be picked up. ^

[8] I do think that I would currently be a much better writer if I’d instead tried to transition immediately to writing, rather than finding a career and writing on the side. Having a substantial safety net removes almost all of the urgency that I’d imagine I’d have if I was trying to live on (my non-existent) writing income.

There’s a flip side here too. I’ve spent all of zero minutes trying to monetize this blog or worrying about SEO, because I’m not interested in that and I have no need to. I also spend zero time fretting over popularizing anything I write (again, I don’t enjoy this). Having a safety net makes this something I do largely for myself, which makes it entirely fun. ^