Last week I explained how poor decisions by central bankers (specifically failing to spur inflation) can make recessions much worse and lead to slower wage growth during recovery.
(Briefly: inflation during recessions reduces the real cost of payroll, cutting business expenses and making firing people unnecessary. During a recovery, it makes hiring new workers cheaper and so leads to more being hired. Because central bankers failed to create inflation during and after the great recession, many businesses are scared of raising salaries. They believe (correctly) that this will increase their payroll expenses to the point where they’ll have to lay many people off if another recession strikes. Until memories of the last recession fade or central bankers clean up their act, we shouldn’t expect wages to rise.)
Now I’d like to expand on an offhand comment I made about the minimum wage last week and explore how it can affect recovery, especially if it’s indexed to inflation.
The minimum wage represents a special case when it comes to pay cuts and layoffs in recessions. While it’s always theoretically possible to convince people to take a pay cut rather than a layoff (although in practice it’s mostly impossible), this option isn’t available for people who make the minimum wage. It’s illegal to pay them anything less. If bad times strike and business is imperiled, people making the minimum wage might have to be laid off.
I say “might”, because when central bankers aren’t proving useless, inflation can rescue people making the minimum wage from being let go. Inflation makes the minimum wage relatively less valuable, which reduces the cost of payroll relative to other inputs and helps to save jobs that pay minimum wage. This should sound familiar, because inflation helps people making the minimum wage in the exact same way it helps everyone else.
Because of increasingly expensive housing and persistently slow wage growth, some jurisdictions are experimenting with indexing the minimum wage to inflation. This means that the minimum wage rises at the same rate as the cost of living. Most notably (to me, at least), this group includes my home province of Ontario.
When the minimum wage is tied to inflation, recessions can become especially dangerous and drawn out.
With the minimum wage rising in lockstep with inflation, any attempts to decrease payroll costs in real terms (that is to say: inflation adjusted terms) is futile to the extent that payroll expenses are for minimum wage workers. Worse, people who were previously making above the minimum wage and might have had their jobs saved by inflation can be swept up by an increasingly high minimum wage.
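A toy sketch of this squeeze (all numbers here are invented for illustration, not taken from any real jurisdiction):

```python
# Toy model: a worker's nominal wage is frozen during a downturn,
# while an inflation-indexed minimum wage keeps rising beneath them.
def years_until_swept_up(frozen_wage, min_wage, inflation):
    """Years until an indexed minimum wage overtakes a frozen nominal wage."""
    years = 0
    while min_wage < frozen_wage:
        min_wage *= 1 + inflation  # the minimum wage rises with inflation
        years += 1
    return years

# Invented figures: a $15/hour worker, a $14/hour minimum wage, 2% inflation.
print(years_until_swept_up(15.00, 14.00, 0.02))  # → 4 years
```

With these made-up figures, inflation only gets four years to do its work before the indexed minimum wage catches the frozen wage, after which this worker’s real cost to their employer can no longer fall.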
This puts central bankers in a bind. As soon as the minimum wage is indexed to inflation, inflation is no longer a boon to all workers. Suddenly, many workers can find themselves in a “damned if you do, damned if you don’t” situation. Without inflation, they may be too expensive to keep. With it, they may be saved… until the minimum wage comes for them too. If a recession goes on long enough, only high-income workers would be spared.
In addition, minimum wage (or near-minimum wage) workers who are laid off during a period of higher inflation (and in this scenario, there will be many) will suffer comparatively more, as their savings get exhausted even more quickly.
Navigating these competing needs would be an especially tough challenge for certain central banks like the US Federal Reserve – those banks that have dual mandates to maintain stable prices and full employment. If a significant portion of the US ever indexes its minimum wage to inflation, the Fed will have no good options.
It is perhaps darkly humorous that central banks, which bear an unusually large share of the blame for our current slow wage growth, stand to face the greatest challenges from the policies we’re devising to make up for their past shortcomings. Unfortunately, I think a punishment of this sort is rather like cutting off our collective nose to spite our collective face.
There are simple policies we could enact to counter the risks here. Suspending any peg to inflation during years that contain recessions (in Ontario at least, the minimum wage increase due to inflation is calculated annually) would be a promising start. Wage growth after a recession could be ensured with a rebound clause, or better yet, the central bank actually doing its job properly.
I am worried about the political chances (and popularity once enacted) of any such pragmatic policy though. Many people respond to recessions with the belief that the government can make things better by passing the right legislation – forcing the economy back on track by sheer force of ink. This is rarely the case, especially because the legislation that people have historically clamoured for when unemployment is high is the sort that increases wages, not lowers them. This is a disaster when unemployment threatens because of too-high wages. FDR is remembered positively for his policy of increasing wages during the great depression, even though this disastrous decision strangled the recovery in its crib. I don’t expect any higher degree of economic literacy from people today.
To put my fears more plainly, I worry that politicians, faced with waning popularity and a nipping recession, would find allowing the minimum wage to be frozen too much of a political risk. I frankly don’t trust most politicians to follow through with a freeze, even if it’s direly needed.
Minimum wages are one example of a tradeoff we make between broad access and minimum standards. Do we try to make sure everyone who wants a job can have one, or do we make sure people who have jobs aren’t paid too little for their labour, even if that hurts the unemployed? As long as there’s scarcity, we’re going to have to struggle with how to ensure that as many people as possible have their material needs met, and that involves tradeoffs like this one.
But when we’re making these kinds of compassionate decisions, we need to look at the risks of whatever systems we choose. Proponents of indexing the minimum wage to inflation haven’t done a good job of understanding the grave risk it poses to the health of our economy and, perhaps most of all, to the very people they seek to help. In places like Ontario, where the minimum wage is already indexed to inflation, we’re going to pay for their lack of foresight the next time an economic disaster strikes.
The Economist wonders why wage growth isn’t increasing, even as unemployment falls. A naïve reading of supply and demand suggests that it should, so this has become a relatively common talking point in the news, with people of all persuasions scratching their heads. The Economist does it better than most. They at least talk about slowing productivity growth and rising oil prices, instead of blaming everything on workers (for failing to negotiate) or employers (for not suddenly raising wages).
But after reading monetary policy blogs, the current lack of wage growth feels much less confusing to me. Based on this, I’d like to offer one explanation for why wages haven’t been growing. While I may not be an economist, I’ll be doing my best to pass along verbatim the views of serious economic thinkers.
When people talk about stagnant wage growth, this is what they mean. Average weekly wages have increased from $335 a week in 1979 to $350/week in 2018 (all values are 1982 CPI-adjusted US dollars). This is a 4.5% increase, representing $780/year more (1982 dollars) in wages over the whole period. This is not a big change.
More recent wage growth also isn’t impressive. At the depth of the recession, weekly wages were $331. Since then, they’ve increased by $19/week, or 5.7%. However, wages have only increased by $5/week (1.4%) since the previous high in 2009.
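For anyone who wants to check the arithmetic behind these figures, here it is spelled out, using the same numbers quoted above:

```python
# Weekly wage figures quoted above (1982 CPI-adjusted US dollars).
wage_1979 = 335
wage_2018 = 350
wage_trough = 331  # the depth of the recession

growth_since_1979 = (wage_2018 - wage_1979) / wage_1979 * 100
extra_per_year = (wage_2018 - wage_1979) * 52
growth_since_trough = (wage_2018 - wage_trough) / wage_trough * 100

print(f"{growth_since_1979:.1f}%")    # → 4.5%
print(f"${extra_per_year}/year")      # → $780/year
print(f"{growth_since_trough:.1f}%")  # → 5.7%
```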
This doesn’t really match people’s long run expectations. Between 1948 and 1973, hourly compensation increased by 91.3%.
I don’t have an explanation for what happened to once-high wage growth between 1980 and 2008 (see The Captured Economy for what some economists think might explain it). But when it comes to the current stagnation, one factor I don’t hear enough people talking about is bad policy moves by central bankers.
To understand why the central bank affects wage growth, you have to understand something called “sticky wages”.
Wages are considered “sticky” because it is basically impossible to cut them. If companies face a choice between firing people and cutting wages, they’ll almost always choose to fire people. This is because long practice has taught them that the opposite is untenable.
If you cut everyone’s wages, you’ll face an office full of much less motivated people. Those whose skills are still in demand will quickly jump ship to companies that compensate them more in line with market rates. If you just cut the wages of some of your employees (to protect your best performers), you’ll quickly find an environment of toxic resentment sets in.
This is not even to mention that minimum wage laws make it illegal to cut the wages of many workers.
Normally the economy gets around sticky wages with inflation. This steadily erodes wages (including the minimum wage). During boom times, businesses increase wages above inflation to keep their employees happy (or lose them to other businesses that can pay more and need the labour). During busts, inflation can obviate the need to fire people by decreasing the cost of payroll relative to other inputs.
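A quick sketch of that mechanism, with invented numbers:

```python
# With sticky (frozen) nominal wages, inflation quietly reduces the real
# cost of payroll without any explicit pay cut.
def real_cost(nominal, inflation, years):
    """Purchasing-power cost of a frozen nominal wage after some years of inflation."""
    return nominal / (1 + inflation) ** years

# Invented figures: $1,000/week payroll per worker, 3% inflation, 2 years.
cut_pct = (1 - real_cost(1000.0, 0.03, 2) / 1000.0) * 100
print(f"real payroll cost down {cut_pct:.1f}%")  # → down 5.7%
```

No one’s paycheque shrank, but each worker now costs the business almost 6% less relative to everything else it buys.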
But what we saw during the last recession was persistently low inflation rates. Throughout the whole thing, the Federal Reserve kept saying, in effect, “wow, really hard to up inflation; we just can’t manage to do it”.
It’s obviously false that the Fed couldn’t trigger inflation if it wanted to. As a thought experiment, imagine that they had printed enough money to give everyone in the country $1,000,000 and then mailed it out. That would obviously cause inflation. So it is (theoretically) just a matter of scaling that back to the point where we’d only see inflation, not hyper-inflation. Why then did the Fed fail to do something that should be so easy?
According to Scott Sumner, you can’t just look at the traditional instrument the central bank has for managing inflation (the interest rate) to determine if its policies are inflationary or not. If something happens to the monetary supply (e.g. say all banks get spooked and up their reserves dramatically), this changes how effective those tools will be.
After the recession, the Fed held interest rates low and printed money. But given the tightened bank reserves, it didn’t actually print enough money to spur inflation. What looked like easy money (inflationary behaviour) was actually tight money (deflationary behaviour), because another event was constricting the money supply. If the Fed wanted inflation, it would have had to do much more than is required in normal times. The Federal Reserve never realized this, so it was always confused about why inflation failed to materialize.
This set off the perfect storm that led to the long recovery after the recession. Inflation didn’t drive down wages, so it didn’t make economic sense to hire people (or even keep as many people on staff), so aggregate demand was low, so business was bad, so it didn’t make sense to hire people (or keep them on staff)…
If real wages had properly fallen, then fewer people would have been laid off, business wouldn’t have gotten as bad, and the economy could have started to recover much more quickly (with inflation then cooling down and wage growth occurring). Scott Sumner goes so far as to say that the money shock caused by increased cash reserves may have been the cause of the great recession, not the banks failing or the housing bubble.
What does this history have to do with poor wage growth?
Well it turns out that companies have responded to the tight labour market with something other than higher wages: bonuses.
Bonuses are one-time payments that people only expect when times are good. There’s no problem cutting them in recessions.
Switching to bonuses was a calculated move for businesses, because they have lost all faith that the Federal Reserve will do what is necessary (or will know how to do what is necessary) to create the inflation needed to prevent deep recessions. When you know that wages are sticky and that inflation won’t come along to erode them, you have no choice but to pre-emptively limit wages, even when there isn’t a recession. Even when a recession feels fairly far away.
More inflation may feel like the exact opposite of what’s needed to increase wages. But we’re talking about targeted inflation here. If we could trust humans to do the rational thing and bargain for less pay now in exchange for more pay in the future whenever times are tight, then we wouldn’t have this problem and wages probably would have recovered better. But humans are humans, not automatons, so we need to make the best of what we have.
One of the purposes of institutions is to build a framework within which we can make good decisions. From this point of view, the Federal Reserve (and other central banks; the Bank of Japan is arguably far worse) have failed. Institutions failing when confronted with new circumstances isn’t as pithy as “it’s all the fault of those greedy capitalists” or “people need to grow backbones and negotiate for higher wages”, but I think it’s ultimately a more correct explanation for our current period of slow wage growth. This suggests that we’ll only see wage growth recover when the Fed commits to better monetary policy, or enough time passes that everyone forgets the great recession.
In either case, I’m not holding my breath.
I’m ignoring the drop in Q2 2014, where wages fell to $330/week, because this was caused by the end of extended unemployment insurance in America. The end of that program made finding work somewhat more important for a variety of people, which led to an uptick in the supply of labour and a corresponding decrease in the market clearing wage.
Under a fractional reserve banking system, banks can lend out most of their deposits, with only a fraction kept in reserve to cover any withdrawals customers may want to make. This effectively increases the money supply, because you can have dollars (or yen, or pesos) that are both left in a bank account and invested in the economy. When banks hold onto more of their reserves because of uncertainty, they are essentially shrinking the total money supply.
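The mechanism can be sketched with the textbook money-multiplier formula (the reserve ratios here are invented for illustration):

```python
# Textbook money multiplier: with reserve ratio r, each dollar of base
# money can support up to 1/r dollars of deposits circulating as money.
def broad_money(base_money, reserve_ratio):
    return base_money / reserve_ratio

normal = broad_money(100e9, 0.10)   # banks keep 10% in reserve
spooked = broad_money(100e9, 0.20)  # spooked banks double their reserves
print(spooked / normal)  # the effective money supply roughly halves
```

Nothing about the base money changed; banks simply sitting on more reserves halved the amount of money actually at work in the economy.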
Scott Sumner suggests that we should target nominal GDP instead of inflation. When economic growth slows, we’d automatically get higher inflation, as the central bank pumps out money to meet the growth target. When the market begins to give way to roaring growth and speculative bubbles, the high rate of real growth would cause the central bank to step back, tapping the brakes before the economy overheats. I wonder if limiting inflation on the upswing would also have the advantage of increasing real wages as the economy booms?
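To first order, the mechanics look like this (the 5% target below is my assumption, chosen only for illustration):

```python
# Under an NGDP target, inflation (to first order) is whatever gap is
# left between the nominal growth target and real growth.
def implied_inflation(ngdp_target_pct, real_growth_pct):
    return ngdp_target_pct - real_growth_pct

print(implied_inflation(5.0, 1.0))  # slump: 1% real growth → 4.0% inflation
print(implied_inflation(5.0, 4.0))  # boom: 4% real growth → 1.0% inflation
```

The countercyclical behaviour falls straight out of the subtraction: weak real growth automatically brings the extra inflation that erodes sticky wages, and strong real growth automatically withdraws it.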
When dealing with questions of inequality, I often get boggled by the sheer size of the numbers. People aren’t very good at intuitively parsing the difference between a million and a billion. Our brains round both to “very large”. I’m actually in a position where I get reminded of this fairly often, as the difference can become stark when programming. Running a program on a million points of data takes scant seconds. Running the same set of operations on a billion data points can take more than an hour. A million seconds is eleven and a half days. A billion seconds is 31 years.
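The seconds comparison is easy to check:

```python
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365

million_in_days = 1_000_000 / SECONDS_PER_DAY         # ≈ 11.6 days
billion_in_years = 1_000_000_000 / SECONDS_PER_YEAR   # ≈ 31.7 years
print(million_in_days, billion_in_years)
```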
Here I would like to try to give a sense of the relative scale of various concepts in inequality. Just how much wealth do the wealthiest people in the world possess compared to the rest? How much of the world’s middle class is concentrated in just a few wealthy nations? How long might it take developing nations to catch up with developed nations? How long before there exists enough wealth in the world that everyone could be rich if we just distributed it more fairly?
According to the Forbes billionaire list, there are (as of the time of writing) 2,208 billionaires in the world, who collectively control $9.1 trillion in wealth (1 trillion seconds ago was the year 29691 BCE, contemporaneous with the oldest cave paintings in Europe). This is 3.25% of the total global wealth of $280 trillion.
The US Federal Budget for 2019 is $4.4 trillion. State governments and local governments each spend another $1.9 trillion. Some $700 billion is given to those governments by the Federal government. With that subtracted, total US government spending is projected to be $7.5 trillion next year.
Therefore, the world’s billionaires collectively hold assets equivalent to about 1.2 years of US government outlays. Note that US government outlays aren’t equivalent to that money being destroyed; it goes to pay salaries or buy equipment. The comparison here simply illustrates how private wealth stacks up against the budgets that governments control.
If we go down by a factor of 1000, there are about 15 million millionaires in the world (according to Wikipedia). Millionaires collectively hold $37.1 trillion (13.25% of all global wealth). All of the wealth that millionaires hold would be enough to fund US government spending for five years.
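These shares all follow from simple division on the figures quoted above:

```python
# Figures quoted above, in US dollars.
global_wealth = 280e12
billionaire_wealth = 9.1e12
millionaire_wealth = 37.1e12
us_outlays = 7.5e12  # projected total US government spending per year

print(round(billionaire_wealth / global_wealth * 100, 2))  # → 3.25 (% of global wealth)
print(round(millionaire_wealth / global_wealth * 100, 2))  # → 13.25
print(round(billionaire_wealth / us_outlays, 1))           # → 1.2 (years of US outlays)
print(round(millionaire_wealth / us_outlays, 1))           # → 4.9 (about five years)
```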
When we see sensational headlines, like “Richest 1% now owns half the world’s wealth”, we tend to think that we’re talking about millionaires and billionaires. In fact, millionaires and billionaires only own about 16.5% of the world’s wealth (which is still a lot for 0.2% of the world’s population to hold). The rest is owned by less wealthy individuals. The global 1% makes $32,400 a year or more. This is virtually identical to the median American yearly salary. This means that almost fully half of Americans are in the global 1%. Canadians now have a similar median wage, which means a similar number are in the global 1%.
To give a sense of how this distorts the global middle class, I used Povcal.net, the World Bank’s online tool for poverty measurement. I looked for the percentage of each country’s population making between 75% and 125% of the median US income at purchasing power parity (which takes into account cheaper goods and services in developing countries). That works out to $64-$107 US per day, which is what you get when you divide 75% and 125% of the median US wage by 365; as far as I can tell, this is the same procedure that gives us numbers like $1.25 per day income as the threshold for absolute poverty.
I grabbed what I thought would be an interesting set of countries: The G8, BRICS, The Next 11, Australia, Botswana, Chile, Spain, and Ukraine. These 28 countries had – in the years surveyed – a combined population of 5.3 billion people and had among them the 17 largest economies in the world (in nominal terms). You can see my spreadsheet collecting this data here.
The United States had by far the largest estimated middle class (73 million people), followed by Germany (17 million), Japan (12 million), France (12 million), and the United Kingdom (10 million). Canada came next with 8 million, beating most larger countries, including Brazil, Italy, Korea, Spain, Russia, China, and India. Iran and Mexico have largely similar middle-class sizes, despite Mexico being substantially larger. Botswana ended up having a larger middle class than Ukraine.
This speaks to a couple of problems when looking at inequality. First, living standards (and therefore class distinctions) are incredibly variable from country to country. A standard of living that is considered middle class in North America might not be the same in Europe or Japan. In fact, I’ve frequently heard it said that the North American middle class (particularly Americans and Canadians) consume more than their equivalents in Europe. Therefore, this should be looked at as a comparison of North American equivalent middle class – who, as I’ve already said, are about 50% encompassed in the global 1%.
Second, we tend to think of countries in Europe as generally wealthier than countries in Africa. This isn’t necessarily true. Botswana’s GDP per capita is actually three times larger than Ukraine’s when unadjusted and more than twice as large at purchasing power parity (which takes into account price differences between countries). It also has a higher GDP per capita than Serbia, Albania, and Moldova (even at purchasing power parity). Botswana, Seychelles, and Gabon have per capita GDPs at purchasing power parity that aren’t dissimilar from those possessed by some less developed European countries.
Botswana, Gabon, and Seychelles have all been distinguished by relatively high rates of growth since decolonization, which has by now made them “middle income” countries. Botswana’s growth has been so powerful and sustained that in my spreadsheet, it has a marginally larger North American equivalent middle class than Nigeria, a country approximately 80 times its size.
Of all the listed countries, Canada had the largest middle class as a percent of its population. This no doubt comes partially from using North American middle-class standards (and perhaps also because of the omission of the small, homogenous Nordic countries), although it is also notable that Canada has the highest median income of major countries (although this might be tied with the United States) and the highest 40th percentile income. America dominates income for people in the 60th percentile and above, while Norway comes out ahead for people in the 30th percentile or below.
The total population of the (North American equivalent) middle class in these 28 countries was 170 million, which represents about 3% of their combined population.
There is a staggering difference in consumption between wealthy countries and poor countries, driven in part by the staggering difference in the size of the middle (and higher) classes: people with income to spend on things beyond immediate survival. According to Trading Economics, the total disposable income of China is $7.84 trillion (dollars are US). India has $2.53 trillion. Canada, with a population almost 40 times smaller than either, has a total disposable income of $0.96 trillion, while America, with a population about four times smaller than either China or India, has a disposable income of $14.79 trillion, larger than China and India put together. If China were as wealthy per capita as Canada, its yearly disposable income would be almost $40 trillion, more than the current disposable incomes of China, India, America, and Canada combined.
According to Wikipedia, The Central African Republic has the world’s lowest GDP per capita at purchasing power parity, making it a good candidate for the title of “world’s poorest country”. Using Povcal, I was able to estimate the median wage at $1.33 per day (or $485 US per year). If the Central African Republic grew at the same rate as Botswana did post-independence (approximately 8% year on year) starting in 2008 (the last year for which I had data) and these gains were seen in the median wage, it would take until 2139 for it to attain the same median wage as the US currently enjoys. This of course ignores development aid, which could speed up the process.
All of the wealth currently in the world is equivalent to $36,000 per person (although this is misleading, because much of the world’s wealth is illiquid – it’s in houses and factories and cars). All of the wealth currently on the TSX is equivalent to about $60,000 per Canadian. All of the wealth currently on the NYSE is equivalent to about $65,000 per American. In just corporate shares alone, Canada and the US are almost twice as wealthy as the global average. This doesn’t even get into the cars, houses, and other resources that people own in those countries.
If total global wealth were to grow at the same rate as the market, we might expect to have approximately $1,000,000 per person (not inflation adjusted) sometime between 2066 and 2072, depending on population growth. If we factor in inflation and want there to be approximately $1,000,000 per person in present dollars, it will instead take until sometime between 2102 and 2111.
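A sketch of the compounding arithmetic behind these dates (the 7% nominal and 4% real growth rates are my assumptions, standing in for “the same rate as the market”, and population is held flat):

```python
import math

def year_reached(start_wealth, target, growth_rate, start_year=2018):
    """First year per-person wealth hits `target`, compounding at a constant rate."""
    years = math.ceil(math.log(target / start_wealth) / math.log(1 + growth_rate))
    return start_year + years

# Assumed ~7% nominal market-rate growth:
print(year_reached(36_000, 1_000_000, 0.07))  # falls in the 2066-2072 window
# Assumed ~4% real (inflation-adjusted) growth:
print(year_reached(36_000, 1_000_000, 0.04))  # falls in the 2102-2111 window
```

Shaving even one percentage point off the growth rate pushes the date out by decades, which is why the inflation-adjusted estimate lands some forty years later.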
This assumes too much, of course. But it gives you a sense of how much we have right now and how long it will take to have – as some people incorrectly believe we already do – enough that everyone could (in a fair world) have so much they might never need to work.
This is not, of course, to say that things are fair today. It remains true that the median Canadian or American makes more money every year than 99% of the world, and that the wealth possessed by those median Canadians or Americans and those above them is equivalent to that held by the bottom 50% of the world. Many of us, very many of those reading this perhaps, are the 1%.
The Cambridge Analytica scandal has put tech companies front and centre. If the thinkpieces along the lines of “are the big tech companies good or bad for society” were coming out any faster, I might have to doubt even Google’s ability to make sense of them all.
This isn’t another one of those thinkpieces. Instead it’s an attempt at an analysis. I want to understand in monetary terms how much one tech company – Google – puts into or takes out of everyone’s pockets. This analysis is going to act as a template for some of the more detailed analyses of inequality I’d like to do later, so if you have a comment about methodology, I’m eager to hear it.
Here are the basics: Google is a large technology company that primarily makes money off of ad revenues. Since Google is a publicly traded company, statistics are easy to come by. In 2016, Google brought in $89.5 billion in revenue and about 89% of that was from advertising. Advertising is further broken down between advertising on Google sites (e.g. Google Search, Gmail, YouTube, Google Maps, etc.), which accounts for 80% of advertising revenue, and advertising on partner sites, which covers the remainder. The remaining 11% is made up of a variety of smaller projects – selling corporate licenses of its GSuite office software, the Google Play Store, the Google Cloud Computing Platform, and several smaller projects.
There are two ways that we can track how Google’s existence helps or hurts you financially. First, there’s the value of the software it provides. Google’s search has become so important to our daily life that we don’t even notice it anymore – it’s like breathing. Then there’s YouTube, which has more high-quality content than anyone could watch in a lifetime. There’s Google Docs, which are almost a full (free!) replacement for Microsoft Office. There’s Gmail, which is how basically everyone I know does their email. And there’s Android, currently the only viable alternative to iOS. If you had to pay for all of this stuff, how much would you be out?
Second, we can look at how its advertising arm has changed the prices of everything we buy. If Google’s advertising system has driven an increase in spending on advertising (perhaps by starting an arms race in advertising, or by arming marketing managers with graphs, charts and metrics that they can use to trigger increased spending), then we’re all ultimately paying for Google’s software with higher prices elsewhere (we could also be paying with worse products at the same prices, as advertising takes budget that would otherwise be used on quality). On the other hand, if more targeted advertising has led to less advertising overall, then everything will be slightly less expensive (or higher quality) than the counterfactual world in which more was spent on advertising.
Once we add this all up, we’ll have some sort of answer. We’ll know if Google has made us better off, made us poorer, or if it’s been neutral. This doesn’t speak to any social benefits that Google may provide (if they exist – and one should hope they do exist if Google isn’t helping us out financially).
To estimate the value of the software Google provides, we should compare it to the most popular paid alternatives, and look into the existence of any other good free alternatives. Search has no real paid competitor, so we can’t evaluate it this way; given how much value it obviously provides, let’s agree to break any tie in favour of Google helping us.
On the other hand, Google Docs is very easy to compare with other consumer alternatives. Microsoft Office Home Edition costs $109 yearly. WordPerfect (not that anyone uses it anymore) is $259.99 (all prices should be assumed to be in Canadian dollars unless otherwise noted).
Free alternatives exist in the form of OpenOffice and LibreOffice, but both tend to suffer from bugs. Last time I tried to make a presentation in OpenOffice I found it crashed approximately once per slide. I had a similar experience with LibreOffice. I once installed it for a friend who was looking to save money and promptly found myself fixing problems with it whenever I visited his house.
My crude estimate is that I’d expect to spend four hours per year troubleshooting either free alternative. Valuing this time at Ontario’s minimum wage of $14/hour, and accepting that the only office suite anyone under 70 ever actually buys is Microsoft’s offering, we see that Google saves you $109 per year compared to Microsoft and $56 per year compared to using free software.
With respect to email, there are numerous free alternatives to Gmail (like Microsoft’s Hotmail). In addition, many internet service providers bundle free email addresses in with their service. Taking all this into account, Gmail probably doesn’t provide much in the way of direct monetary value to consumers, compared to its competitors.
Google Maps is in a similar position. There are several alternatives that are also free, like Apple Maps, Waze (also owned by Google), Bing Maps, and even the Open Street Map project. Even if you believe that Google Maps provides more value than these alternatives, it’s hard to quantify it. What’s clear is that Google Maps isn’t so far ahead of the pack that there’s no point to using anything else. The prevalence of Google Maps might even be because of user laziness (or anticompetitive behaviour by Google). I’m not confident it’s better than everything else, because I’ve rarely used anything else.
Android is the last Google project worth analyzing and it’s an interesting one. On one hand, it looks like Apple phones tend to cost more than comparable Android phones. On the other hand, Apple is a luxury brand and it’s hard to tell how much of the added price you pay for an iPhone is attributable to that, to differing software, or to differing hardware. Comparing a few recent phones, there’s something like a $50-$200 gap between flagship Android phones and iPhones of the same generation. I’m going to assign a plausible sounding $20 cost saved per phone from using Android, then multiply this by the US Android market share (53%), to get $11 for the average consumer. The error bars are obviously rather large on this calculation.
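Spelling out that expected-value calculation:

```python
# Expected per-consumer saving: an assumed per-phone saving, weighted by
# the share of consumers who actually buy Android.
saving_per_phone = 20.0  # assumed, discounted well below the raw $50-$200 flagship gap
android_share = 0.53     # US Android market share

expected_saving = saving_per_phone * android_share
print(round(expected_saving))  # → 11 dollars per phone for the average consumer
```

The $20 figure is doing all the work here; pick a number anywhere in the $50-$200 gap instead and the expected saving scales proportionally, which is what the large error bars mean in practice.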
(There may also be second order effects from increased competition here; the presence of Android could force Apple to develop more features or lower its prices slightly. This is very hard to calculate, so I’m not going to try to.)
When we add this up, we see that Google Docs saves anyone who does word processing $56-$109 per year and Android saves the average phone buyer $11 per phone, or roughly every two years. This means the average person probably sees some slight yearly financial benefit from Google, although I’m not sure the median person does. The median person and the average person do both get some benefit from Google Search, so there’s something in the plus column here, even if it’s hard to quantify.
Now, on to advertising.
I’ve managed to find an assortment of sources that give a view of total advertising spending in the United States over time, as well as changes in the GDP and inflation. I’ve compiled it all in a spreadsheet with the sources listed at the bottom. Don’t just take my word for it – you can see the data yourself. Overlapping this, I’ve found data for Google’s revenue during its meteoric rise – from $19 million in 2001 to $110 billion in 2017.
Google ad revenue represented 0.03% of US advertising spending in 2002. By 2012, a mere 10 years later, it was equivalent to 14.7% of the total. Over that same time, overall advertising spending increased from $237 billion in 2002 to $297 billion in 2012 (2012 is the last date I have data for total advertising spending). Note however that this isn’t a true comparison, because some Google revenue comes from outside of America. I wasn’t able to find revenue broken down in greater depth than this, so I’m using these numbers in an illustrative manner, not an exact manner.
So, does this mean that Google’s growth drove a growth in advertising spending? Probably not. As the economy is normally growing and changing, the absolute amount of advertising spending is less important than advertising spending compared to the rest of the economy. Here we actually see the opposite of what a naïve reading of the numbers would suggest. Advertising spending grew more slowly than economic growth from 2002 to 2012. In 2002, it was 2.3% of the US economy. By 2012, it was 1.9%.
This also isn’t evidence that Google (and other targeted advertising platforms) have decreased spending on advertising. Historically, advertising has represented between 1.2% of US GDP (in 1944, with the Second World War dominating the economy) and 3.0% (in 1922, during the “roaring 20s”). Since 1972, the total has been more stable, varying between 1.7% and 2.5%. A Student’s t-test confirms (p-values around 0.35 for 1919-2002 vs. 2003-2012 and 1972-2002 vs. 2003-2012) that there’s no significant difference between post-Google levels of spending and historical levels.
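For those who want to see the shape of that comparison, here’s a sketch of a two-sample (Welch’s) t-test in pure Python. The numbers below are placeholders standing in for the spreadsheet series, not the actual historical figures, so the method is what matters here, not the output:

```python
import math

# Placeholder series of US ad spending as a share of GDP (percent).
# These are illustrative values, NOT the real historical data.
pre_google = [2.5, 1.7, 2.0, 2.3, 1.9, 2.4, 2.1, 1.8, 2.2, 2.0]
post_google = [2.3, 2.2, 1.9, 2.1, 2.0, 1.9, 2.2, 2.0, 2.1, 1.9]

def welch_t(a, b):
    """Welch's two-sample t statistic (no equal-variance assumption)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

t = welch_t(pre_google, post_google)
print(round(t, 2))  # ≈ 0.32 with these placeholder values: nowhere near significance
```

With a t statistic this small (conventional significance needs roughly |t| > 2 at these sample sizes), the two periods are statistically indistinguishable, which is the same conclusion the real data supports.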
Even if this were lower than historical bounds, it wouldn’t necessarily prove that Google (and its ilk) are causing reduced ad spending. It could be that trends would have driven advertising spending even lower, absent Google’s rise. All we can say for sure is that Google hasn’t caused an ahistorically large change in advertising rates. In fact, the only clear features in the advertising trends are the peak in the early 1920s that has never been recaptured and a uniquely low dip in the 1940s that was obviously caused by World War II. For all that people talk about tech disrupting advertising and ad-supported businesses, these current changes are still less drastic than changes we’ve seen in the past.
The change in advertising spending during the years of Google’s growth could be driven by Google and similar advertising services. But it also could be normal year-to-year variation, driven by trends similar to those that have driven it in the past. If I had a Ph.D. in advertising history, I might be able to tell you what those trends are, but from my present position, all I can say is that the current movement doesn’t seem that weird from a historical perspective.
In summary, it looks like the expected value for the average person from Google products is close to $0, but leaning towards positive. It’s likely to be positive for you personally if you need a word processor or use Android phones, but the error bounds on advertising mean that it’s hard to tell. Furthermore, we can confidently say that the current disruption in the advertising space is probably less severe than the historical disruption to the field during World War II. There’s also a chance that more targeted advertising has led to less advertising spending (and this does feel more likely than it leading to more spending), but the historical variations in data are large enough that we can’t say for sure.
Under the Partial Test Ban Treaty (PTBT), all nuclear tests except for those underground are banned. Under the Non-Proliferation Treaty (NPT), only the permanent members of the UN Security Council are legally allowed to possess nuclear weapons. Given the public outcry over fallout that led to the PTBT and the worries over widespread nuclear proliferation that led to the NPT, it’s clear that we require something beyond pinky promises to verify that countries are meeting the terms of these treaties.
But how do we do so? How can you tell when a country tests an atomic bomb? How can you tell who did it? And how can one differentiate a bomb on the surface from a bomb in the atmosphere from a bomb in space from a bomb underwater from a bomb underground?
I’m going to focus on two efforts to monitor nuclear weapons: the national security apparatus of the United States and the Comprehensive Test Ban Treaty Organization (CTBTO) Preparatory Commission’s International Monitoring System (IMS). Monitoring falls into five categories: Atmospheric Radionuclide Monitoring, Seismic Monitoring, Space-based Monitoring, Hydroacoustic Monitoring, and Infrasound Monitoring.
Atmospheric Radionuclide Monitoring
Nuclear explosions generate radionuclides, either by dispersing unreacted fuel, as direct products of fission, or by interactions between neutrons and particles in the air or ground. These radionuclides are widely dispersed from any surface testing, while only a few fission products (mainly various radionuclides of the noble gas xenon) can escape from properly conducted underground tests.
For the purposes of minimizing fallout, underground tests are obviously preferred. But because they only emit small amounts of a few xenon radionuclides, they are much harder for radionuclide monitoring to detect.
Detecting physical particles is relatively easy. There are 80 IMS stations scattered around the world. Each is equipped with an air intake and a filter. Every day, the filter is changed and then prepared for analysis. Analysis involves waiting a day (for irrelevant radionuclides to decay), then reading decay events from the filter for a further day. This gives scientists an idea of what radioactive elements are present.
Any deviations from the baseline at a certain station can be indicative of a nuclear weapon test, a nuclear accident, or changing wind patterns bringing known radionuclides (e.g. from a commercial reactor) to a station where they normally aren’t present. Wind analysis and cross validation with other methods are used to corroborate any suspicious events.
Half of the IMS stations are set up to do the more difficult xenon monitoring. Here air is pumped through a material with a reasonably high affinity for xenon. Apparently activated charcoal will work, but more sophisticated alternatives are being developed. The material is then induced to release the xenon (with activated charcoal, this is accomplished via heating). This process is repeated several times, with the output of each step pumped to a fresh piece of activated charcoal. Multiple cycles ensure that only relatively pure xenon get through to analysis.
Once xenon is collected, isotope analysis must be done to determine which (if any) radionuclides of xenon are present. This is accomplished either by comparing the beta decay of the captured xenon with its gamma decay, or looking directly at gamma decay with very precise gamma ray measuring devices. Each isotope of xenon has a unique half-life (which affects the frequency with which it emits beta- and gamma-rays) and a unique method of decay (which determines if the decay products are primarily alpha-, beta-, or gamma-rays). Comparing the observed decay events to these “fingerprints” allows for the relative abundance of xenon nuclides to be estimated.
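The core of that “fingerprinting” idea can be sketched with a decay-curve decomposition. Real stations use beta-gamma coincidence spectroscopy, but the simplified version below just solves for the initial activities of two xenon isotopes from two total-activity readings, using approximate textbook half-lives (Xe-133 ≈ 5.25 days, Xe-135 ≈ 9.1 hours):

```python
import math

# Approximate half-lives in hours; treat these as rough textbook values.
HALF_LIFE = {"Xe-133": 5.25 * 24, "Xe-135": 9.1}
DECAY = {iso: math.log(2) / t for iso, t in HALF_LIFE.items()}

def activity_at(initial, t):
    """Total activity at time t (hours) from a mix of initial activities."""
    return sum(a0 * math.exp(-DECAY[iso] * t) for iso, a0 in initial.items())

def unmix(a_t1, a_t2, t1, t2):
    """Recover initial Xe-133/Xe-135 activities from two total readings by
    solving the 2x2 system A(t) = a133*e^(-l133*t) + a135*e^(-l135*t)."""
    l1, l2 = DECAY["Xe-133"], DECAY["Xe-135"]
    m = [[math.exp(-l1 * t1), math.exp(-l2 * t1)],
         [math.exp(-l1 * t2), math.exp(-l2 * t2)]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    a133 = (a_t1 * m[1][1] - a_t2 * m[0][1]) / det
    a135 = (a_t2 * m[0][0] - a_t1 * m[1][0]) / det
    return a133, a135

# Simulate a sample that starts as 60% Xe-133, 40% Xe-135 (arbitrary units)...
true_mix = {"Xe-133": 60.0, "Xe-135": 40.0}
r1 = activity_at(true_mix, 0.0)
r2 = activity_at(true_mix, 24.0)
# ...and check that the composition is recoverable from the two readings.
print(unmix(r1, r2, 0.0, 24.0))  # ≈ (60.0, 40.0)
```

Because the two isotopes decay at very different rates, the day-apart readings contain enough information to separate them, which is essentially why waiting a day before analysis helps so much.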
There are some background xenon radionuclides from nuclear reactors and even more from medical isotope production (where we create unstable nuclides in nuclear reactors for use in medical procedures). Looking at global background data you can see the medical isotope production in Ontario, Europe, Argentina, Australia and South Africa. I wonder if this background effect makes world powers cautious about new medical isotope production facilities in countries that are at risk of pursuing nuclear weapons. Could Iran’s planned medical isotope complex have been used to mask nuclear tests?
Not content merely to host several monitoring stations and be party to the data of the whole global network of IMS stations, the United States also has the WC-135 “Constant Phoenix” plane, a Boeing C-135 equipped with mobile versions of particulate and xenon detectors. The two WC-135s can be scrambled anywhere a nuclear explosion is suspected to look for evidence. A WC-135 gave us the first confirmation that the blast from the 2006 North Korean nuclear test was indeed nuclear, several days before the IMS station in Yellowknife, Canada confirmed a spike in radioactive xenon and wind modelling pinpointed the probable location as inside North Korea.
Given that fewer monitoring stations are equipped with xenon radionuclide detectors and that the background “noise” from isotope production can make radioactive xenon from nuclear tests hard to positively identify, it might seem like nuclear tests are easy to hide underground.
That isn’t the case.
Seismic Monitoring
A global network of seismometers ensures that any underground nuclear explosion is promptly detected. These are the same seismometers that organizations like the USGS (United States Geological Survey) use to detect and pinpoint earthquakes. In fact, the USGS provides some of the 120 auxiliary stations that the CTBTO can call on to supplement its fifty seismic monitoring stations.
Seismometers are always on, looking for seismic disturbances. Substantial underground nuclear tests produce shockwaves that are well within the detection limit of modern seismometers. The sub-kiloton North Korean nuclear test in 2006 registered as equivalent to a magnitude 4.1 earthquake. A quick survey of recent earthquakes will probably show you dozens of detected events less powerful than even that small North Korean test.
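To give a feel for how seismologists turn a magnitude into a yield estimate, here’s a sketch using one empirical form of the kind quoted in the literature, mb = a + b·log10(yield in kilotons). The constants below are assumptions for illustration; real values are calibrated per test site and geology and vary between sources:

```python
# Illustrative magnitude-yield relation: mb = a + b * log10(yield_kt).
# a=4.45, b=0.75 are assumed constants, NOT calibrated values for any
# particular test site.
A, B = 4.45, 0.75

def yield_from_magnitude(mb):
    """Invert the magnitude-yield relation to estimate yield in kilotons."""
    return 10 ** ((mb - A) / B)

# The 2006 North Korean test registered around magnitude 4.1:
print(round(yield_from_magnitude(4.1), 2))  # ~0.34 kt, i.e. sub-kiloton
```

With these constants, magnitude 4.1 works out to roughly a third of a kiloton, consistent with the sub-kiloton estimates for that test.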
This probably leads you to the same question I found myself asking, namely: “if earthquakes are so common and these detectors are so sensitive, how can they ever tell nuclear detonations from earthquakes?”
It turns out that underground nuclear explosions might rattle seismometers like earthquakes do, but they do so with characteristics very different from most earthquakes.
First, the waveform is different. Imagine you’re holding a slinky and a friend is holding the other end. There are two main ways you can create waves. The first is by shaking it from side to side or up and down. Either way, there’s a perspective from which these waves will look like the letter “s”.
The second type of wave can be made by moving your arm forward and backwards, like you’re throwing and catching a ball. These waves will cause moving regions where the slinky is bunched more tightly together and other regions where it is more loosely packed.
These are analogous to the two main types of body waves in seismology. The first (the s-shaped one) is called an S-wave (although the “S” here stands for “shear” or “secondary” and only indicates the shape by coincidence), while the second is called a P-wave (for “pressure” or “primary”).
Earthquakes normally have a mix of P-waves and S-waves, as well as surface waves created by interference between the two. This is because earthquakes are caused by slipping tectonic plates. This slipping gives some lateral motion to the resulting waves. Nuclear explosions lack this side to side motion. The single, sharp impact from them on the surrounding rocks is equivalent to the wave you’d get if you thrust your arm forward while holding a slinky. It’s almost all P-wave and almost no S-wave. This is very distinctive against a background of earthquakes. The CTBTO is kind enough to show what this difference looks like; in this image, the top event is a nuclear test and the bottom event is an earthquake of a similar magnitude in a similar location (I apologize for making you click through to see the image, but I don’t host copyrighted images here).
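One simple way to operationalize this P-wave-versus-S-wave distinction is an amplitude-ratio discriminant. The threshold below is made up for illustration; real discriminants are frequency-dependent and calibrated per region:

```python
def classify_event(p_amplitude, s_amplitude, threshold=3.0):
    """Toy P/S amplitude discriminant: explosions are P-wave dominated,
    earthquakes have substantial S-wave energy.

    The threshold is an assumption for illustration only; operational
    discriminants are calibrated per region and frequency band."""
    ratio = p_amplitude / s_amplitude
    return "explosion-like" if ratio > threshold else "earthquake-like"

print(classify_event(p_amplitude=9.0, s_amplitude=1.5))  # explosion-like
print(classify_event(p_amplitude=2.0, s_amplitude=4.0))  # earthquake-like
```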
There’s one further way that the waves from nuclear explosions stand out. They’re caused by a single point source, rather than kilometers of rock. This means that when many seismic stations work together to find the cause of a particular wave, they’re actually able to pinpoint the source of any explosion, rather than finding a broad front like they would for an earthquake.
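The pinpointing itself can be sketched as a grid search over arrival-time differences between stations (differencing cancels the unknown origin time). Everything here is a toy: flat 2D geometry, made-up station coordinates in kilometres, and a single constant wave speed instead of a real travel-time model:

```python
import math

SPEED = 8.0  # km/s; a rough P-wave speed, assumed constant for the toy model

stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
true_source, t0 = (30.0, 70.0), 5.0  # unknown in practice

def travel_time(src, stn):
    return math.dist(src, stn) / SPEED

arrivals = [t0 + travel_time(true_source, s) for s in stations]

def locate(arrivals, stations, step=1.0):
    """Return the grid point whose predicted arrival-time differences
    best match the observed ones."""
    obs = [t - arrivals[0] for t in arrivals]
    best, best_err = None, float("inf")
    x = 0.0
    while x <= 100.0:
        y = 0.0
        while y <= 100.0:
            pred = [travel_time((x, y), s) for s in stations]
            pred = [p - pred[0] for p in pred]
            err = sum((o - p) ** 2 for o, p in zip(obs, pred))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

print(locate(arrivals, stations))  # recovers roughly (30.0, 70.0)
```

With four stations and a point source, the residuals have a single sharp minimum; an extended rupture front, by contrast, smears that minimum out.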
The fifty IMS stations automatically provide a continuous stream of data to the CTBTO, which sifts through this data for any events that are overwhelmingly P-Waves and have a point source. Further confirmation then comes from the 120 auxiliary stations, which provide data on request. Various national and university seismometer programs get in on this too (probably because it’s good for public relations and therefore helps to justify their budgets), which is why it’s not uncommon to see several estimates of yield soon after seismographs pick up on nuclear tests.
Space Based Monitoring
This is the only type of monitoring that isn’t done by the CTBTO Preparatory Commission, which means that it is handled by state actors – whose interests necessarily veer more towards intelligence gathering than monitoring treaty obligations per se.
The United States began its space-based monitoring program in response to the Limited Test Ban Treaty (another name for the Partial Test Ban Treaty), which left verification explicitly to the major parties involved. The CTBTO Preparatory Commission was actually formed in response to a different treaty, the Comprehensive Test Ban Treaty, which is not yet fully in force (hence why the organization ensuring compliance with it is called the “Preparatory Commission”).
The United States first fulfilled its verification obligations with the Vela satellites, which were equipped with gamma-ray detectors, x-ray detectors, electromagnetic pulse detectors (which can detect the electro-magnetic pulse from high-altitude nuclear detonations) and an optical sensor called a bhangmeter.
Bhangmeters (the name is a reference to bhang, a preparation of cannabis, with the implied subtext that you’d have to be high to believe they would work) are composed of a photodiode (a device that produces current when illuminated), a timer, and some filtering components. Bhangmeters are set up to look for the distinctive nuclear “double flash”, caused when the air compressed by a nuclear blast briefly obscures the central fireball.
The bigger a nuclear explosion, the larger the compression and the longer the central fireball is obscured. The timer picks up on this, estimating nuclear yield from the delay between the initial light and its return.
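That timing logic is simple enough to sketch: find the first brightness peak, descend to the minimum, then find the second peak. A real bhangmeter does this in analog hardware with calibrated thresholds; this is just the bare idea on a sampled light curve:

```python
def second_max_delay(samples, dt):
    """Delay between the first brightness peak and the second
    (post-minimum) peak in a sampled light curve.

    `samples` is a list of brightness readings, `dt` the sampling
    interval in seconds. A bare-bones sketch, not flight hardware."""
    # Climb to the first local maximum.
    i = 1
    while i < len(samples) - 1 and samples[i + 1] >= samples[i]:
        i += 1
    first_peak = i
    # Descend to the local minimum.
    while i < len(samples) - 1 and samples[i + 1] <= samples[i]:
        i += 1
    # Climb to the second maximum.
    while i < len(samples) - 1 and samples[i + 1] >= samples[i]:
        i += 1
    return (i - first_peak) * dt

# A stylized double flash: sharp spike, dip, slower second rise and fall.
curve = [0, 8, 10, 6, 3, 2, 3, 5, 7, 9, 8, 6]
print(second_max_delay(curve, dt=0.001))  # delay between peaks, in seconds
```

Since the delay grows with yield, reading it off the timer gives a rough yield estimate for free.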
The bhangmeter works because very few natural (or human) phenomena produce flashes that are as bright or distinctive as nuclear detonations. A properly calibrated bhangmeter will filter out continuous phenomena like lightning (or will find them too faint to detect). Other very bright events, like comets breaking up in the upper atmosphere, only provide a single flash.
There’s only been one possible false positive since the bhangmeters went live in 1967; a double flash was detected in the Southern Indian Ocean, but repeated sorties by the WC-135s detected no radionuclides. The event has never been conclusively proved to be nuclear or non-nuclear in origin and remains one of the great unsolved mysteries of the age of widespread atomic testing.
By the time of this (possible) false positive, the bhangmeters had also detected 41 genuine nuclear tests.
The Vela satellites are no longer in service, but the key technology they carried (bhangmeters, x-ray detectors, and EMP detectors) lives on in the US GPS satellite constellation, which does double duty as its space-based nuclear sentinels.
One last piece of historical trivia: when looking into unexplained gamma-ray readings produced by the Vela satellites, US scientists discovered gamma-ray bursts, an energetic astronomical phenomenon associated with supernovae and merging neutron stars.
Hydroacoustic Monitoring
Undersea explosions don’t have a double flash, because steam and turbulence quickly obscure the central fireball and don’t clear until well after the fireball has subsided. It’s true that radionuclide detection should eventually turn up evidence of any undersea nuclear tests, but it’s still useful to have a more immediate detection mechanism. That’s where hydroacoustic monitoring comes in.
There are actually two types of hydroacoustic monitoring. There are six stations that use true underwater monitoring with triplets of hydrophones (so that signal direction can be determined via triangulation), which are very sensitive but also very expensive (as hydrophones must be installed at a depth of approximately one kilometer, where sound transmission is best). There are also five land-based stations, which use seismographs on steeply sloped islands to detect the seismic waves underwater sounds make when they hit land. Land-based monitoring is less accurate, but requires little in the way of specialized hardware, making it much cheaper.
In either case, data is streamed directly to CTBTO headquarters in Vienna, where it is analyzed and forwarded to states that are party to the CTBT. At the CTBTO, the signal is split into different channels based on a known library of undersea sounds, and explosions are separated from natural phenomena (like volcanoes, tsunamis, and whales) and man-made noises (like gas exploration, commercial shipping, and military drills). Signal processing and analysis – especially of hydrophone data – is a very mature field, so the CTBTO doesn’t lack for techniques to refine its estimates of events.
Infrasound Monitoring
Infrasound monitoring stations are the last part of the global monitoring system and represent the best way for the CTBTO (rather than national governments with the resources to launch satellites) to detect atmospheric nuclear tests. Infrasound stations try to pick up the very low frequency sound waves created by nuclear explosions – and a host of other things, like volcanoes, planes, and mining.
A key consideration with infrasound stations is reducing background noise. For this, being far away from human habitation and blocked from the wind is ideal. Whenever this cannot be accomplished (e.g. there’s very little cover from the wind in Antarctica, where several of the sixty stations are), more infrasound arrays are needed.
The components of the infrasound arrays look very weird.
What you see here are a bunch of pipes that all feed through to a central microbarometer, which is what actually measures the infrasound by detecting slight changes in air pressure. This setup filters out a lot of the wind noise and mostly just lets infrasound through.
Like the hydroacoustic monitoring system, data is sent to the CTBTO in real time and analyzed there, presumably drawing on a similar library of recorded nuclear test detonations and employing many of the same signal processing techniques.
Ongoing research into wind noise reduction might eventually make the whole set of stations much more sensitive than it is now. Still, even the current iteration of infrasound monitoring should be enough to detect any nuclear tests in the lower atmosphere.
The CTBTO has a truly great website that really helped me put together this blog post. They provide a basic overview of the four international monitoring systems I described here (they don’t cover space-based monitoring because it’s outside of their remit), as well as pictures, a glossary, and a primer on the analysis they do. If you’d like to read more about how the international monitoring system works and how it came into being, I recommend visiting their website.
This post, like many of the posts in my nuclear weapon series came about because someone asked me a question about nuclear weapons and I found I couldn’t answer quite as authoritatively as I would have liked. Consequently, I’d like to thank Cody Wild and Tessa Alexanian for giving me the impetus to write this.
Every day, there are conflicts between decision makers. These occur on the international scale (think the Cuban Missile Crisis), the provincial level (Ontario’s sex-ed curriculum anyone?) and the local level (Toronto’s bike lane kerfuffle). Conflict is inevitable. Understanding it, regrettably, is not.
The final results of many conflicts can look baffling from the outside. Why did the Soviet Union retreat in the Cuban missile crisis? Why do some laws pass and others die on the table?
The most powerful tool I have for understanding the ebb and flow of conflict is the Graph Model of Conflict Resolution (GMCR). I had the immense pleasure of learning about it under the tutelage of Professor Keith Hipel, one of its creators. Over the next few weeks, I’d like to share it with you.
GMCR is done in two stages, modelling and analysis.
To model a problem, there are four steps:
Select a point in time for the model
Make a list of the players and their options
Remove outcomes that don’t make sense
Create preference vectors for all players
The easiest way to understand this is to see it done.
Let’s look at the current nuclear stand-off on the Korean peninsula. I wrote this on Sunday, October 29th, 2017, so that’s the point in time we’ll use. To keep things from getting truly out of hand in our first example, let’s just focus on the US and North Korea (I’ll add in South Korea and China in a later post). What options does each side have?
The United States:
Nuclear strike on North Korea
Withdraw troops and normalize relations
North Korea:
Invasion of South Korea
Abandon nuclear program and submit to inspections
I went through a few iterations here. I originally wrote the US option “Nuclear strike” as “Pre-emptive strike”. I changed it to be more general. A nuclear strike could be pre-emptive, but it also could be in response to North Korea invading South Korea.
It’s pretty easy to make a chart of all these states:
If you treat each action that the belligerents can take as a binary variable (yes=1 or no=0), the states will have a natural ordering based on the binary sum of the actions taken and not taken. This specific ordering isn’t mandatory – you can use any ordering scheme you want – but I find it useful.
You may also notice that “Status quo” appears nowhere on this chart. That’s an interesting consequence of how actions are represented in the GMCR. Status quo is simply neither striking nor withdrawing for the US, or neither invading nor abandoning their nuclear program for North Korea. Adding an extra row for it would just result in us having to do more work in the next step, where we remove states that can’t exist.
I’ve colour coded some of the cells to help with this step. Removing nonsensical outcomes always requires a bit of judgement. Here we aren’t removing any outcomes that are highly dispreferred. We are supposed to restrict ourselves solely to removing outcomes that seem like they could never ever happen.
To that end, I’ve highlighted all cases where America withdraws troops and strikes North Korea. I’m interpreting “withdraw” here to mean more than just withdrawing troops – I think it would mean that the US would be withdrawing all forms of protection to South Korea. Given that, it wouldn’t make sense for the US to get involved in a nuclear war with North Korea while all the while loudly proclaiming that they don’t care what happens on the Korean peninsula. Not even Nixon’s “madman” diplomacy could encompass that.
On the other hand, I don’t think it’s necessarily impossible for North Korea to give up its nuclear weapons program and invade South Korea. There are a number of gambits where this might make sense – for example, it might believe that if they attacked South Korea after renouncing nuclear weapons, China might back them or the US would be unable to respond with nuclear missiles. Ultimately, I think these should be left in.
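The enumeration and pruning can be reproduced mechanically. Here’s a sketch with the option names abbreviated; the only pruning rule is the strike-plus-withdrawal one argued above:

```python
from itertools import product

# The four options, one binary variable each.
options = [
    "US: nuclear strike",
    "US: withdraw troops",
    "NK: invade South Korea",
    "NK: abandon nuclear program",
]

# Every yes (1) / no (0) combination across the four options; the binary
# sum of each tuple gives the natural state numbering.
all_states = list(product([0, 1], repeat=len(options)))

def feasible(state):
    strike, withdraw, invade, abandon = state
    # Remove the nonsensical combinations: the US striking North Korea
    # while also withdrawing from the peninsula.
    return not (strike and withdraw)

states = [s for s in all_states if feasible(s)]
print(len(all_states), len(states))  # 16 raw combinations, 12 feasible
```

Four of the sixteen raw combinations pair a strike with a withdrawal, which is why twelve states survive.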
Here’s the revised state-space, with the twelve remaining states:
The next step is to figure out how each decision maker prioritizes the states. I’ve found it’s helpful at this point to tag each state with a short plain language explanation.
Nuclear strike by the US, NK keeps nuclear weapons
Unilateral US troop withdrawal
North Korean invasion with only conventional US responses
North Korean invasion with US nuclear strike
US withdrawal and North Korean Invasion
Unilateral North Korean abandonment of nuclear weapons
US strike and North Korean abandonment of nuclear weapons
Coordinated US withdrawal and NK abandonment of nuclear weapons
NK invasion after abandoning nuclear weapons; conventional US response
NK invasion after abandoning nuclear weapons; US nuclear strike
US withdrawal paired with NK nuclear weapons abandonment and invasion
While describing these, I’ve tried to avoid talking about causality. I didn’t describe s. 5 as “North Korean invasion in response to US nuclear strike” or “US nuclear strike in response to North Korean invasion”. Both of these are valid and would depend on which states preceded s. 5.
Looking at all of these states, here’s how I think both decision makers would order them (in order of most preferred to least preferred):
The US prefers North Korea give up its nuclear program and wants to keep protecting South Korea. Its secondary objective is to seem like a reasonable actor on the world stage – which means that it has some preference against using pre-emptive strikes or nuclear weapons on non-nuclear states.
North Korea wants to unify the Korean peninsula under its banner, protect itself against regime change, and end the sanctions its nuclear program has brought. Based on the Agreed Framework, I do think Korea would be willing to give up nuclear weapons in exchange for a normalization of relations with the US and sanctions relief.
Once we have preference vectors, we’ve modelled the problem. Now it’s time for stability analysis.
A state is stable for a player if it isn’t advantageous for the player to shift states. A state is globally stable if it is not advantageous for any player to shift states. When a player can move to a state they prefer over the current state without any input from their opponent, this is a “unilateral improvement” (UI).
There are a variety of ways we can define “advantageous”, which lead to various definitions of stability:
Nash Stability (R): Stable if the actor has no unilateral improvements. States that are Nash stable tend to be pretty bad; these include both sides attacking in a nuclear war or both prisoners defecting in the prisoner’s dilemma. Nash stability ignores the concept of risk; it will never move to a less preferred state in the hopes of making it to a more preferred state.
General Metarationality (GMR): Stable if the actor has no unilateral improvements that aren’t sanctioned by unilateral moves by others. This tends to lead to less confusing results than Nash stability; Cooperation in the prisoner’s dilemma is stable in General Metarationality. General Metarationality accepts the existence of risk, but refuses to take any.
Symmetric Metarationality (SMR): Stable if an actor has no unilateral improvements that aren’t sanctioned by opponents’ unilateral moves after it has a chance to respond to them. This is equivalent to GMR, but with a chance to respond. Here we start to see the capacity to take on some risk.
Sequential Stability (SEQ): Stable if the actor has no unilateral improvements that aren’t sanctioned by opponents’ unilateral improvements. This basically assumes fairly reasonable opponents, the type who won’t cut off their nose to spite their face. Your mileage may vary as to how appropriate this assumption is. Like SMR, this system takes on some risk.
Limited Move Stability (LS): A state is stable if after N moves and countermoves (with both sides acting optimally), there exists no improvement. This is obviously fairly risky as any assumptions you make about your opponents’ optimal actions may turn out to be wrong (or wishful thinking).
Non-myopic Stability (NM): Equivalent to LS with N set equal to infinity. This predicts stable states where there are no improvements after any amount of posturing and state changes, as long as both players act entirely optimally.
The two stability metrics most important to the GMCR (at least as I was taught it) are Nash Stability (denoted with r) and Sequential Stability (denoted with s). These have the advantage of being simple enough to calculate by hand while still explaining most real-world equilibria quite well.
To do stability analysis, you write out the preference vectors of both sides, along with any unilateral improvements that they can make. You then use this to decide the stability of each state for each player. If both players are stable at a state by any of the chosen stability metrics, the state overall is stable. A state can also be stable if both players have unilateral improvements from it that result in both ending up in a dispreferred state if taken simultaneously. This is called simultaneous sanctioning and is denoted with u.
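As a minimal worked example of this machinery, here’s a Nash stability check on the prisoner’s dilemma (mentioned above as a case where mutual defection is Nash stable). States are (player 1, player 2) action pairs, “C” for cooperate and “D” for defect:

```python
# Nash stability check on the prisoner's dilemma.
states = [("C", "C"), ("C", "D"), ("D", "C"), ("D", "D")]

# Preference rankings, most preferred first (standard prisoner's dilemma:
# exploit > mutual cooperation > mutual defection > be exploited).
prefs = {
    1: [("D", "C"), ("C", "C"), ("D", "D"), ("C", "D")],
    2: [("C", "D"), ("C", "C"), ("D", "D"), ("D", "C")],
}

def moves(player, state):
    """Unilateral moves: a player may flip only their own action."""
    flipped = "C" if state[player - 1] == "D" else "D"
    s = list(state)
    s[player - 1] = flipped
    return [tuple(s)]

def nash_stable(player, state):
    """Nash stable (r): the player has no unilateral improvement."""
    rank = prefs[player].index  # lower index = more preferred
    return all(rank(s) >= rank(state) for s in moves(player, state))

equilibria = [s for s in states if nash_stable(1, s) and nash_stable(2, s)]
print(equilibria)  # [('D', 'D')]: mutual defection is the only Nash-stable state
```

Extending this to the Korea model is mostly a matter of replacing the state list, the preference vectors, and the move function (including making some moves irreversible, as discussed below for nuclear strikes).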
The choice of stability metrics will determine which states are stable. If you only use Nash stability, you’ll get a different result than if you combine Sequential Stability and Nash Stability.
Here’s the stability analysis for this conflict (using Nash Stability and Sequential Stability):
Before talking about the outcome, I want to mention a few things.
Look at s. 9 for the US. They prefer s. 8 to s. 9 and the two differ only on a US move. Despite this, s. 8 isn’t a unilateral improvement over s. 9 for the US. This system is called the Graph Model of Conflict Resolution for a reason. States can be viewed as nodes on a directed graph, which implies that some nodes may not have a connection. Or, to put it in simpler terms, some actions can’t be taken back. Once the US has launched a nuclear strike, it cannot un-launch it.
This holds less true for abandoning a nuclear program or withdrawing troops; both of those are fairly easy to undo (as we found out after the collapse of the Agreed Framework). Invasions on the other hand are in a tricky category. They’re somewhat reversible (you can stop and pull out), but the consequences linger. Ultimately I’ll call them reversible, but note that this is debatable and the analysis could change if you change this assumption.
In a perfect world, I’d go through this exercise four or five different times, each time with different assumptions about preferences or the reversibility of certain states or with different stability metrics and see how each factor changes the results. My next blog post will go through this in detail.
The other thing to note here is the existence of simultaneous sanctioning. Both sides have a UI from s. 4; NK to s. 0 and the US to s. 5. Unfortunately, if you take these together, you get s. 1, which both sides disprefer to s. 4. This means that once a war starts the US will be hesitant to launch a nuclear strike and North Korea would be hesitant to withdraw – in case they withdrew just as a strike happened. In reality, we get around double binds like this with negotiated truces – or unilateral ultimatums (e.g. “withdraw by 08:00 tomorrow or we will use nuclear weapons”).
There are four stable equilibria in this conflict:
The status quo
A coordinated US withdrawal of troops (but not a complete withdrawal of US interest) and North Korean renouncement of nuclear weapons
All out conventional war on the Korean Peninsula
All out nuclear war on the Korean Peninsula
I don’t think these equilibria are particularly controversial. The status quo has held for a long time, which would be impossible if it wasn’t a stable equilibrium. Meanwhile, s. 10 looks kind of similar to the Iran deal, with the US removing sanctions and doing some amount of normalization in exchange for the end of Iran’s nuclear program. State 5 is the worst-case scenario that we all know is possible.
Because we’re currently in a stable state, it seems unlikely that we’ll shift to one of the other states that could exist. In actuality, there are a few ways this could happen. A third party could intervene with its own preference vectors and shake up the equilibrium. For example, China could use the threat of economic sanctions (or the threat of ending economic sanctions) to try and get North Korea and the US to come to a détente. There also could be an error in judgement on the part of one of the parties. A false alarm could quickly turn into a very real conflict. It’s also possible that one party could mistake the other’s preferences, leading them to take a course of action that they incorrectly believe isn’t sanctioned.
In future posts, I plan to show how these can all be taken into account, using the GMCR framework for Third Party Intervention and Coalitional Analysis, Strength of Preferences, and Hypergame Analysis.
Even without those additions, the GMCR is a powerful tool. I encourage you to try it out for other conflicts and see what the results are. I certainly found that the best way to really understand it was to run it a few times.
Note: I know it’s hard to play around with the charts when they’re embedded as images. You can see copyable versions of them here.
“We knew the world would not be the same. A few people laughed, a few people cried, most people were silent. I remembered the line from the Hindu scripture, the Bhagavad-Gita… ‘Now, I am become Death, the destroyer of worlds.'” – J. Robert Oppenheimer, on the reaction to the successful test of the first atomic bomb.
Because I keep talking about it piecemeal with various people and wanted to collect everything I’ve said in one place. Because some people are more scared than they need to be and some people are more blasé than they really should be. Because I care about elevating the level of the discourse (which is often really poor). Because I’m scared that people might actually endorse some of the really terrible proposed solutions to this crisis and I want them to understand why they won’t work.
The real experts are currently busy briefing politicians and making clipped statements to the media. Therefore, it falls to verbose hobbyists like myself to try and make sense of every cryptic utterance and disseminate some of what the experts are saying more widely.
1.3 Why does North Korea have a nuclear program anyway?
There are a lot of theories here. I’m going to walk you through my favourite. See these men?
Pictured: Muammar Gaddafi and Saddam Hussein. Images courtesy of Wikipedia Commons.
Both of those men once ran countries. Now they’re deposed and dead. The common factor? America. Call it imperialism. Call it empire building. Call it promoting democracy or protecting freedom. Call it exacting justice on two terrible butchers. From one perspective or another, all of those are the truth. What matters to North Korea is that these men tangled with America, they didn’t have nuclear weapons, and now they’re dead.
As far as I know (and the bloody purges at the start of his reign probably attest to this), Kim Jong-un doesn’t want to die. If he has a nuclear deterrent, he might fancy himself safe from any American led attempts at regime change and/or ending his horrific prison camp system.
North Korean state media has made this reasoning explicit: “The Saddam Hussein regime in Iraq and the Gaddafi regime in Libya could not escape the fate of destruction after being deprived of their foundations for nuclear development and giving up nuclear programmes of their own accord.”
2.1 What should I know about nuclear weapons to understand this crisis?
It can be helpful to understand a bit about how nuclear weapons work before reading about using them. Here’s a very quick and slightly simplified rundown.
Nuclear weapons liberate energy from the nuclei of atoms. These can’t just be any atoms. You need the right version of the right atom to get a nuclear reaction. The ones relevant here are deuterium and tritium (forms of hydrogen with additional neutrons), plutonium-239 (commonly called “weapon grade plutonium”) and uranium-235 (“highly enriched uranium”).
There are two types of atomic reactions used in nuclear bombs. In fission weapons, plutonium or uranium atoms are split apart by the energy of a free neutron. This releases more neutrons, setting in motion an unstoppable chain reaction (one that runs until the energy released blows the fuel apart). The reaction is started by creating a critical mass. Weapon grade plutonium and highly enriched uranium are inherently unstable; at any given moment, a small number of atoms of either will be breaking apart, releasing neutrons. Get a large amount of either in one place (or compress an existing sample with explosives) and you’ll have enough neutrons to start the reaction.
Fusion is the opposite. In fusion, you slam two atoms together so hard that they merge. In fusion weapons, the fuel is a mix of deuterium and tritium (or a molecule called lithium deuteride, that turns into deuterium and tritium when exposed to neutrons). When you push these together hard enough, you get helium, energy, and a very, very energetic neutron. This neutron can then start fission reactions. In many thermonuclear weapons the true destructive power comes after these neutrons hit a very large outer shell of uranium, which then fissions very violently.
Fusion weapons are often called hydrogen bombs, because isotopes of hydrogen are used in them, or thermonuclear weapons, because high temperatures (among other things) are used to initiate the process of fusion. Not all bombs that use fusion are as destructive as “true” thermonuclear weapons (i.e. the things experts normally mean when they say “thermonuclear weapons”). It is possible to put a bit of deuterium and tritium into an “ordinary” fission bomb in order to generate some extra neutrons from fusion and speed up the chain reaction. This allows for more of the fuel to be used before it scatters itself around the landscape and increases the yield of the bomb.
Yields are commonly measured in kilotons (kt; equivalent to 1000 tons of TNT) or megatons (Mt; equivalent to 1,000,000 tons of TNT). A kiloton bomb is enough to do serious damage to a large city. A megaton bomb will utterly devastate it. Yields vary widely with design, but in general you’d expect a simple fission weapon to yield somewhere between 5 and 50 kilotons; a boosted weapon would normally yield between 25 and 150 kt; a fusion weapon can yield anywhere from 50 kilotons to 50 megatons. These ranges are just guidelines and have to do more with what is an efficient use of nuclear materials than anything else; you could make a one megaton boosted fission bomb (although that actually is the upper limit on what you can do without multi-stage fusion), but this would be very wasteful compared to creating a similarly destructive thermonuclear weapon.
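One useful rule of thumb for making sense of these ranges: blast-damage radii grow roughly with the cube root of yield, so a bomb with 30 times the yield does not have anywhere near 30 times the damage radius. A quick sketch (cube-root scaling is a standard approximation, not a detailed weapons-effects model):

```python
# Blast-damage radius grows roughly as yield^(1/3). This is a standard
# rule of thumb, not a detailed weapons-effects model.

def radius_ratio(yield_a_kt, yield_b_kt):
    """How many times larger bomb A's blast-damage radius is than
    bomb B's, under cube-root scaling."""
    return (yield_a_kt / yield_b_kt) ** (1 / 3)

# 150 kt vs a simple 5 kt fission weapon: 30x the yield...
print(round(radius_ratio(150, 5), 2))      # → 3.11 (about 3x the radius)

# 50 Mt vs 50 kt: 1000x the yield, only about 10x the radius
print(round(radius_ratio(50_000, 50), 1))  # → 10.0
```

This diminishing return is part of why several moderate warheads are usually a more efficient use of fissile material than one enormous bomb.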
Having a high yield in a small package is very important for miniaturization, the process of making a functioning atomic bomb small enough for delivery on a missile. When it comes to missiles, the smaller (and lighter) the warhead, the better. A lighter warhead allows a missile to travel further, a key requirement for countries like North Korea or America, with very distant adversaries.
North Korea successfully tested a missile in July with a range of 10,000 km (6,210 miles). This range is enough to reach the continental US and classify the missile as an Intercontinental Ballistic Missile (ICBM). In addition, a missile tested in 2016 had a range of 12,000 km (7,450 miles).
The United States has successfully shot down mock intermediate- and medium-range ballistic missiles (IRBM/MRBM) in tests of its Terminal High Altitude Area Defense (THAAD) anti-missile system and mock ICBMs with its Ground-based Midcourse Defense (GMD) anti-missile system.
North Korea claims that their nuclear weapons (including this latest one) are small enough to be mounted on their missiles (i.e. successfully miniaturized). Leaked intelligence suggests some of their earlier bombs are, but it’s unclear if that applies to this latest one as well.
It is unknown if US ground missile defense systems could successfully intercept an ICBM aimed at the continental United States or IRBM/MRBM aimed at US possessions or allies closer to North Korea (e.g. Hawaii, Guam, Japan, South Korea).
2.3 What are your best guesses for what we don’t know?
Oh my. Please remember that these are guesses.
2.3.1 Is this weapon fusion or boosted fission?
We won’t know for sure if the weapon the North Koreans detonated was “merely” a boosted fission bomb or a multistage fusion bomb until isotope analysis is completed (and even then, the results could be inconclusive or unreleased). I’m unwilling to hazard a guess here because I can make a plausible case either way. On one hand, boosted fission seems likely because it’s much easier than staged thermonuclear weapons. On the other, the North Koreans previously claimed to have detonated a thermonuclear bomb that clearly fizzled (if it indeed had a fusion stage). It doesn’t seem impossible that this failed test gave them the information necessary to make a successful multi-stage thermonuclear weapon.
I previously mentioned that testing would be necessary before any country could hope to reliably deploy multi-stage thermonuclear weapons. This is because there are a lot of unknowns in these weapons and it is hard to get them right. It’s much less surprising to see a country get their staged thermonuclear bomb right on the second try than it would be had they done it on their first.
There’s one final possibility, although it seems less likely. North Korea could have resurrected the old Sloika (layer-cake) nuclear weapon design. This is technically a thermonuclear weapon, but it requires a disproportionately large mass of high explosives (compared to its power) to work and lacks many of the desirable properties of the more conventional (staged) Teller-Ulam design (like the ability to chain as many additional stages as you’d like). The Sloika is currently regarded as a dead end in weapon development, but if the North wanted an impressive explosion to scare off the Americans and didn’t have any intent to ever put it on a rocket, it might be a good choice for them.
2.3.2 Is this weapon miniaturized?
I don’t know. I want to believe that they haven’t successfully miniaturized this device (and that Kim Jong-un is posing with a fake in this picture). The first successful detonation of a multi-stage thermonuclear weapon required an 82-ton facility (the Soviets mocked it as a “thermonuclear installation”). I find it hard to believe that in less than a year, North Korea could go from miniaturizing fission weapons to miniaturizing thermonuclear weapons, but it is possible that they have.
The recently released picture of Kim Jong-un with a “nuclear weapon” is certainly supposed to evoke a miniaturized multi-stage weapon. The distinct double-humped shape (compare it to the single sphere of last year’s “disco ball of death”) suggests that there are two separate stages.
But this is a propaganda shot. Literally anything could be inside the enclosure in the pictures North Korea released (I actually think fissile material is the least likely thing to be in there, just based on how close Jong-un is to the thing; which isn’t to say that it couldn’t be identical in appearance to their actual weapons). It could be a true representation of their latest nuclear weapon designs, or it could be filled with lead. No one but Jong-un, his propagandists, and his senior subordinates knows for sure.
Last year, North Korea claimed that a 10kt detonation was the successful test of a thermonuclear weapon capable of destroying the entire United States. We can’t trust official pronouncements about their nuclear weapons program. We can only trust the scarce scraps of hard evidence they leave.
So, in this case, I think we’re going to have to wait for more US intelligence leaks before we know either way.
2.3.3 Does the heat shield work?
It might depend on the payload. Dr. John Schilling, writing for 38 North (a North Korea focused blog run by Johns Hopkins), believes that the heat shield failed in one of the two ICBM tests this summer. He thinks that North Korea has successfully tested a heat shield that will work with very light payloads, but has been unsuccessful in building one suitable for heavier payloads (such a heat shield would need to be rather light itself).
Depending on the mass of North Korea’s miniaturized bombs, they might have a heat shield suitable for striking targets on America’s east coast, or they might not be able to reach even that far. It does seem likely that they can reach Hawaii or Alaska with their current proven heat shield design.
North Korea has every incentive to play down the mass of their weapons and play up the strength of their heat shield, which is what makes determining the likelihood they can successfully strike America so challenging.
2.3.4 Can THAAD and GMD defend America (and its allies)?
THAAD and GMD have both succeeded in their last few tests, but it’s unclear how closely these tests mimic reality. Unfortunately, success is relatively new for the GMD system. Previously, it failed about as often as it succeeded. Real missiles will probably be even harder to successfully intercept than the dummies it’s been tested on.
THAAD has been fairly reliable, at least in its last few tests. But it is currently only deployed to protect a few US bases in Korea. Seoul is not within its range and even if it was, THAAD wouldn’t be able to protect the South Korean capital (and its millions of inhabitants) from the conventional artillery aimed at it by North Korea. There are also THAAD launchers in Guam, Hawaii, and Alaska, giving those territories some modicum of protection.
I honestly don’t know what probability to assign to these systems successfully intercepting a North Korean missile. I think THAAD is more likely to succeed than GMD, but I have no hard numbers to put on either.
North Korea’s nuclear program has existed for more than three decades. But for many people, the latest tests are the first time they’ve really sat up and taken notice. To a certain extent, this makes sense. Before Kim Jong-un took over from his father, there had only been two nuclear tests and both of them were of fairly small bombs (the first was under 2kt, the second under 5kt).
If this is the first you’re seriously hearing about the crisis, it can help to get some of the historical context.
3.1 How expensive has the program been?
That’s a hard question to answer. The total direct cost is possibly between $1.1 billion and $3.2 billion, but it’s really hard to put hard numbers on anything that goes on in North Korea.
In addition to whatever North Korea has actually paid for its program, there are the indirect costs. The program has led to international sanctions, the latest round of which will cost North Korea something like a billion dollars in exports. That doesn’t necessarily mean that their economy will shrink by a billion dollars though. The economic capacity that was consumed by the exports will still exist, but it will have to be used less efficiently (and may suffer from shortages of raw materials purchased with those exports). It will become harder for North Korea to acquire anything that it itself cannot produce and it will become less able to import food in the event of a famine or poor harvest. Those are both costly.
There’s also the opportunity cost. North Korea is incredibly impoverished, such that $1-3 billion dollars represents 3.5% to 10.5% of its entire yearly economic output. Had this been invested in a more economically useful fashion (e.g. in manufacturing or mining) North Korea would probably have a higher GDP. The opportunity cost of using this money in such a wasteful way cannot help but compound – that is to say the gap between what is and what could have been will only grow larger.
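The compounding claim is easy to make concrete. In the sketch below, every number is an assumption for illustration: a $2 billion direct cost and a 3% annual real return had that money been productively invested.

```python
# Compounding opportunity cost, with assumed numbers: $2B spent on the
# program vs. the same $2B invested at a 3% annual real return.

def foregone_value(principal_billions, annual_return, years):
    """What the money would have grown to if invested productively."""
    return principal_billions * (1 + annual_return) ** years

for years in (10, 20, 30):
    grown = foregone_value(2.0, 0.03, years)
    print(f"after {years} years: ${grown:.2f}B foregone")
# → after 10 years: $2.69B foregone
# → after 20 years: $3.61B foregone
# → after 30 years: $4.85B foregone
```

The gap between the two trajectories only widens with time, which is all the compounding claim amounts to.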
Here, I think a qualitative answer is best. The nuclear program has been incredibly expensive, but also – given that it is an excellent shield against regime change – worth it, at least from the perspective of Kim Jong-un.
3.2 Okay, but it’s cheap compared to the $61.3 billion the US spent on nuclear weapons in 2011. How can they get so much with so little?
I can think of two reasons for the discrepancy. First, unlike the Manhattan Project, North Korea didn’t have to create nuclear weapons from scratch. When the Manhattan Project started, nuclear weapons really were just a theoretical pipe dream. By demonstrating that nuclear weapons were possible, the Manhattan Project removed the theoretical question entirely.
But the Manhattan Project helped in ways beyond just demonstrating the technology was possible. Many other nuclear programs got help directly or indirectly from Manhattan Project scientists. Even the Soviet Union relied on the Manhattan Project to jump start their own nuclear weapons program (via the spy Klaus Fuchs, among others).
Of the nuclear powers, only America and India completed their nuclear programs without outside assistance, spies in other nuclear programs, or researcher exchanges. South Africa received assistance from Israel (and possibly France). Israel got assistance from France. France and the UK had scientists participate in the Manhattan Project. China got assistance from the USSR. The USSR conducted the aforementioned spying on the Manhattan Project. Pakistan received assistance from China (and possibly the United States) and in turn provided assistance to North Korea.
The other reason for the cheap price tag is domestic. In America, the government cannot force scientists or labourers to work on atomic weapons and must pay a wage commensurate with each employee’s skills. The American government cannot force someone who finds atomic weapons distasteful to work on them against their will. For example, Joseph Rotblat was able to leave the Manhattan Project, even in the middle of all the paranoia stirred up by World War II.
North Koreans have none of that luxury. They work for whatever pittance the government chooses to give them and are executed or sent to prison camps if they refuse. There is no room for conscientious objectors or for negotiating on salary. Put plainly, the North Korean nuclear program is much cheaper than other nuclear programs because it is underlain with slavery and coercion.
3.3 How did things get so bad?
To rip off one of my favourite authors, “slowly, then all at once”.
There was an agreement to denuclearize North Korea signed by Clinton and Kim Jong-Il in 1994, when the North first began to make progress on its nuclear program. This agreement would have provided the North with proliferation-resistant nuclear power plants and free oil as those new power plants were constructed, as well as eventual sanctions relief and normalization of relations with the United States and South Korea. In return for this, North Korea agreed to remain bound by the Non-Proliferation Treaty (NPT) and submit to monitoring of its nuclear sites.
But this wasn’t a fully binding treaty and Congress never secured the funds (it was signed right before the first midterm election of Clinton’s presidency, in which Republicans took back the House). Delays repeatedly occurred on the American side and I’m not sure that the North Koreans ever fully suspended their nuclear program. No normalization of relations occurred, no sanctions were lifted, and George W. Bush eventually cancelled the agreement. North Korea soon announced that they were again developing nuclear weapons.
The nuclear program rapidly accelerated after Kim Jong-Il’s death in 2011. I’m of two minds about this. I’ve seen people claim that Jong-un has poured resources into the program to help prop up his standing internally, which certainly seems in keeping with his self-preservation instinct. But I also wonder if this could just be the natural result of North Korean scientists becoming more experienced and proficient in nuclear weapons production.
Either way, there have been four nuclear tests since Jong-un took power, three of them since 2016. The rapidity of these recent tests, their pairing with tests of missiles, and Trump’s bellicose response have combined to make the stand-off feel much more dire than it has at any other point in my life.
3.4 How many nuclear weapons does North Korea have? How does this compare to the US?
North Korea’s nuclear warhead count is unclear, but estimates range from 12 to 60.
There’s a big difference between prepared warheads, unassembled potential warheads in storage, and fissile material that can be used in warheads. When people estimate the number of warheads, they’re normally estimating the fissile material that the North Koreans could possess, probably assuming it’s all eventually going to active warheads. This assumption could be wrong if something other than fissile material – maybe highly technical bomb components? – is actually the limiting factor in North Korean warhead production.
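To make the estimation method concrete: a fissile-material-based count just divides an estimated stockpile by the material needed per weapon. Every number in this sketch is a hypothetical placeholder, not an actual intelligence estimate.

```python
# Crude warhead-count estimation from fissile material. All stockpile
# and per-weapon figures here are hypothetical placeholders.

PU_PER_WARHEAD_KG = 6    # assumed weapon-grade plutonium per device
HEU_PER_WARHEAD_KG = 20  # assumed highly enriched uranium per device

def warhead_estimate(pu_kg, heu_kg):
    """Assumes all fissile material eventually goes to active warheads,
    the very assumption the text flags as possibly wrong."""
    return pu_kg // PU_PER_WARHEAD_KG + heu_kg // HEU_PER_WARHEAD_KG

print(warhead_estimate(30, 200))  # → 15 (hypothetical low-end stockpile)
print(warhead_estimate(60, 600))  # → 40 (hypothetical high-end stockpile)
```

If something other than fissile material is the real bottleneck, every number a method like this produces is an overestimate.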
The US has 1,550 active warheads. These are the warheads that could be quickly deployed. The rest of its stockpile is in various states of readiness. I think some of them could be used relatively quickly (i.e. in a day or two), while others could be used only after a significant amount of refurbishment or preparation.
If North Korea has many active warheads (e.g. 60), an American first strike becomes impractical. It would be very hard to guarantee that all of them were destroyed (thereby preventing retaliatory strikes against the US or US troops in South Korea). Inactive nuclear weapons would still present a threat in the aftermath of a successful first strike, but it’s a threat that can be mitigated by sufficient damage to the chain of command or the logistic structure of the North Korean army.
Likewise, raw fissile material can be mostly neutralized as a threat by eliminating the state infrastructure necessary to turn it into finished warheads (it could still be used to create dirty bombs, but these are far less of a threat than nuclear warheads). It takes labour and speciality components to turn enriched fissile material into a reliable and functional weapon, prerequisites that are difficult to fulfill if the state that normally supplies them has collapsed.
I should also mention that very few (if any) of North Korea’s active warheads will be similar to the most recent test detonation. Many of their weapons will be relatively weak pure fission devices (similar in strength to their previous nuclear tests). Now that they have a warhead capable of ~150kt yields, they’ll certainly try to ramp up production of it (assuming that it’s at all practically useful and doesn’t weigh several tonnes), but that will take time.
Some experts seem to think that North Korea has much more access to enriched uranium than plutonium. This will further slow down their ability to build new weapons in the ~150kt range, at least if they want those weapons to be miniaturized.
3.5 How bad would it be if North Korea used nuclear weapons?
The latest North Korean weapon would (if it actually had a yield of 150kt and these casualty estimates are accurate) kill almost 300,000 people in LA, 270,000 people in SF, about 550,000 people in Tokyo, or 490,000 people in Seoul. If you want to get a sense of the destruction, you can play around with it on NukeMap. For cities on the US West Coast or in Asia and Europe, use a ~150kt bomb. For the East Coast, a 5-20kt bomb is probably more realistic (if one can be delivered at all).
The danger is greatest for South Korea and Japan. Their cities are much denser (so nuclear weapons are more devastating) and much closer to North Korea (making it easy for the North Koreans to deliver larger warheads on missiles). There is also less in the way of missile defenses protecting major Asian cities, making bombs aimed at them much more likely to succeed.
That said, if North Korea ever used nuclear weapons, the greatest loss of life would be inside North Korea.
Each Ohio class submarine can carry several times as many warheads as North Korea possesses. One Ohio-class submarine with a full complement of warheads has almost the same nuclear arsenal as France.
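The submarine comparison is easy to sanity-check. The figures below are rounded public approximations and assumptions: 24 missile tubes per Ohio-class boat, up to roughly 12 warheads per Trident II at full loading (actual deployed loadouts are treaty-limited and much lower), about 300 warheads for France, and the 12-60 North Korean estimate from above.

```python
# Back-of-the-envelope check on the Ohio-class comparison. All figures
# are rounded public approximations and assumptions.

TUBES_PER_BOAT = 24        # missile tubes on an Ohio-class submarine
WARHEADS_PER_MISSILE = 12  # assumed maximum Trident II loading
FRANCE_ARSENAL = 300       # approximate French warhead count
NK_HIGH_ESTIMATE = 60      # high end of the 12-60 range above

full_complement = TUBES_PER_BOAT * WARHEADS_PER_MISSILE
print(full_complement)                      # → 288, close to France's ~300
print(full_complement // NK_HIGH_ESTIMATE)  # → 4x NK's high-end estimate
```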
If an Ohio class submarine were to unleash its payload on North Korea, the country would cease to exist in any meaningful way. Every single major population centre would be irrevocably devastated. It would be destruction unlike anything the world has ever seen. It would make Hiroshima and Nagasaki look like child’s play. It would be the scourging of an entire country with nuclear hellfire.
Trump’s speech, where he promised “fire and fury like the world has never seen”, wasn’t hyperbole. It was a statement of fact. A single US nuclear ballistic missile submarine could easily make good on his threats. A single US nuclear-tipped missile could make good on his threats.
(There are 14 Ohio class submarines, by the way.)
3.5.1 I’ve heard that nuclear weapons cause an electromagnetic pulse (EMP). How much damage could North Korea do with this?
Like most questions about nuclear weapon damage, this depends on several factors.
First, there’s a common misconception that a normal anti-material nuclear detonation (e.g. one within a few kilometers of the ground) creates an EMP effect that can do widespread damage. This is technically true – there is a large EMP effect – but practically irrelevant because the electromagnetic pulse will only really affect areas already ravaged by the bomb. Absent the other effects, it certainly would do significant damage, but it’s hard to think of a case where the most damage to a city attacked by a nuclear weapon will come from the EMP.
The strength of this electromagnetic pulse depends on the type of bomb, its altitude, and the local strength of the magnetic field (the stronger the field, the stronger the EMP). The ideal nuclear weapon for producing EMP effects is a single stage weapon that produces a greater-than-average portion of its energy output in the form of gamma radiation and does this as quickly as possible.
I don’t think North Korea has the resources to invest in optimising for EMP effects. Development would probably require tests, which themselves require an expenditure of the government’s limited stockpile of fissile material. Since cost-effective and material-effective EMP weapons are normally single stage, North Korea would risk weakening their deterrent posture if they conducted these tests (to the US listening in with seismographs, it would look like they had regressed in their program and were failing to achieve fusion).
It also appears that most electronics, especially unplugged electronics, would survive an EMP almost entirely unscathed. Computers, phones, and cars would largely be undamaged, but power lines would be heavily affected. This would be bad, but also probably not irrecoverable. A bunch of things would have to go horribly wrong for an EMP attack on America to cause more casualties than a thermonuclear attack on a large city. For this reason, I suspect North Korea would favour attacking population centres in any retaliatory second strike over high altitude EMP-producing bursts.
3.6 How do we get North Korea to give up its nuclear weapons program?
That is the most important question. President Trump likes to assert that China could get North Korea to stop. I once thought this was true, but I’ve abandoned that position as I’ve become better informed on the topic. If we give up on the idea that China can magically get North Korea to stop, it becomes difficult to conceptualize North Korea giving up its weapons program. We don’t have many examples of this occurring; the only one history gives us comes from South Africa, which was briefly a nuclear power but later gave up its weapons. The parallels – both were international pariahs who felt weapons were necessary against an encroaching threat – offer perhaps the only blueprint for the denuclearization of the Korean peninsula.
3.6.1 How come China can’t make North Korea stop?
China once saw North Korea as a buffer against American influence or aggression. North Korea was the fifth Chinese buffer zone – one of the client kingdoms that surround the Han heartlands of the state. To some extent, that’s still true. North Korea does provide a buffer between American allied South Korea and China. But at this point, North Korea is also a significant threat to China’s security.
The relationship between China and North Korea has significantly deteriorated since Kim Jong-un became leader. Jang Song Thaek – the uncle that Jong-un had executed – was one of the primary conduits for diplomacy between Pyongyang and Beijing. With his death, bilateral relations are largely stalled. Apparently, China hasn’t even been able to send an envoy to North Korea in more than a year.
Even before that though, mistrust characterized the relationship between Beijing and Pyongyang (on both sides). Kim Il-sung was almost executed by the Chinese communist party early in his life. Additional disputes arose between the two countries during the Korean war and many of them haven’t been resolved since. There were even border skirmishes between the two nations in the late 1960s (a fact I didn’t know until I began researching this section).
I don’t know why I didn’t realize this until I had it pointed out to me by 38north.org, but throughout history, client kingdom relationships have rarely been characterized by meek submission on the part of the client. If you want an example of a heavily dependent ally that America cannot effectively control, look no further than Benjamin Netanyahu of Israel. In addition to ignoring American requests to stop settlements, he resolutely opposed Obama, even crossing the normal red line of meddling in American domestic politics. Why should we expect China’s client states to behave any differently than America’s?
At this point, China seems to believe they’ve lost any ability to control North Korea. They responded to the latest North Korean missile test with the test of an anti-ballistic missile system of their own. The location of this system? Between North Korea and Beijing. This is not something allies do. This isn’t even something that disinterested parties do. Pakistan and the UK both have nuclear weapons, but the US has put no effort into building missile defenses against either of them. China fears and mistrusts North Korea more than the United States fears and mistrusts Pakistan (which is incidentally another excellent example of a rocky relationship between client and suzerain).
All of this means that a solution for the present crisis will not come only from Beijing. The engagement of Beijing is key to bringing North Korea to the table – we can’t accomplish anything without them – but we can no longer foist responsibility for North Korea onto China.
3.6.2 Why did South Africa end its nuclear weapons program?
In the 1970s South Africa was internationally isolated. It was banned from major sporting events and faced coordinated economic and military sanctions. It was fighting two separate guerilla wars and one conventional war. Thanks to intervention by Cuba and the USSR, (white) South Africans legitimately felt like they might soon be overrun by communists.
In this climate, they saw nuclear weapons as a salvation and a guarantee of independence. They could not use nuclear weapons to pacify their own people, but they thought that nuclear weapons might buy them breathing room and permanent protection from communism. For this, a token nuclear deterrent was enough – it’s unclear if their weapons were even usable, or if they intended to use the threat of them to prompt international aid if their borders were ever threatened.
There was good reason for the world to sanction South Africa. Its apartheid system was despicable. It conducted one of the largest forced removals of people in history. It had a government without any principled claim to legitimacy. It was at war with its neighbours and had banned all dissent from its black citizens.
Many in South Africa wanted to prop up the system indefinitely. Many knew they were complicit in a great evil, but they feared death if apartheid were ever to unravel.
Does any of this sound familiar? South Africa had the same foundational paranoia that North Korea’s Kim dynasty currently possesses.
Here’s what happened. The sanctions – especially the sports bans – took their toll, demoralizing white South Africans. The Soviet Union fell, ending communism as an existential threat. Demographics forced the government to realize that they could only fight the tide of history for so long. F.W. de Klerk negotiated peace with the Angolans, the Namibians, the Cubans, and the ANC. He secured immunity for the state actors that had propped up apartheid. Then he dismantled his country’s nuclear weapons, followed shortly by his government.
This, I think, is the blueprint we must follow for North Korea. We should follow it not because it’s particularly attractive, but because it is the only blueprint we have.
3.6.3 How could we convince North Korea to give up its nuclear weapons?
First, the Americans and North Koreans would have to accept the current Chinese proposal, which would see North Korea pause its nuclear program and the US cancel joint military exercises with South Korea. This is actually similar in principle to the trilateral treaty that ended the conflict in Namibia and Angola. As a result of that treaty South Africa withdrew its forces, Cuba did the same, and Namibia ran democratic elections.
If there’s any backsliding or reluctance at all on the part of North Korea, then we can use some of the sticks that were particularly effective against South Africa, especially the sports ban (which seriously demoralized white South Africans). North Korea is currently allowed to compete in both the Olympics and FIFA. That should change. For as long as nuclear tests continue, all North Korean athletes should be banned from international competition. The North Korean government cares a lot about its successes in athletics (seeing them as proof of the power of juche), so taking that away from them would be a potent psychological blow.
If an American suspension of military drills fails to bring North Korea to the table, America will have strengthened its position with China at the same time as North Korea presents yet another embarrassment to Beijing. This will make it easier to coordinate even more damaging sanctions on Pyongyang. If Jong-un continues on this path, he risks well and truly alienating China, which would severely cripple North Korea’s economy. I think at some point (e.g. if China gets pissed off enough that it threatens to stop guaranteeing North Korea against an attack), Kim Jong-un would have to blink and start bargaining with the powers arrayed against him.
There are two paths that can be followed once the North freezes its nuclear program and America abandons its military drills. In the first, we can go back to where we were in the 1990s, but this time do it right. I’m personally pessimistic that this can lead to long-term security, because totalitarian regimes and democracies can almost never co-exist, especially side by side. If North Korea remains under juche, some conflict with America will eventually escalate, ruin any existing deal, and lead to a renewal of weapons research. I’m not opposed to buying time (every day where North Korea and America aren’t on a hair trigger is a day where far fewer people are at risk of dying!), but I’d also like to see this conflict settled for good.
Hence, the second path. It starts off like the first, with the world steadily upping the pressure on Kim Jong-un. But here, instead of just making this about nuclear weapons, we make it personal and we offer him a personal escape from his current situation. A guaranteed life of ease may not be owning a country, but it competes favourably with being dead. The goal here would be to remove Jong-un and replace him with someone able to undertake the Korean equivalent of the Khrushchev Thaw or Deng Xiaoping’s reforms.
This would go hand in hand with the negotiations following the suspension of military drills and might involve the following:
America removes all of its troops from South Korea
Kim steps down as Supreme Leader. He and all of his cronies are guaranteed a state pension for as long as they live.
North Korea agrees to abandon its nuclear program and accedes to the NPT and (after verification of the program’s dismantling) the NSG.
A transitional government is put in place in North Korea. Realistically, this government will have to be heavily influenced by Beijing, but that shouldn’t rule out eventual re-unification.
I hate this plan. The only end that feels fitting for Kim Jong-un involves a firing squad.
A nuclear war between North Korea and America will (at a minimum) kill millions. Every day that tensions remain this high on the peninsula risks that eventuality. The current state of uneasy paranoia is unacceptably dangerous. Even a more stable stand-off, punctuated by brief periods of tension this bad, is too much of a risk.
North Koreans are not served by Kim Jong-un walking free and never facing justice. But they’re served even less by dying in a country turned into a conflagration.
I don’t know if this plan could work. I don’t know if there’s the political will. I don’t know if Trump or Jong-un can thread the needle, or walk the knife’s edge, or whatever metaphor you want to use for what would be an intensely difficult process. But I’m convinced that this plan, or something similar, is the only way we can permanently de-escalate tensions on the Korean peninsula and remove North Korean weapons of mass destruction.
That’s the other reason I wrote this FAQ. Because I want people to have all of the context for this crisis. I want you to understand the true scope of devastation that any military response to North Korea would entail. I want you to understand that China cannot control North Korea. I want you to understand that missile defense is cold comfort. I want you to understand that we have done this before and we can do it again but that it will be hard and unsatisfying.
If you’ve made it this far, I have a favour to ask of you. Check my work. Make sure what I’ve written is correct. If I’m wrong, help me to understand this crisis even better. And if it checks out, tell other people what we know. Spread it as far as you can. Tell your friends, your coworkers. Tell your parents, your children. Help people understand what we have to do.
 For illustrative purposes, note that this means 175,000 to 380,000 fatalities if detonated above downtown LA or 270,000 to 760,000 fatalities if detonated above downtown Tokyo. For more on yield, see my post on nuclear weapon effects. ^
 If it is following the limits outlined in the New START treaty with Russia. ^
 It requires much more in the way of conventional explosives to compress a uranium primary than a plutonium primary. Uranium has a higher critical mass than plutonium, which has the consequence of requiring a greater initial mass or greater compression before fission can be obtained. Either way, this requires more explosives to start the thing. My understanding is that multi-stage fusion bombs are never started with gun-type primaries, making implosion a necessity and eliminating one option for making uranium weapons more explosive-efficient. If you want to efficiently miniaturize a bomb, you need to bring along as little conventional explosives as possible. It’s this need that has driven technologies like boosted fission. ^
 For maximum casualties, use an airburst. To see fallout, use a surface burst. Airbursts are favoured against soft targets, like cities, ports, and military bases. Ground bursts are used against hardened targets, like nuclear silos or government bunker complexes.
In large nuclear weapons (and 150kt is large by any reasonable standard), most of the fatalities come from the shockwave and thermal radiation (as opposed to the central fireball or prompt radiation exposure). When a bomb is detonated closer to the ground, there’s much less of a shockwave and fewer people are exposed to dangerous thermal radiation, but some of the soil becomes radioactive and is dispersed as dangerous fallout. ^
 I don’t know this for sure, because it’s undisclosed. But I would bet several thousand dollars that one is there. ^
 Missiles with multiple warheads mount them on multiple independent re-entry vehicles, or MIRVs. I’ve seen this verbed, as in “those missiles were MIRVed with eight warheads each”. Each re-entry vehicle can pick an independent target (within some radius of the initial target) as it re-enters the atmosphere. Hence the name.
Technically, the Trident II missiles can carry 14 MIRVed warheads, but treaties limit them to 8. Both the US and Russia are allowed (by bilateral treaty) to have up to 288 nuclear tipped sub-launched ballistic missiles (SLBMs), with up to 1152 warheads carried on those missiles (this aggregate cap applies on top of the per-missile limit). ^
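A quick bit of arithmetic (mine, not from any treaty text) shows why the aggregate cap is the one that binds, using only the numbers quoted above:

```python
# Numbers quoted in the footnote above; the arithmetic is my illustration.
max_slbms = 288            # sub-launched ballistic missiles allowed per side
max_warheads_total = 1152  # warheads allowed across all of those missiles
max_per_missile = 8        # treaty limit per missile (physical capacity: 14)

# Average warheads per missile if the fleet is loaded to the aggregate cap:
avg_load = max_warheads_total / max_slbms
print(avg_load)  # 4.0 -- well below the 8-per-missile limit, so the
                 # aggregate cap, not the per-missile cap, binds in practice
```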
 Gamma rays cause electromagnetic pulses by ionizing electrons in the upper atmosphere. These electrons circle magnetic field lines, producing a large oscillating electric and magnetic field, as well as acting as a giant coordinated synchrotron array. The gamma rays emitted from these synchrotrons cause a second, longer lasting and less intense pulse that can nonetheless damage systems weakened by the first pulse. ^
Single stage weapons more efficiently produce EMPs (compared to multi-stage weapons) because the first stage of multi-stage weapons can pre-ionize the air before gamma rays from the second stage reach it. Once air is ionized, the EMP will likely induce an opposite direction current in it, which will cancel out some of the EMP effect.
When gamma rays are produced extremely quickly (here, “quickly” really means “with little gap between production of the first and production of the last”), there is little chance for this opposite current to reduce the strength of the pulse. ^
 The reason for this is almost always domestic. While it might be better for a country as a whole to reap the benefits of a close relationship with their protector, this is often hard for the leader of a country to pull off without appearing to be a foreign puppet (which is the sort of thing that leads to losing elections or dying in a coup, depending on how political systems are set up to transfer power). Seen this way, Kim Jong-un’s domestic paranoia is one of the driving forces of his estrangement from Beijing. See also The Iron Law of Institutions. ^
 This isn’t without precedent. During the Yom Kippur war, Israel assembled several nuclear weapons in plain view of US intelligence gathering assets. This is thought to have contributed (although it is unclear how much) to the subsequent American decision to re-supply Israel, replenishing its materiel losses from the early stages of the war. ^
 Offering an attractive escape is key. Ratcheting up the pressure without one just makes nuclear war more likely. We’re competing here with “90% chance I get to keep running my country, 10% chance I die horribly”, or the like. If we can’t make an offer that can attractively compete with this, we should avoid squeezing Kim too tightly, just in case he reacts (apocalyptically) poorly. ^
 When tensions are this high, accidents can easily start nuclear wars. Accidents happen. Let’s say (and I do not particularly believe these numbers are correct, but they are illustrative) you expect one accident a year and 30% of accidents cause a nuclear war. After five years, there is an 83% chance that a nuclear war will have broken out. It’s this small but consistent chance for a horrendous death toll that I so desperately want us to avoid. ^
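The footnote’s arithmetic checks out; with its (explicitly illustrative) numbers, the five-year risk is one minus the chance that every yearly accident fails to escalate:

```python
# Illustrative numbers from the footnote above, not real estimates.
p_escalate = 0.3       # chance a given accident causes a nuclear war
accidents_per_year = 1
years = 5

# Probability that at least one accident escalates over five years:
p_war = 1 - (1 - p_escalate) ** (accidents_per_year * years)
print(f"{p_war:.0%}")  # 83%
```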
I’ve been ranting to random people all week about how much I love the Westminster System of parliamentary government (most notably used in Canada, Australia, New Zealand, and the UK) and figured it was about time to write my rant down for broader consumption.
Here are three reasons why the Westminster System is so much better than the abominable hodgepodge Americans call a government and all the other dysfunctional presidential republics the world over.
1. The head of state and head of government are separate
And more importantly, the head of state is a figurehead.
The American president fills an odd dual role, both head of government (and therefore responsible for running the executive branch and implementing the policies of the government) and head of state (the face of the nation at home and abroad; the person who is supposed to serve as a symbol of national unity and moral authority). In Westminster democracies, these roles are split up. The Prime Minister serves as head of government and directs the executive branch, while the Queen (or her representative) serves as head of state. Insofar as the government is personified in anyone, it is personified in a non-partisan person with a circumscribed role.
This is an excellent protection against populism. There is no one person who can gather the mob to them and offer the solutions to all problems, because the office of the head of state is explicitly anti-populist. In Westminster governments, any attempt at crude populism on the part of the prime minister can be countered by messages of national unity from the head of state.
It’s also much easier to remove the head of government in the Westminster system. Unlike the president, the prime minister serves only while they have the confidence of parliament and their party. An unpopular prime minister can be easily replaced, as Australia seems happy to demonstrate over and over. A figure like Trump could not be prime minister if their parliamentarians did not like them.
This feature is at risk from open nominating contests and especially rules that don’t allow MPs to pick the interim leader during a leadership race. In this regard, Australia is doing a much better job at exemplifying the virtues of the Westminster system than Canada or the UK (where Corbyn’s vote share is all the more surprising for how much internal strife his election caused).
2. Governments serve only with the confidence of the house
To the Commonwealth, one of the most confusing features of American democracy is its (semi-)regular government shutdowns, like the one Trump had planned for September. On the other side, Americans are baffled by the seemingly random elections that Commonwealth countries have.
Her Majesty’s Prime Minister governs only so long as they have the confidence of the house. A government is only sworn in after they can prove they have confidence (via a vote of all newly elected and returning MPs). When no party has an absolute majority, things can get tense – or can go right back to the polls. We’ve observed two tense confidence votes this year, one in BC, the other in the UK.
In both these cases, no party had a clear majority of seats in the house (in Canada, we call this a minority government). In both BC and the UK, confidence was secured when a large party enlisted the help of a smaller party to provide “confidence and supply”. In this situation, the small party will vote with the government on budgets and other confidence motions, but is otherwise free to vote however they want.
The first vote of confidence isn’t the only one a government is likely to face. If the opposition thinks the government is doing a poor job, they can launch a vote of no confidence. If the motion passes, parliament is dissolved for an election.
But many bills are actually confidence motions in disguise. Budgets are the “supply” side of “confidence and supply”. Losing a budget vote – sometimes archaically called “failing to secure supply” – results in parliament being dissolved for an election. This is how Ontario’s last election was called. The governing party put forward a budget they were prepared to campaign on and the opposition voted it down.
This feature prevents government shutdowns. If the government can’t agree on a budget, it has to go to the people. If time is of the essence, the Queen or her representative may ask the party that torpedoed the budget to pass a non-partisan continuing funding resolution, good until just after the election to ensure the government continues to function (as happened in Australia in 1975).
By convention, votes on major legislative promises are also motions of confidence. This helps ensure that the priorities laid out during an election campaign don’t get dropped. In a minority government situation, the opposition must decide whether it is worth another election before vetoing any of the government’s key legislative proposals. Because of this, Commonwealth governments can be surprisingly functional even without a legislative majority.
Add all of this together and you get very accountable parties. Try and enact unpopular legislation with anything less than a majority government and you’ll probably find yourself shortly facing voters. On the flip side, obstruct popular legislation and you’ll also find yourself facing voters. Imagine how the last bit of Obama’s term would have been different if the GOP had to fight an election because of the government shutdown.
3. The upper house is totally different
Many Westminster countries have bicameral legislatures, with two chambers making up parliament (New Zealand is the notable exception here). In most Westminster system countries with two chambers, the relationship between the houses is different than that in America.
The two American chambers are essentially co-equal (although the senate gets to approve treaties and budgets must originate in the house). This is not so in the Westminster system. While on paper both chambers have largely equal powers (except that money bills must often originate in the lower chamber), in practice they are very different.
By convention (and occasionally legislation) the upper chamber has its power constrained. The actual restrictions vary from country to country, but in general they forbid rejecting bills for purely partisan reasons or they prevent the upper house from messing with the budget.
The goal of the upper house in the Westminster system is to take a longer view of legislation and protect the nation from short-sighted thinking. This role is more consultative than legislative; it’s not uncommon to see a bill vetoed once, then returned to the upper chamber and assented to (sometimes with token changes, sometimes even with no changes). The upper house isn’t there to ignore the will of the people (as embodied by the lower house), just to remind them to occasionally look longer term.
This sort of system helps prevent legislative gridlock. Since the upper house tends to serve longer terms (in Canada, for example, senators are appointed until age 75), there is often a different majority in the upper and lower chambers. If the upper chamber were free to veto anything they didn’t like (even if the reasons were purely partisan) then nothing would ever get done.
Taken together, these features of the Westminster system prevent legislative gridlock and produce legitimate outputs of the political process. This obviates populist “I’ll fix everything myself” leaders like Trump, who seem to be an almost inevitable outcome in a perpetually gridlocked and unnavigable system (i.e. the American government).
Among certain people in Canada, electoral and senate reform have become contentious topics. It’s my (unpopular in millennial circles) opinion that Canada has no need of electoral reform. Get a few beers in most proponents of electoral reform and you’ll quickly find that preventing all future Conservative majorities is a much more important goal for them than any abstract concept of “fairness”. I’m not of the opinion that we should change our electoral system just because a party we didn’t like won a majority government once in the last eight elections (or three times in the past ten elections and past fifteen elections).
Senate reform may have already been accomplished, with Prime Minister Trudeau’s move to appoint only non-partisan senators and dissolve the Liberal caucus in the senate. Time will tell if this new system survives his tenure as prime minister.
In one of the articles I linked above, Prof. Joseph Heath compares the utter futility Americans feel about changing their electoral system with the indifference most Canadians feel about changing theirs. In Canada, many proponents of electoral reform specifically wanted to avoid a plebiscite, because they understand that there currently exists no legitimacy crisis sufficient to overcome the status quo bias most people feel. Reform in Canada is certainly possible, but first the system needs to be broken. Right now, the Westminster system is working admirably.
 Israel took many cues from Westminster governments. Its president is non-partisan and ceremonial. If Canada were ever forced to give up the monarchy, I’d find this sort of presidential system acceptable. ^
 It’s hard to tell which is less populist: the oldest representative of one of the few remaining aristocracies, or (as in Israel, or with the governors-general of the former colonies) exceptional citizens chosen for their reliability and loyalty to the current political order. ^
Utilitarianism for and against is an interesting little book. It consists of back-to-back ~70-page essays, one in favour of utilitarianism and one opposed. As an overview, it’s hard to beat something like this. You don’t have to rely on one scholar to give you her (ostensibly fair and balanced) opinion; you get two articulate philosophers arguing their side as best they can. Fair and balanced is by necessity left as an exercise to the reader (honestly, it always is; here at least it’s explicit).
I’m going to cover the “for” side first. The “against” side will be in a later blog post. Both reviews are going to assume that you have some understanding of utilitarianism. If you don’t, go read my primer. Or be prepared to Google. I should also mention that I have no aspirations of being balanced myself. I’m a utilitarian; I had much more to disagree with on the “against” side than on the “for” side.
Professor J.J.C. Smart makes the arguments in favour of utilitarianism. According to his Wikipedia entry, he was known for “outsmarting” his opponents, that is to say, accepting the conclusions of their reductio ad absurdum arguments with nary a shrug. He was, I’ve gathered, not one for moral intuitions. His criticism of rule utilitarianism played a role in its decline and he was influential in raising the next crop of Australian utilitarians, among whom Peter Singer is counted. As near as I can tell, he was one of the more notable defenders of utilitarianism when this volume was published in 1971 (although much of his essay dates back a decade earlier).
Smart is emphatically not a rationalist (in the philosophical sense); he writes no “proof of utilitarianism” and denies that such a proof is even possible. Instead, Smart restricts himself to explaining how utilitarianism is an attractive ethical system for anyone possessed of general benevolence. Well, I’ll say “everyone”. The authors of this volume seem to be labouring under the delusion that only men have ethical dilemmas or the need for ethical systems. Neither one of them manages the ethicist’s coup of realizing that women might be viewed as full people at the remove of half a century from their time of writing (such a coup would perhaps have been strong evidence of the superiority of one philosophy over another).
A lot of Smart’s essay consists of showing how various different types of utilitarianism are all the same under the hood. I’ve termed these “collapses”, although “isomorphisms” might be a better term. There are six collapses in all.
The very first collapse put me to mind of the famous adage about ducks. If it walks like a duck, swims like a duck, and quacks like a duck, it is a duck. By the same token, if someone acts exactly how a utilitarian in their position and with their information would act, then it doesn’t matter if they are a utilitarian or not. From the point of view of an ethical system that cares only about consequences they may as well be.
The next collapse deals with rule utilitarianism and may have a lot to do with its philosophical collapse. Smart points out that if you are avoiding “rule worship”, then you will face a quandary when you could break a rule in such a way as to gain more utility. Rule utilitarians sometimes claim that you just need rules with lots of exceptions and special cases. Smart points out that if you carry this through to its logical conclusion, you really are only left with one rule, the meta-rule of “maximize expected utility”. In this way, rule utilitarianism collapses into act utilitarianism.
Next into the compactor is the difference between ideal and hedonic utilitarians. Briefly, ideal utilitarians hold that some states of mind are inherently valuable (in a utilitarian sense), even if they aren’t particularly pleasant from the inside. “Better Socrates dissatisfied than a fool satisfied” is the rallying cry of ideal utilitarians. Hedonic utilitarians have no terminal values beyond happiness; they would gladly let almost the entirety of the human race wirehead.
Smart claims that while these differences are philosophically large, they are practically much less meaningful. Here Smart introduces the idea of the fecundity of a pleasure. A doctor taking joy (or grim satisfaction) in saving a life is a much more fecund pleasure than a gambler’s excitement at a good throw, because it brings about greater joy once you take into account everyone around the actor. Many of the other pleasures (like writing or other intellectual pursuits) that ideal utilitarians value are similarly fecund. They either lead to abatement of suffering (the intellectual pursuits of scientists) or to many people’s pleasure (the labour of the poet). Taking into account fecundity, it was better for Smart to write this essay than to wirehead himself, because many other people – like me – get to enjoy his writing and have fun thinking over the thorny issues he raises.
Smart could have stood to examine at greater length just why ideal utilitarians value the things they do. I think there’s a decent case to be made that societies figure out ways to value certain (likely fecund) pleasures all on their own, no philosophers required. It is not, I think, that ideal utilitarians have stumbled onto certain higher pleasures that they should coax their societies into valuing. Instead, their societies have inculcated them with a set of valued activities, which, due to cultural evolution, happen to line up well with fecund pleasures. This is why it feels difficult to argue with the list of pleasures ideal utilitarians proffer; it’s not that they’ve stumbled onto deep philosophical truths via reason alone, it’s that we have the same inculcations they do.
Beyond simple fecundity though, there is the fact that the choice between Socrates dissatisfied and a fool satisfied rarely comes up. Smart has a great line about this:
But even the most avid television addict probably enjoys solving practical problems connected with his car, his furniture, or his garden. However unintellectual he might be, he would certainly resist the suggestion that he should, if it were possible, change places with a contented sheep, or even a happy and lively dog.
This boils down to: ‘ideal utilitarians assume they’re a lot better than everyone else, what with their “philosophical pursuits”, but most people don’t want purely mindless pleasures’. Combined, these ideas of fecundity and hidden depths point to a vanishingly small gap between ideal and hedonistic utilitarians, especially compared to the gap between utilitarians and practitioners of other ethical systems.
After dealing with questions of how highly we should weigh some pleasures, Smart turns to address the idea of some pleasures not counting at all. Take, for example, the pleasure that a sadist takes in torturing a victim. Should we count this pleasure in our utilitarian moral calculus? Smart says yes, for reasons that again boil down to “certain pleasures being viewed as bad are an artifact of culture; no pleasure is intrinsically bad.”
(Note however that this isn’t the same thing as Smart condoning the torture. He would say that the torture is wrong because the pleasure the sadist gains from it cannot make up for the distress of the victim. Given that no one has ever found a real live utility monster, this seems a safe position to take.)
In service of this, Smart presents a thought experiment. Imagine a barren universe inhabited by a single sentient being. This sentient being wrongly believes that there are many other inhabitants of the universe being gruesomely tortured and takes great pleasure in this thought. Would the universe be better if the being didn’t derive pleasure from her misapprehension?
The answer here for both Smart and me is no (although I suspect many might disagree with us). Smart reasons (almost tautologically) that since there is no one for this being to hurt, her predilection for torture can’t hurt anyone. We are rightfully wary of people who unselfconsciously enjoy the thought of innocents being tortured because of what it says about what their hobbies might be. But if they cannot hurt anyone, their obsession is literally harmless. This bleak world would not be better served by its single sentient inhabitant quailing at the thought of the imaginary torture.
Of course, there’s a wide gap between the inhabitant curled up in a ball mourning the torture she wrongly believes to be ongoing and her simple ambivalence to it. It seems plausible that many people could consider her ambivalence preferable, even if they did not wish her to be sad. But imagine then the difference being between her lonely and bored and her satisfied and happy (leaving aside for a moment the torture). It is clear here which is the better universe. Given a way to move from the universe with a single bored being to the one with a single fulfilled being, shouldn’t we take it, given that the shift most literally harms no one?
This brings us to the distinction between intrinsically bad pleasures and extrinsically bad pleasures – the flip side of the intrinsically more valuable states of mind of the ideal utilitarian. Intrinsically bad pleasures are pleasures that for some rationalist or metaphysical reason are just wrong. Their rightness or wrongness must of course be vulnerable to attacks on the underlying logic or theology, but I can hardly embark on a survey of common objections to all the common underpinnings; I haven’t the time. But many people have undertaken those critiques and many will in the future, making a belief in intrinsically bad pleasures a most unstable place to stand.
Extrinsically bad pleasures seem like a much safer proposition (and much more convenient to the utilitarian who wishes to keep their ethical system free of meta-physical or meta-ethical baggage). To say that a pleasure is extrinsically bad is simply to say that to enjoy it causes so much misery that it will practically never be moral to experience it. Similar to how I described ideal utilitarian values as heavily culturally influenced, I can’t help but feel that seeing some pleasures as intrinsically bad has to be the result of some cultural conditioning.
Certain pleasures, then, are not intrinsically good or ill; many pleasures thought of as intrinsically good or ill are thought so because of long cultural experience – positive or negative – with the consequences of seeking them out. If we can accept this, then the position of utilitarians who believe that some pleasures cannot be counted in the plus column should collapse to approximately the same as the position of those who hold that they can, even if neither accepts the position of the other. The utilitarian who refuses to believe in intrinsically bad pleasures should still condemn most of the same actions as one who does, because she knows that these pleasures will be outweighed by the pains they inflict on others (like the pain of the torture victim overwhelming the joy of the torturer).
There is a further advantage to holding that pleasures cannot be intrinsically wrong. If we accept the post-modernists’ adage that knowledge is created culturally, we will remember to be skeptical of the universality of our knowledge. That is to say, if you hold a list of intrinsically bad pleasures, it will probably not be an exhaustive list and there may be pleasures whose ill-effects you overlook because you are culturally conditioned to overlook them. A more thoughtful utilitarian who doesn’t take the short-cut of deeming some pleasures intrinsically bad can catch these consequences and correctly advocate against these ultimately wrong actions.
The penultimate collapse is perhaps the least well supported by arguments. In a scant page, Smart addresses the differences between total and average happiness in a most unsatisfactory fashion. He asks which of two universes you might prefer: one with one million happy, healthy people, or one with twice as many people, equally happy and healthy. Both Smart and I feel drawn to the larger universe, but he has no arguments for people who prefer the smaller. Smart skips over the difficulties here with an airy statement of “often the best way to increase the average happiness is to increase the total happiness and vice versa”.
I’m not entirely sure this statement is true. How would one go about proving it?
Certainly, average happiness seems to miss out on the (to me) obvious good that you’d get if you could have twice as many happy people (which is clearly one case where they give different answers), but like Smart, I have trouble coming up with a persuasive argument why that is obviously good.
I do have one important thing of my own to say about the difference between average and total happiness. When I imagine a world with more people who are on average less happy than the people that currently exist (but who collectively experience a greater total happiness), I feel an internal flinch.
Unfortunately for my moral intuitions, I feel the exact same flinch when I imagine a world with many fewer people, who are on average transcendentally happy. We can fiddle with the math to make this scenario come out to have greater average and total happiness than the current world. Doesn’t matter. Exact same flinch.
This leads me to believe that my moral intuitions have a strong status quo bias. The presence of a status quo bias in itself isn’t an argument for either total or average utilitarianism, but it is a reminder to be intensely skeptical of our response to thought experiments that involve changing the status quo and even to be wary of the order that options are presented in.
The final collapse Smart introduces is that between regular utilitarians and negative utilitarians. Negative utilitarians believe that only suffering is morally relevant and that the most important moral actions are those that have the consequence of reducing suffering. Smart points out that you can raise both the total and average happiness of a population by reducing suffering and furthermore that there is widespread agreement on what reduces suffering. So Smart expects utilitarians of all kinds (including negative) to primarily focus on reducing suffering anyway. Basically, despite the profound philosophical differences between regular and negative utilitarians, we should expect them to behave equivalently. Which, by the very first collapse (if it walks like a duck…), shows that we can treat them as philosophical equivalents, at least in the present world.
In my experience, this is more or less true. Many of the negative utilitarians I am aware of mainly exercise their ethics by donating 10% of their income to GiveWell’s most effective charities. The regular utilitarians… do the exact same. Quack.
As far as I can tell, Smart goes to all this work to show how many forms of utilitarianism collapse together so that he can present a system that isn’t at war with itself. Being able to portray utilitarianism as a simple, unified system (despite the many ways of doing it) heads off many simple criticisms.
While I doubt many people avoided utilitarianism because there are lingering questions about total versus average happiness, per se, these little things add up. Saying “yes, there are a bunch of little implementation details that aren’t agreed upon” is a bad start to an ethical system, unless you can immediately follow it up with “but here’s fifty pages of why that doesn’t matter and you can just do what comes naturally to you (under the aegis of utilitarianism)”.
Let’s talk a bit about what comes naturally to people outside the context of different forms of utilitarianism. No one, not even Smart, sits down and does utilitarian calculus before making every little decision. For most tasks, we can ignore the ethical considerations (e.g. there is broad, although probably not universal agreement that there aren’t hidden moral dimensions to opening a door). For some others, our instincts are good enough. Should you thank the woman at the grocery store checkout? You probably will automatically, without pausing to consider if it will increase the total (or average) happiness of the world.
Like in the case of thanking random service industry workers, there are a variety of cases where we actually have pretty good rules of thumb. These rules of thumb serve two purposes. First, they allow us to avoid spending all of our time contemplating if our actions are right or wrong, freeing us to actually act. Second, they protect us from doing bad things out of pettiness or venality. If you have a strong rule of thumb that violence is an inappropriate response to speech you disagree with, you’re less likely to talk yourself into punching an odious speaker in the face when confronted with them.
It’s obviously important to pick the right heuristics. You want to pick the ones that most often lead towards the right outcomes.
I say “heuristics” and “rules of thumb” because the thing about utilitarians and rules is that they always have to be prepared to break them. Rules exist for the common cases. Utilitarians have to be on guard for the uncommon cases, the ones where breaking a rule leads to greater good overall. Having a “don’t cause people to die” rule is all well and good. But you need to be prepared to break it if you can only stop mass death from a runaway trolley by pushing an appropriately sized person in front of it.
Smart seems to think that utilitarianism only comes up for deliberative actions, where you take the time to think about them, and that it shouldn’t necessarily cover your habits. This seems like an abdication to me. Shouldn’t a clever utilitarian, realizing that she only uses utilitarianism for big decisions, spend some time training her reflexes to more often give the correct utilitarian solution, while also training herself to be more careful of her rules of thumb and to think ethically more often? Smart gave no indication that he thinks this is the case.
The discussion of rules gives Smart the opportunity to introduce a utilitarian vocabulary. An action is right if it is the one that maximizes expected happiness (crucially, this is a summation across many probabilities and isn’t necessarily the action that will maximize the chance of the happiest outcome) and wrong otherwise. An action is rational if a logical being in possession of all the information you possess would think you to be right if you did it. All other actions are irrational. A rule of thumb, disposition, or action is good if it tends to lead to the right outcomes and bad if it tends to lead to the wrong ones.
This vocabulary becomes important when Smart talks about praise, which he believes is an important utilitarian concern in its own right. Praise increases people’s propensity towards certain actions or dispositions, so Smart believes a utilitarian ought to consider if the world would be better served by more of the same before she praises anything. This leads to Smart suggesting that utilitarians should praise actions that are good or rational even if they aren’t right.
It also implies that utilitarians doing the right thing must be open to criticism if it requires bad actions. One example Smart gives is a utilitarian Frenchman cheating on wartime rationing in 1940s England. The Frenchman knows that the Brits are too patriotic to cheat, so his action (and the actions of the few others that cheat) will probably fall below the threshold for causing any real harm, while making him (and the other cheaters) happier. The calculus comes out positive and the Frenchman believes it to be the right action. Smart acknowledges that this logic is correct, but he points out that by the similar logic, the Frenchman should agree that he must be severely punished if caught, so as to discourage others from doing the same thing.
This actually reminds me of something Hannah Arendt brushed up against in Eichmann in Jerusalem while talking about how the moral constraints on people are different than the ones on states. She gives the example of Soghomon Tehlirian, the Armenian exile who assassinated one of the triumvirate of Turkish generals responsible for the Armenian genocide. Arendt believes that it would have been wrong for the Armenian government to assassinate the general (had one even existed at the time), but that it was right for a private citizen to do the deed, especially given that Tehlirian did not seek to hide his crimes or resist arrest.
From a utilitarian point of view, the argument would go something like this: political assassinations are bad, in that they tend to cause upheaval, chaos, and ultimately suffering. On the other hand, there are some leaders who the world would clearly be better off without, if not to stop their ill deeds in their tracks, then to strike fear and moderation into the hearts of similar leaders.
Were the government of any country to carry out these assassinations, it would undermine the government’s ability to police murder. But when a private individual does the deed and then immediately gives herself up into the waiting arms of justice, the utility of the world is increased. If she has erred in picking her target and no one finds the assassination justified, then she will be promptly punished, disincentivizing copy-cats. If instead, like Tehlirian, she is found not guilty, it will only be because the crimes committed by the leader she assassinated were so brutal and clear that no reasonable person could countenance them. This too sends a signal.
That said, I think Smart takes his distinctions between right and good a bit too far. He cautions against trying to change the non-utilitarian morality of anyone who already tends towards good actions, because this might fail half-way, weakening their morality without instilling a new one. Likewise, he is skeptical of any attempt to change the traditions of a society.
This feels too much like trying to have your cake and eat it too. Utilitarianism can be criticized because it is an evangelical ethical system that gives results far from moral intuitions in some cases. From a utilitarian point of view, it is fairly clearly good to have more utilitarians willing to hoover up these counter-intuitive sources of utility. If all you care about are the ends, you want more people to care about the best ends!
If the best way to achieve utilitarian ends wasn’t through utilitarianism, then we’re left with a self-defeating moral system. In trying to defend utilitarianism from the weak critique that it is pushy and evangelical, both in ways that are repugnant to all who engage in cultural or individual ethical relativism and in ways that are repugnant to some moral intuitions, Smart opens it up to the much stronger critique that it is incoherent!
Smart by turns seems to seek to rescue some commonly held moral truths when they conflict with utilitarianism while rejecting others that seem no less contradictory. I can hardly say that he seems keen to show utilitarianism is in fact in harmony with how people normally act – he clearly isn’t. But he also doesn’t always go all (or even part of) the way in choosing utilitarianism over moral intuitions.
Near the end of the book, when talking about a thought experiment introduced by one McCloskey, Smart admits that the only utilitarian action is to frame and execute an innocent man, thereby preventing a riot. McCloskey anticipated him, saying: “But as far as I know, only J.J.C. Smart among the contemporary utilitarians is happy to adopt this ‘solution'”.
Here I must lodge a mild protest. McCloskey’s use of the word ‘happy’ surely makes me look a most reprehensible person. Even in my most utilitarian moods, I am not happy about this consequence of utilitarianism… since any injustice causes misery and so can be justified only as the lesser of two evils, the fewer the situation in which the utilitarian is forced to choose the lesser of two evils, the better he will be pleased.
This is also the man who said (much as I have) that “admittedly utilitarianism does have consequences which are incompatible with the common moral consciousness, but I tended to take the view ‘so much the worse for the common moral consciousness’.”
All this leaves me baffled. Why the strange mixture? Sometimes Smart goes far further than it seems any of his contemporaries would have. Other times, he stops short of what seems to me the truly utilitarian solution.
On the criticism that utilitarianism compels us always in moral action, leaving us no time to relax, he offers two responses. The first is that perhaps people are too unwilling to act and would be better served by being more spurred on. The second is that it may be that relaxing today allows us to do ten times the good tomorrow.
But take this and his support for rules of thumb on one side, and his support for executing the innocent man or his long spiel on how a bunch of people wireheading wouldn’t be that bad (a spiel that convinced me, I might add) on the other, and I’m left with an unclear overall picture. As an all-is-fine defence of utilitarianism, it doesn’t go far enough. As a bracing lecture about our degenerate non-utilitarian ways, it also doesn’t go far enough.
Leaving, I suppose, the sincere views of a man who pondered utilitarianism for much longer than I have. Chance is the only explanation that makes sense: sometimes Smart gives a nod to traditional morality because he’s decided it aligns with his utilitarian ethics; other times, he disagrees. At length. Maybe Smart is a man seeking to rescue what precious moral truths he can from the house fire that is utilitarianism.
Perhaps some of my confusion comes from another confusion, one that seems to have subtly infected many utilitarians. Smart is careful to point out that the atomic belief underlying utilitarianism is general benevolence. Benevolence, note, is not altruism. The individual utilitarian matters just as much – or as little – as everyone else. Utilitarians in Smart’s framework have no obligation to run themselves ragged for another. Trading your happiness for another’s will only ever be an ethically neutral act to the utilitarian.
Or, I suspect, the wrong one. You are best placed to know yourself and best placed to create happiness for yourself. It makes sense to include some sort of bias towards your own happiness to take this into account. Or, if this feels icky to you, you could handle it at the level of probabilities. You are more likely to make yourself happy than someone else (assuming you’ve put some effort towards understanding what makes you happy). If you are 80% likely to make yourself happy for an evening and 60% likely to make someone else happy, your clear utilitarian duty is to yourself.
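The arithmetic here is simple enough to sketch. A minimal illustration, with the probabilities invented for the example (as they were above):

```python
# Hedged illustration of the expected-happiness comparison above.
# Assume one evening of happiness is worth 1 unit to whoever receives it.
p_self = 0.80   # chance you successfully make yourself happy
p_other = 0.60  # chance you make someone else happy

expected_self = p_self * 1.0
expected_other = p_other * 1.0

# With equal stakes, the higher-probability option wins.
best = "yourself" if expected_self > expected_other else "the other person"
print(best)  # -> yourself
```

Of course, the moment the stakes differ (a friend in genuine need versus a mildly pleasant evening alone), the multiplication can flip the answer, which is the point of the next paragraph.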
This is not a suggestion to go become a hermit. Social interactions are very rarely as zero sum as all that. It might be that the best way to make yourself happy is to go help a friend. Or to go to a party with several people you know. But I have seen people risk burnout (and have risked it myself) by assuming it is wrong to take any time for themselves when they have friends in need.
These are all my own thoughts, not Smart’s. For all of his talk of utilitarianism, he offers little advice on how to make it a practically useful system. All too often, Smart retreats to the idea of measuring the total utility of a society or world. This presents a host of problems and raises two important questions.
First, can utility be accurately quantified? Smart tries to show that different ways of measuring utility should be roughly equivalent in qualitative terms, but it is unclear if this follows at a quantitative level. Stability analysis (where you see how sensitive your result is to different starting assumptions) is an important tool for checking the veracity of conclusions in engineering projects. I have a hunch that quantitatively, utilitarian results to many problems will be highly unstable when a variety of forms of utilitarianism are tried.
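A toy version of the stability check I have in mind might look like this. The utility numbers are entirely made up; the point is only that rankings can flip depending on which form of utilitarianism you plug in:

```python
# Toy sensitivity check: do total and average utilitarianism rank
# these hypothetical worlds the same way? (All numbers invented.)
worlds = {
    "small_happy": [9, 9, 9],                   # few people, very happy
    "large_modest": [4, 4, 4, 4, 4, 4, 4, 4],   # many people, modestly happy
}

def total(utilities):
    return sum(utilities)

def average(utilities):
    return sum(utilities) / len(utilities)

best_by_total = max(worlds, key=lambda w: total(worlds[w]))
best_by_average = max(worlds, key=lambda w: average(worlds[w]))

print(best_by_total)    # -> large_modest (32 beats 27)
print(best_by_average)  # -> small_happy (9.0 beats 4.0)
```

The two measures disagree about which world is better, which is exactly the kind of quantitative instability I suspect would show up in many real problems.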
Second, how should we deal with utility in the future? Smart claims that beyond a certain point we can ignore side effects (as unintended good side effects should cancel out unintended ill side effects; this is especially important when it comes to things like saving lives) but that doesn’t give us any advice on how we can estimate effects.
We are perhaps saved here by the same collapse that aligned normal utilitarians with negative utilitarians. If we cannot quantify joy, we can sure quantify misery. Doctors can tell you just how much quality of life a disease can sap (there are tables for this), not to mention the chances that a disease might end a life outright. We know the rates of absolute poverty, maternal deaths, and malaria prevalence. There is more than enough misery in the world to go around, and utilitarians who focus on ending misery do not seem to be at risk of running out of ethical duties any time in the near future.
(If ending misery is important to you, might I suggest donating a fraction of your monthly income to one of GiveWell’s top recommended charities? These are the charities that most effectively use money to reduce suffering. If you care about maximizing your impact, GiveWell is a good way to do it.)
Although speaking of the future, I find it striking how little utilitarianism has changed in the fifty-six years since Smart first wrote his essay. He pauses to comment on the risk of a recursively self-improving AI and to talk about the potential future moral battles over factory farming. I’m part of a utilitarian meme group and these are the same topics people joke about every day. It is unclear if these are topics that utilitarianism predisposes people to care about, or if there was some indirect cultural transmission of these concerns over the intervening years.
There are many more gems – and frustrations – in Smart’s essay. I can’t cover them all without writing a pale imitation of his words, so I shan’t try any more. As an introduction to the different types of utilitarianism, this essay was better than any other introduction I’ve read, especially because it shows all of the ways that various utilitarian systems fit together.
As a defense of utilitarianism, it is comprehensive and pragmatic. It doesn’t seek to please everyone and doesn’t seek to prove utilitarianism. It lays out the advantages of utilitarianism clearly, in plain language, and shows how the disadvantages are not as great as might be imagined. I can see it being persuasive to anyone considering utilitarianism, although in this it is hampered by its position as the first essay in the collection. Anyone convinced by it must then read through another seventy pages of arguments against utilitarianism, which will perhaps leave them rather less convinced.
As a work of academic philosophy, it’s interesting. There’s almost no meta-ethics or meta-physics here. This is a defense written entirely on its own, without recourse to underlying frameworks that might be separately undermined. Smart’s insistence on laying out his arguments plainly leaves him little room to retreat (except around average vs. total happiness). I’ve always found this a useful type of writing; even when I don’t agree, the ways that I disagree with clearly articulated theses can be illuminating.
It’s a pleasant read. I’ve had mostly good luck reading academic philosophy. This book wasn’t a struggle to wade through and it contained the occasional amusing turn of phrase. Smart is neither dry lecturer nor frothing polemicizer. One is put almost in the mind of a kindly uncle, patiently explaining his way through a complex, but not needlessly complicated subject. I highly recommend reading it and its companion.
It can be hard to grasp that radio waves, deadly radiation, and the light we can see are all the same thing. How can electromagnetic (EM) radiation – photons – sometimes penetrate walls and sometimes not? How can some forms of EM radiation be perfectly safe and others damage our DNA? How can radio waves travel so much further than gamma rays in air, but no further through concrete?
It all comes down to wavelength. But before we get into that, we should at least take a glance at what EM radiation really is.
Electromagnetic radiation takes the form of two orthogonal waves. In one direction, you have an oscillating magnetic field. In the other, an oscillating electric field. Both of these fields are orthogonal to the direction of travel.
These oscillations take a certain amount of time to complete, a time which is found by observing the peak value of one of the fields and then measuring how long it takes for the field to return to that value. Luckily, we only need to do this once, because the time an oscillation takes (called the period) will stay the same unless acted on by something external. You can invert the period to get the frequency – the number of times oscillations occur in a second. Frequency uses the unit Hertz, which is just inverted seconds. If something has the frequency 60Hz, it happens 60 times per second.
EM radiation has another nifty property: it always travels at the same speed, a speed commonly called “the speed of light” (even when applied to EM radiation that isn’t light). When you know the speed of an oscillating wave and the amount of time it takes for the wave to oscillate, you can calculate the wavelength. Scientists like to do this because the wavelength gives us a lot of information about how radiation will interact with the world. It is common practice to represent wavelength with the Greek letter Lambda (λ).
Put in a more mathy way: if you have an event that occurs with frequency f to something travelling at velocity v, the event will have a spatial periodicity λ (our trusty wavelength) equal to v / f. For example, if you have a sound that oscillates at 34Hz (this frequency is equivalent to the lowest C♯ on a standard piano) travelling at 340m/s (the speed of sound in air), it will have a wavelength of (340 m/s)/(34 s⁻¹) = 10m. I’m using sound here so we can use reasonably sized numbers, but the results are equally applicable to light or other forms of EM radiation.
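The formula above is simple enough to check directly. A quick sketch (the FM radio frequency is just an example value I’ve picked):

```python
# lambda = v / f, as in the sound example above.
def wavelength(speed_m_s, frequency_hz):
    return speed_m_s / frequency_hz

# Sound: 34 Hz travelling at 340 m/s.
print(wavelength(340, 34))  # -> 10.0 (metres)

# Light: same formula, with v = c. An FM radio station near 100 MHz:
c = 3.0e8  # speed of light in m/s (approximate)
print(wavelength(c, 100e6))  # -> 3.0 (metres)
```

That 3m figure is why car radio antennas are the size they are, as discussed below.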
Wavelength and frequency are inversely related to each other. The higher the frequency of something, the smaller its wavelength. The longer the wavelength, the lower the frequency. I’m used to people describing EM radiation in terms of frequency when they’re talking about energy (the quicker something is vibrating, the more energy it has) and wavelength when talking about what it will interact with (the subject of the rest of this post).
With all that background out of the way, we can actually “look” at electromagnetic radiation and understand what we’re seeing.
Wavelength is very important. You know those big TV antennas houses used to have?
Turns out that they’re about the same size as the wavelength of television signals. The antenna on a car? About the same size as the radio waves it picks up. Those big radio telescopes in the desert? Same size as the extrasolar radio waves they hope to pick up.
Even things we don’t normally think of as antennas can act like them. The rod and cone cells in your eyes act as antennas for the light of this very blog post. Chains of protein or water molecules act as antennas for microwave radiation, often with delicious results. The bases in your DNA act as antennas for UV light, often with disastrous results.
These are just a few examples, not an exhaustive list. For something to be able to interact with EM radiation, you just need an appropriately sized system of electrons (or electrical system; the two terms imply each other). You get this system of electrons more or less for free with metal. In a metal, all of the electrons are delocalized, making the whole length of a metal object one big electrical system. This is why the antennas in our phones or on our houses are made of metal. It isn’t just metal that can have this property though. Organic substances can have appropriately sized systems of delocalized electrons via double bonding.
EM radiation can’t really interact with things that aren’t the same size as its wavelength. Interaction with EM radiation takes the form of the electric or magnetic field of a photon altering the electric or magnetic field of the substance being interacted with. This happens much more readily when the fields are approximately similar sizes. When fields are the same size, you get an opportunity for resonance, which dramatically decreases the loss in the interaction. Losses for dissimilar sized electric fields are so high that you can assume (as a first approximation) that they don’t really interact.
In practical terms, this means that a long metal rod might heat up if exposed to a lot of radio waves (wavelengths for radio waves vary from 1mm to 100km; many are a few metres long due to the ease of making antennas in that size) because it has a single electrical system that is the right size to absorb energy from the radio waves. A similarly sized person will not heat up, because there is no single part of them that is a unified electrical system the same size as the radio waves.
Microwaves (with wavelengths between about a millimetre and a metre; a microwave oven’s are around 12cm) might heat up your food, but they won’t damage your DNA (nanometres in width). They’re much larger than individual DNA molecules. Microwaves are no more capable of interacting with your DNA than a giant would be of picking up a single grain of rice. Microwaves can hurt cells or tissues, but they’re incapable of hurting your DNA and leaving the rest of the cell intact. They’re just too big. Because of this, there is no cancer risk from microwave exposure (whatever paranoid hippies might say).
Gamma rays do present a cancer risk. They have a wavelength (about 10 picometres) that is similar in size to electrons. This means that they can be absorbed by the electrons in your DNA, kicking these electrons out of their homes and leading to chemical reactions that change your DNA and can ultimately lead to cancer.
Wavelength explains how gamma rays can penetrate concrete (they’re actually so small that they miss most of the mass of concrete and only occasionally hit electrons and stop) and how radio waves penetrate concrete (they’re so large that you need a large amount of concrete before they’re able to interact with it and be stopped). Gamma rays are stopped by the air because air contains electrons (albeit sparsely) that they can hit and be stopped by. Radio waves are much too large for this to be a possibility.
When you’re worried about a certain type of EM radiation causing cancer, all you have to do is look at its wavelength. Any wavelength smaller than that of ultraviolet light (about 400nm) is small enough to interact with DNA in a meaningful way. Anything larger is unable to really interact with DNA and is therefore safe.
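The rule of thumb above reduces to a single comparison. A sketch, using the ~400nm UV cutoff from the text (the example wavelengths are ones I’ve supplied):

```python
# Hedged rule of thumb from the text: EM radiation with a wavelength
# below ~400 nm (the UV cutoff) is small enough to meaningfully
# interact with DNA.
DNA_THRESHOLD_M = 400e-9  # ~400 nm

def dna_interaction_possible(wavelength_m):
    return wavelength_m < DNA_THRESHOLD_M

print(dna_interaction_possible(0.122))   # microwave oven, ~12.2 cm -> False
print(dna_interaction_possible(254e-9))  # germicidal UV-C, 254 nm  -> True
print(dna_interaction_possible(10e-12))  # gamma ray, ~10 pm        -> True
```

This is of course the simplified model the epistemic status note below warns about; it says nothing about dose or exposure time.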
Epistemic Status: Model. Looking at everything as an antenna will help you understand why EM radiation interacts with the physical world the way it does, but there is a lot of hidden complexity here. For example, eyes are far from directly analogous to antennas in their mechanism of action, even if they are sized appropriately to be antennas for light. It’s also true that at the extreme ends of photon energy, interactions are based more on energy than on size. I’ve omitted this in order to write something that isn’t entirely caveats, but be aware that it occurs.
 You may have heard that the speed of light changes in different substances. Tables will tell you that the speed of light in water is only about ¾ of the speed of light in air or vacuum and that the speed of light in glass is even slower still. This isn’t technically true. The speed of light is (as far as we know) cosmically invariant – light travels the same speed everywhere in the galaxy. That said, the amount of time light takes to travel between two points can vary based on how many collisions and redirections it is likely to get into between two points. It’s the difference between how long it takes for a pinball to make its way across a pinball table when it hits nothing and how long it takes when it hits every single bumper and obstacle. ^
 This is a first approximation of what is going on. Eyes can be modelled as antennas for the right wavelength of EM radiation, but this ignores a whole lot of chemistry and biophysics. ^
 The smaller the wavelength, the easier it is to find an appropriately sized system of electrons. When your wavelength is the size of a double bond (0.133nm), you’ll be able to interact with anything that has a double bond. Even smaller wavelengths have even more options for interactions – a wavelength that is well sized for an electron will interact with anything that has an electron (approximately everything). ^
 This interaction is actually governed by quantum mechanical tunneling. Whenever a form of EM radiation “tries” to cross a barrier larger than its wavelength, it will be attenuated by the barrier. The equation that describes the probability distribution of a particle (the photons that make up EM radiation are both waves and particles, so we can use particle equations for them) is approximately ψ(x) ≈ e^(ikx) (I say approximately because I’ve simplified all the constants into a single term, k), which becomes ψ(x) ≈ e^(−k₁x) (here I’m using k₁ to imply that the constant will be different), the equation for exponential decay, when the energy (to a first approximation, length) of the substance is higher than the energy (read size of wavelength) of the light.
This equation shows that there can be some probability – occasionally even a high probability – of the particle existing on the other side of a barrier. All you need for a particle to traverse a barrier is an appropriately small barrier. ^
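As a rough numeric illustration of that exponential decay (the decay constant k₁ here is invented purely for demonstration; real values depend on the particle and the barrier):

```python
import math

# Illustrative only: the probability of a wave surviving a barrier
# falls off exponentially with barrier thickness, P(L) ~ e^(-k1 * L).
k1 = 2.0  # made-up decay constant, per unit length

def transmission(thickness):
    return math.exp(-k1 * thickness)

print(round(transmission(1.0), 4))  # -> 0.1353
print(round(transmission(2.0), 4))  # -> 0.0183
```

Note how doubling the barrier thickness squares the attenuation factor rather than merely halving it, which is why even modest barriers can stop radiation almost completely while thin ones let a surprising amount through.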