Economics, Model

Why External Debt is so Dangerous to Developing Countries

I have previously written about how to evaluate and think about public debt in stable, developed countries. There, the overall message was that the dangers of debt were often (but not always) overhyped and cynically used by certain politicians. In a throwaway remark, I suggested the case was rather different for developing countries. This post unpacks that remark. It looks at why things go so poorly when developing countries take on debt and lays out a set of policies that I think could help developing countries that have high debt loads.

The first difference between debt in developed and developing countries lies in the available terms of credit; developing countries get much worse terms. This makes sense, as they're often much more likely to default on their debt. Interest scales with risk, and it just is riskier to lend money to Zimbabwe than to Canada.

But interest payments aren’t the only way in which developing countries get worse terms. They are also given fewer options for the currency they take loans out in. And by fewer, I mean very few. I don’t think many developing countries are getting loans that aren’t denominated in US dollars, Euros, or, if dealing with China, Yuan. Contrast this with Canada, which has no problem taking out loans in its own currency.

When you own the currency of your debts, you can devalue it in response to high debt loads, making your debts cheaper to pay off in real terms (that is to say, your debt will be equivalent to fewer goods and services than it was before you caused inflation by devaluing your currency). This is bad for lenders. In the event of devaluation, they lose money. Depending on the severity of the inflation, it could be worse for them than a simple default would be, because they cannot even try and recover part of the loan in court proceedings.
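The arithmetic behind "cheaper in real terms" is simple. Here's a minimal sketch with invented numbers showing how a devaluation shrinks a debt measured in goods and services rather than currency units:

```python
# Illustrative sketch: how devaluation shrinks a debt in real terms.
# All numbers are hypothetical.

def real_debt_burden(nominal_debt: float, price_level: float) -> float:
    """Debt measured in baskets of goods rather than currency units."""
    return nominal_debt / price_level

debt = 1_000_000      # owed in the country's own currency
price_level = 1.0     # index: one basket of goods costs one currency unit

before = real_debt_burden(debt, price_level)   # 1,000,000 baskets

# The government devalues; prices rise 25%.
price_level *= 1.25
after = real_debt_burden(debt, price_level)    # 800,000 baskets

# The same nominal debt now costs 20% fewer goods to pay off --
# a loss that falls entirely on the lender.
print(before, after)
```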

(Devaluations don’t have to be large to reduce debt costs; they can also take the form of slightly higher inflation, such that interest is essentially nil on any loans. This is still quite bad for lenders and savers, although less likely to be worse than an actual default. The real risk comes when a country with little economic sophistication tries to engineer slightly higher inflation. It seems likely that such a country could drastically overshoot, with all of the attendant consequences.)

Devaluations and inflation are also politically fraught. They are especially hard on pensioners and anyone living on a fixed income – which is exactly the population most likely to make their displeasure felt at the ballot box. Lenders know that many interest groups would oppose a Canadian devaluation, but these sorts of governance controls and civil society pressure groups often just don’t exist (or are easily ignored by authoritarian leaders) in the developing world, which means devaluations can be less politically difficult [1].

Having the option to devalue isn’t the only reason why you might want your debts denominated in your own currency (after all, it is rarely exercised). Having debts denominated in a foreign currency can be very disruptive to the domestic priorities of your country.

The Canadian dollar is primarily used by Canadians to buy stuff they want [2]. The Canadian government naturally ends up with Canadian dollars when people pay their taxes. This makes the loan repayment process very simple. Canadians just need to do what they’d do anyway and as long as tax rates are sufficient, loans will be repaid.

When a developing country takes out a loan denominated in foreign currency, they need some way to turn domestic production into that foreign currency in order to make repayments. This is only possible insofar as their economy produces something that people using the loan currency (often USD) want. Notably, this could be very different from what the people in the country want.

For example, the people of a country could want to grow staple crops, like cassava or maize. Unfortunately, they won’t really be able to sell these staples for USD; there isn’t much market for either in the US. There very well could be room for the country to export bananas to the US, but this means that some of their farmland must be diverted away from growing staples for domestic consumption and towards growing cash crops for foreign consumption. The government will have an incentive to push people towards this type of agriculture, because they need commodities that can be sold for USD in order to make their loan payments [3].

As long as the need for foreign currency persists, countries can be locked into resource extraction and left unable to progress towards more mature manufacturing- or knowledge-based economies.

This is bad enough, but there’s often greater economic damage when a country defaults on its foreign loans – and default many developing countries will, because they take on debt in a highly procyclical way [4].

A variable, indicator, or quantity is said to be procyclical if it is correlated with the overall health of an economy. We say that developing nation debt is procyclical because it tends to expand while economies are undergoing expansion. Specifically, new developing country debts seem to be correlated with many commodity prices. When commodity prices are high, it’s easier for developing countries that export them to take on debt.

It’s easy to see why this might be the case. Increasing commodity prices make the economies of developing countries look better. Exporting commodities can bring in a lot of money, which can have spillover effects that help the broader economy. As long as taxation isn’t too much of a mess, export revenues make government revenues higher. All of this makes a country look like a safer bet, which makes credit cheaper, which makes a country more likely to take it on.

Unfortunately (for resource-dependent countries; fortunately for consumers), most commodity price increases do not last forever. It is important to remember that prices are a signal – and that high prices are a giant flag that says “here be money”. Persistently high prices lead to increased production, which can eventually lead to a glut and falling prices. This most recently and spectacularly happened in 2014-2015, as American and Canadian unconventional oil and gas extraction led to a crash in the global price of oil [5].

When commodity prices crash, indebted, export-dependent countries are in big trouble. They are saddled with debt that is doubly difficult to pay back. First, their primary source of foreign cash for paying off their debts is gone with the crash in commodity prices (this will look like their currency plummeting in value). Second, their domestic tax base is much lower, starving them of revenue.

Even if a country wants to keep paying its debts, a commodity crash can leave them with no choice but a default. A dismal exchange rate and minuscule government revenues mean that the money to pay back dollar denominated debts just doesn’t exist.

Oddly enough, defaulting can offer some relief from problems; it often comes bundled with a restructuring, which results in lower debt payments. Unfortunately, this relief tends to be temporary. Unless it’s coupled with strict austerity, it tends to lead into another problem: devastating inflation.

Countries that end up defaulting on external debt are generally not living within their long-term means. Often, they’re providing a level of public services that are unsustainable without foreign borrowing, or they’re seeing so much government money diverted by corrupt officials that foreign debt is the only way to keep the lights on. One inevitable effect of a default is losing access to credit markets. Even when a restructuring can stem the short-term bleeding, there is often a budget hole left behind when the foreign cash dries up [6]. Inflation occurs because many governments with weak institutions fill this budgetary void with the printing press.

There is nothing inherently wrong with printing money, just like there’s nothing inherently wrong with having a shot of whiskey. A shot of whiskey can give you the courage to ask out the cute person at the bar; it can get you nerved up to sing in front of your friends. Or it can lead to ten more shots and a crushing hangover. Printing money is like taking shots. In some circumstances it can really improve your life, and it’s fine in moderation, but if you overdo it you’re in for a bad time.

When developing countries turn to the printing press, they often do it like a sailor turning to whiskey after six weeks of enforced sobriety.

Teachers need to be paid? Print some money. Social assistance? Print more money. Roads need to be maintained? Print even more money.

The money supply should normally expand only slightly more quickly than economic growth [7]. When it expands more quickly than that, prices begin to increase in lockstep. People are still paid, but the money is worth less. Savings disappear. Velocity (the speed with which money travels through the economy) increases as people try to spend money as quickly as possible, driving prices ever higher.
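The mechanics here are the quantity theory of money: M × V = P × Q, so with output fixed, prices track money growth, and rising velocity makes things worse still. A hedged sketch with invented figures:

```python
# Quantity theory sketch: M * V = P * Q, so P = M * V / Q.
# All figures are invented for illustration.

def price_level(money_supply: float, velocity: float, real_output: float) -> float:
    return money_supply * velocity / real_output

Q = 1_000.0   # real output (goods per year), held fixed
V = 4.0       # velocity of money
M = 500.0     # money supply

p0 = price_level(M, V, Q)            # baseline price level: 2.0

# The government prints: money supply doubles, output unchanged.
p1 = price_level(M * 2, V, Q)        # 4.0 -- prices double in lockstep

# Panic sets in: people spend faster, velocity rises 50%.
p2 = price_level(M * 2, V * 1.5, Q)  # 6.0 -- inflation outruns the printing
print(p0, p1, p2)
```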

As the currency becomes less and less valuable, it becomes harder and harder to pay for imports. We’ve already talked about how you can only buy external goods in your own currency to the extent that people outside your country have a use for your currency. No one has a use for a rapidly inflating currency. This is why Venezuela is facing shortages of food and medicine – commodities it formerly imported but now cannot afford.

The terminal state of inflation is hyperinflation, where people need to put their currency in wheelbarrows to do anything with it. Anyone who has read about Germany in the early 1920s knows that hyperinflation opens the door to demagogues and coups – to anything or anyone who can convince the people that the suffering can be stopped.

Taking into account all of this – the inflation, the banana plantations, the boom and bust cycles – it seems clear that it might be better if developing countries took on less debt. Why don’t they?

One possible explanation is the IMF (International Monetary Fund). The IMF often acts as a lender of last resort, giving countries bridging loans and negotiating new repayment terms when the prospect of default is raised. The measures that the IMF takes to help countries repay their debts have earned it many critics who rightly note that there can be a human cost to the budget cuts the IMF demands as a condition for aid [8]. Unfortunately, this is not the only way the IMF might make sovereign defaults worse. It also seems likely that the IMF represents a significant moral hazard, one that encourages risky lending to countries that cannot sustain debt loads long-term [9].

A moral hazard is any situation in which someone takes risks knowing that they won’t have to pay the penalty if their bet goes sour. Within the context of international debt and the IMF, a moral hazard arises when lenders know that they will be able to count on an IMF bailout to help them recover their principal in the event of a default.

In a world without the IMF, it is very possible that borrowing costs would be higher for developing countries, which could serve as a deterrent to taking on debt.

(It’s also possible that countries with weak institutions and bad governance will always take on unsustainable levels of debt, absent some external force stopping them. It’s for this reason that I’d prefer some sort of qualified ban on loaning to developing countries that have debt above some small fraction of their GDP over any plan that relies on abolishing the IMF in the hopes of solving all problems related to developing country debt.)

Paired with a qualified ban on new debt [10], I think there are two good arguments for forgiving much of the debt currently held by many developing countries.

First and simplest are the humanitarian reasons. Freed of debt burdens, developing countries might be able to provide more services for their citizens, or invest in infrastructure so that they could grow more quickly. Debt forgiveness would have to be paired with institutional reform and increased transparency, so that newfound surpluses aren’t diverted into the pockets of kleptocrats, which means any forgiveness policy could have the added benefit of acting as a big stick to force much needed governance changes.

Second is the doctrine of odious debts. An odious debt is any debt incurred by a despotic leader for the purpose of enriching themself or their cronies, or repressing their citizens. Under the legal doctrine of odious debts, these debts should be treated as the personal debt of the despot and wiped out whenever there is a change in regime. The logic behind this doctrine is simple: by loaning to a despot and enabling their repression, the creditors committed a violent act against the people of the country. Those people should have no obligation (legal or moral) to pay back their aggressors.

The doctrine of odious debts wouldn’t apply to every indebted developing country, but serious arguments can be made that several countries (such as Venezuela) should expect at least some reduction in their debts should the local regime change and international legal scholars (and courts) recognize the odious debt principle.

Until international progress is made on a clear list of conditions under which countries cannot take on new debt and a comprehensive program of debt forgiveness, we’re going to see the same cycle repeat over and over again. Countries will take on debt when their commodities are expensive, locking them into an economy dependent on resource extraction. Then prices will fall, default will loom, and the IMF will protect investors. Countries are left gutted, lenders are left rich, taxpayers the world over hold the bag, and poverty and misery continue – until the cycle starts over once again.

A global economy without this cycle of boom, bust, and poverty might be one of our best chances of providing stable, sustainable growth to everyone in the world. I hope one day we get to see it.

Footnotes

[1] I so wanted to get through this post without any footnotes, but here we are.

There’s one other reason why e.g. Canada is a lower risk for devaluation than e.g. Venezuela: central bank independence. The Bank of Canada is staffed by expert economists and somewhat isolated from political interference. It is unclear just how much it would be willing to devalue the currency, even if that was the desire of the Government of Canada.

Monetary policy is one lever of power that almost no developed country is willing to trust directly to politicians, a safeguard that doesn’t exist in all developing countries. Without it, devaluation and inflation risk are much higher. ^

[2] Secondarily it’s used to speculatively bet on the health of the resource extraction portion of the global economy, but that’s not like, too major of a thing. ^

[3] It’s not that the government is directly selling the bananas for USD. It’s that the government collects taxes in the local currency and the local currency cannot be converted to USD unless the country has something that USD holders want. Exchange rates are determined based on how much people want to hold one currency vs. another. A decrease in the value of products produced by a country relative to other parts of the global economy means that people will be less interested in holding that country’s currency and its value will fall. This is what happened in 2015 to the Canadian dollar; oil prices fell (while other commodity prices held steady) and the value of the dollar dropped.

Countries that are heavily dependent on the export of only one or two commodities can see wild swings in their currencies as those underlying commodities change in value. The Russian ruble, for example, is very tightly linked to the price of oil; it lost half its value between 2014 and 2016, during the oil price slump. This is a much larger depreciation than the Canadian dollar (which also suffered, but was buoyed up by Canada’s greater economic diversity). ^

[4] This section is drawn from the research of Dr. Carmen Reinhart and Dr. Kenneth Rogoff, as reported in This Time Is Different, Chapter 5: Cycles of Default on External Debt. ^

[5] This is why peak oil theories ultimately fell apart. Proponents didn’t realize that consistently high oil prices would lead to the exploitation of unconventional hydrocarbons. The initial research and development of these new sources made sense only because of the sky-high oil prices of the day. In an efficient market, profits will always eventually return to 0. We don’t have a perfectly efficient market, but it’s efficient enough that commodity prices rarely stay too high for too long. ^

[6] Access to foreign cash is gone because no one lends money to countries that just defaulted on their debts. Access to external credit does often come back the next time there’s a commodity bubble, but that could be a decade in the future. ^

[7] In some downturns, a bit of extra inflation can help lower sticky wages in real terms and return a country to full employment. My reading suggests that commodity crashes are not one of those cases. ^

[8] I’m cynical enough to believe that there is enough graft in most of these cases that human costs could be largely averted, if only the leaders of the country were forced to see their graft dry up. I’m also pragmatic enough to believe that this will rarely happen. I do believe that one positive impact of the IMF getting involved is that its status as an international institution gives it more power with which to force transparency upon debtor nations and attempt to stop diversion of public money to well-connected insiders. ^

[9] A quick search found two papers that claimed there was a moral hazard associated with the IMF and one article hosted by the IMF (and as far as I can tell, later at least somewhat repudiated by the author in the book cited in [4]) that claims there is no moral hazard. Draw what conclusions from this you will. ^

[10] I’m not entirely sure what such a ban would look like, but I’m thinking some hard cap on amount loaned based on percent of GDP, with the percent able to rise in response to reforms that boost transparency, cut corruption, and establish modern safeguards on the central bank. ^

Economics, History

Scrip Stamp Currencies Aren’t A Miracle

A friend of mine recently linked to a story about stamp scrip currencies in a discussion about Initiative Q [1]. Stamp scrip currencies are an interesting monetary technology. They’re bank notes that require weekly or monthly stamps in order to be valid. These stamps cost money (normally a few percent of the face value of the note), which imposes a cost on holding the currency. This is supposed to encourage spending and spur economic activity.

This isn’t just theory. It actually happened. In the Austrian town of Wörgl, a scrip currency was used to great effect for several months during the Great Depression, leading to a sudden increase in employment, money for necessary public works, and a general reversal of fortunes that had, until that point, been quite dismal. Several other towns copied the experiment and saw similar gains, until the central bank stepped in and put a stop to the whole thing.

In the version of the story I’ve read, this is held up as an example of local adaptability and creativity crushed by centralization. The moral, I think, is that we should trust local institutions instead of central banks and be on the lookout for similar local currency strategies we could adopt.

If this is all true, it seems like stamp scrip currency (or some modern version of it, perhaps applying the stamps digitally) might be a good idea. Is this the case?

My first, cheeky reaction, is “we already have this now; it’s called inflation.” My second reaction is actually the same as my first one, but has an accompanying blog post. Thus.

Currency arrangements feel natural and unchanging, which can mislead modern readers when they’re thinking about currencies used in the 1930s. We’re very used to floating fiat currencies that (in general) have a stable price level except for 1-3% inflation every year.

This wasn’t always the case! Historically, there was very little inflation. Currency was backed by gold at a stable ratio (there were 23.2 grains of gold in a US dollar from 1834 until 1934). For a long time, growth in global gold stocks roughly tracked total growth in economic activity, so there was no long-run inflation or deflation (short-run deflation did cause several recessions, until new gold finds bridged the gap in supply).

During the Great Depression, there was worldwide gold hoarding [2]. Countries saw their currency stocks decline or fail to keep up with the growth rate required for full economic activity (having a gold backed currency meant that the central bank had to decrease currency stocks whenever their gold stocks fell). Existing money increased in value, which meant people hoarded that too. The result was economic ruin.

In this context, a scrip currency accomplished two things. First, it immediately provided more money. The scrip currency was backed by the national currency of Austria, but it was probably using a fractional reserve system – each backing schilling might have been used to issue several stamp scrip schillings [3]. This meant that the town of Wörgl quickly had a lot more money circulating. Perhaps one of the best features of the scrip currency within the context of the Great Depression was that it was localized, which meant that its helpful effects didn’t diffuse.

(Of course, a central bank could have accomplished the same thing by printing vastly more money over a vastly larger area, but there was very little appetite for this among central banks during the Great Depression, much to everyone’s detriment. The localization of the scrip is only an advantage within the context of central banks failing to ensure adequate monetary growth; in a more normal environment, it would be a liability that prevented trade.)

Second to this, the stamp scrip currency provided an incentive to spend money.

Here’s one model of job loss in recessions: people (for whatever reason; deflation is just one cause) want to spend less money (economists call this “a decrease in aggregate demand”). Businesses see the falling demand and need to take action to cut wages or else become unprofitable. Now people generally exhibit “downward nominal wage rigidity” – they don’t like pay cuts.

Furthermore, individuals don’t realize that demand is down as quickly as businesses do. They hold out for jobs at the same wage rate. This leads to unemployment [4].

Stamp scrip currencies increase aggregate demand by giving people an incentive to spend their money now.

Importantly, there’s nothing magic about the particular method you choose to do this. Central banks targeting 2% inflation year on year (and succeeding for once [5]) should be just as effective as scrip currencies charging 2% of the face value every year [6]. As long as you’re charged some sort of fee for holding onto money, you’re going to want to spend it.
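The equivalence is easy to check with a toy calculation (illustrative numbers, not the actual Wörgl rates): under either scheme, sitting on cash for a year costs you about the same amount of purchasing power.

```python
# Sketch: the real cost of holding idle cash for a year is roughly the
# same whether it comes from inflation or from stamp fees.
# Numbers are illustrative.

def real_value_after_inflation(cash: float, inflation_rate: float) -> float:
    """Purchasing power of idle cash after one year of inflation."""
    return cash / (1 + inflation_rate)

def value_after_stamp_fees(cash: float, fee_rate: float) -> float:
    """Face value left after paying a yearly stamp fee (stable prices)."""
    return cash * (1 - fee_rate)

cash = 100.0
print(round(real_value_after_inflation(cash, 0.02), 2))  # 98.04
print(value_after_stamp_fees(cash, 0.02))                # 98.0

# Either way, holding money idle costs about 2% per year --
# the incentive to spend it is the same.
```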

Central bank backed currencies are ultimately preferable when the central bank is getting things right, because they facilitate longer range commerce and trade, are administratively simpler (you don’t need to go buy stamps ever), and centralization allows for more sophisticated economic monitoring and price level targeting [7].

Still, in situations where the central bank fails, stamp scrip currencies can be a useful temporary stopgap.

That said, I think a general caution is needed when thinking about situations like this. There are few times in economic history as different from the present day as the Great Depression. The very fact that there was unemployment north of 20% and many empty factories makes it miles away from the economic situation right now. I would suspect that radical interventions that were useful during the Great Depression might be useless or actively harmful right now, simply due to this difference in circumstances.

Footnotes

[1] My opinion is that their marketing structure is kind of cringey (my Facebook feed currently reminds me of all of the “Paul Allen is giving away his money” chain emails from the 90s and I have only myself to blame) and their monetary policy has two aims that could end up in conflict. On the other hand, it’s fun to watch the numbers go up and idly speculate about what you could do if it was worth anything. I would cautiously recommend Q ahead of lottery tickets but not ahead of saving for retirement. ^

[2] See “The Midas Paradox” by Scott Sumner for a more in-depth breakdown. You can also get an introduction to monetary theories of the business cycle on his blog, or listen to him talk about the Great Depression on Vimeo. ^

[3] The size of the effect talked about in the article suggests that one of three things had to be true: 1) the scrip currency was fractionally backed, 2) Wörgl had a huge bank account balance a few years into the recession, or 3) the amount of economic activity in the article is overstated. ^

[4] As long as inflation is happening like it should be, there won’t be protracted unemployment, because a slight decline in economic activity is quickly counteracted by a slightly decreased value of money (from the inflation). Note the word “nominal” up there. People are subject to something called a “money illusion”. They think in terms of prices and salaries expressed in dollar values, not in purchasing power values.

There was only a very brief recession after the dot com crash because it did nothing to affect the money supply. Inflation happened as expected and everything quickly corrected to almost full employment. On the other hand, the Great Depression lasted as long as it did because most countries were reluctant to leave the gold standard and so saw very little inflation. ^

[5] Here’s an interesting exercise. Look at this graph of US yearly inflation. Notice how inflation is noticeably higher in the years immediately preceding the Great Recession than it is in the years afterwards. Monetarist economists believe that the recession wouldn’t have lasted as long if there hadn’t been such a long period of relatively low inflation.

As always, I’m a huge fan of the total lack of copyright on anything produced by the US government.

^

[6] You might wonder if there’s some benefit to both. The answer, unfortunately, is no. Doubling them up should be roughly equivalent to just having higher inflation. There seems to be a natural rate of inflation that does a good job balancing people’s expectations for pay raises (and adequately reduces real wages in a recession) with the convenience of having stable money. Pushing inflation beyond this point can lead to a temporary increase in employment, by making labour relatively cheaper compared to other inputs.

The increase in employment ends when people adjust their expectations for raises to the new inflation rate and begin demanding increased salaries. Labour is no longer artificially cheap in real terms, so companies lay off some of the extra workers. You end up back where you started, but with inflation higher than it needs to be.

See also: “The Importance of Stable Money: Theory and Evidence” by Michael Bordo and Anna Schwartz. ^

[7] I suspect that if the stamp scrip currency had been allowed to go on for another decade or so, it would have had some sort of amusing monetary crisis. ^

Economics, Model

You Shouldn’t Believe In Technological Unemployment Without Believing In Killer AI

[Epistemic Status: Open to being convinced otherwise, but fairly confident. 11 minute read.]

As interest in how artificial intelligence will change society increases, I’ve found it revealing to note what narratives people have about the future.

Some, like the folks at MIRI and OpenAI, are deeply worried that unsafe artificial general intelligences – AIs that can accomplish anything a person can – represent an existential threat to humankind. Others scoff at this, insisting that these are just the fever dreams of tech bros. The same news organizations that bash any talk of unsafe AI tend to believe that the real danger lies in robots taking our jobs.

Let’s express these two beliefs as separate propositions:

  1. It is very unlikely that AI and AGI will pose an existential risk to human society.
  2. It is very likely that AI and AGI will result in widespread unemployment.

Can you spot the contradiction between these two statements? In the common imagination, it would require an AI that can approximate human capabilities to drive significant unemployment. Given that humans are the largest existential risk to other humans (think thermonuclear war and climate change), how could equally intelligent and capable beings, bound to subservience, not present a threat?

People who’ve read a lot about AI or the labour market are probably shaking their head right now. This explanation for the contradiction, while evocative, is a strawman. I do believe that at most one (and possibly neither) of the propositions I listed above is true and that the organizations peddling both cannot be trusted. But the reasoning is a bit more complicated than the standard line.

First, economics and history tell us that we shouldn’t be very worried about technological unemployment. There is a fallacy called “the lump of labour”, which describes the common belief that there is a fixed amount of labour in the world, with mechanical aid cutting down the amount of labour available to humans and leading to unemployment.

That this idea is a fallacy is evidenced by the fact that we’ve automated the crap out of everything since the start of the industrial revolution, yet the US unemployment rate is 3.9%. The unemployment rate hasn’t been this low since the height of the Dot-com boom, despite 18 years of increasingly sophisticated automation. Writing five years ago, when the unemployment rate was still elevated, Eliezer Yudkowsky claimed that slow NGDP growth was a more likely culprit for the slow recovery from the Great Recession than automation.

With the information we have today, we can see that he was exactly right. The US has had steady NGDP growth without any sudden downward spikes since mid-2014. This has corresponded to a constantly improving unemployment rate (it will obviously stop improving at some point, but if history is any guide, this will be because of a trade war or banking crisis, not automation). This improvement in the unemployment rate has occurred even as more and more industrial robots come online, the opposite of what we’d see if robots harmed job growth.

I hope this presents a compelling empirical case that the current level (and trend) of automation isn’t enough to cause widespread unemployment. The theoretical case comes from the work of David Ricardo, a 19th century British economist.

Ricardo did a lot of work in the early economics of trade, where he came up with the theory of comparative advantage. I’m going to use his original framing which applies to trade, but I should note that it actually applies to any exchange where people specialize. You could just as easily replace the examples with “shoveled driveways” and “raked lawns” and treat it as an exchange between neighbours, or “derivatives” and “software” and treat it as an exchange between firms.

The original example is rather older though, so it uses England and its close ally Portugal as the cast and wine and cloth as the goods. It goes like this: imagine that the world economy is reduced to two countries (England and Portugal), each producing two goods (wine and cloth). Portugal is uniformly more productive.

Hours of work to produce one unit:

            Cloth   Wine
England      100    120
Portugal      90     80

Let’s assume people want cloth and wine in equal amounts and everyone currently consumes one unit per month. This means that the people of Portugal need to work 170 hours each month to meet their consumption needs and the people of England need to work 220 hours per month to meet their consumption needs.

(This example has the added benefit of showing another reason we shouldn’t fear productivity. England requires more hours of work each month, but in this example, that doesn’t mean less unemployment. It just means that the English need to spend more time at work than the Portuguese. The Portuguese have more time to cook and spend time with family and play soccer and do whatever else they want.)

If both countries trade with each other, treating cloth and wine as valuable in relation to how long they take to create (within each country), something interesting happens. You might think that Portugal makes a killing, because it is better at producing both goods. But in reality, both countries benefit roughly equally, as long as they trade optimally.

What does an optimal trade look like? Well, England will focus on creating cloth and it will trade each unit of cloth it produces to Portugal for 9/8 barrels of wine, while Portugal will focus on creating wine and will trade this wine to England for 6/5 units of cloth. To meet the total demand for cloth, the English need to work 200 hours. To meet the total demand for wine, the Portuguese will have to work for 160 hours. Both countries now have more free time.
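The bookkeeping here is easy to verify with a short script. This is just a sketch of the example above; the table of hours and the function names are mine, not any standard economic tooling:

```python
# Hours of labour needed to produce one unit of each good (from the table above)
HOURS = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}

def autarky_hours(country):
    """Hours worked making one unit of each good domestically."""
    return sum(HOURS[country].values())

def specialised_hours(country, good, total_demand=2):
    """Hours worked making all `total_demand` units of a single good."""
    return HOURS[country][good] * total_demand

print(autarky_hours("England"))               # 220
print(autarky_hours("Portugal"))              # 170
print(specialised_hours("England", "cloth"))  # 200
print(specialised_hours("Portugal", "wine"))  # 160
```

Each country ends up working fewer hours under specialization than under self-sufficiency, which is the whole point of the example.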

Perhaps workers in both countries are paid hourly wages, or perhaps they get bored of fun quickly. They could also continue to work the same number of hours, which would result in an extra 0.2 units of cloth and an extra 0.125 units of wine.

This surplus could be stored up against a future need. Or it could be that people only consumed one unit of cloth and one unit of wine each because of the scarcity in those resources. Add some more production in each and perhaps people will want more blankets and more drunkenness.

What happens if there is no shortage? If people don’t really want any more wine or any more cloth (at least at the prices they’re being sold at) and the producers don’t want goods piling up, this means prices will have to fall until every piece of cloth and barrel of wine is sold (when the price drops so that this happens, we’ve found the market clearing price).

If there is a downward movement in price and if workers don’t want to cut back their hours or take a pay cut (note that because cloth and wine will necessarily be cheaper, this will only be a nominal pay cut; the amount of cloth and wine the workers can purchase will necessarily remain unchanged) and if all other costs of production are totally fixed, then it does indeed look like some workers will be fired (or have their hours cut).

So how is this an argument against unemployment again?

Well, here the simplicity of the model starts to work against us. When there are only two goods and people don’t really want more of either, it will be hard for anyone laid off to find new work. But in the real world, there are an almost infinite number of things you can sell to people, matched only by our boundless appetite for consumption.

To give just one trivial example, an oversupply of cloth and falling prices means that tailors can begin to do bolder and bolder experiments, perhaps driving more demand for fancy clothes. Some of the cloth makers can get into this market as tailors and replace their lost jobs.

(When we talk about the need for fewer employees, we assume the least productive employees will be fired. But I’m not sure that’s correct. What if, instead, the most productive or most potentially productive employees leave for greener pastures?)

Automation making some jobs vastly more efficient functions similarly. Jobs are displaced, not lost. Even when whole industries dry up, there’s little to suggest that we’re running out of jobs people can do. One hundred years ago, anyone who could afford to pay a full-time household staff had one. Today, only the wealthiest do. There’s one whole field that could employ thousands or millions of people, if automation pushed on jobs such that this sector was one of the places humans had a very high comparative advantage.

This points to what might be a trend: as automation makes many things cheaper and (for some people) easier, there will be many who long for a human touch (would you want the local funeral director’s job to be automated, even if it was far cheaper?). Just because computers do many tasks cheaper or with fewer errors doesn’t necessarily mean that all (or even most) people will rather have those tasks performed by computers.

No matter how you manipulate the numbers I gave for England and Portugal, you’ll still find a net decrease in total hours worked if both countries trade based on their comparative advantage. Let’s demonstrate by comparing England to a hypothetical hyper-efficient country called “Automatia”.

Hours of work to produce one unit:

             Cloth   Wine
England        100    120
Automatia        2      1

Automatia is 50 times as efficient as England when it comes to producing cloth and 120 times as efficient when it comes to producing wine. Its citizens need to spend 3 hours tending the machines to get one unit of each, compared to the 220 hours the English need to toil.

If they trade with each other, with England focusing on cloth and Automatia focusing on wine, then there will still be a drop of 21 hours of labour-time. England will save 20 hours by shifting production from wine to cloth, and Automatia will save one hour by switching production from cloth to wine.
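The same calculation works for the Automatia example; swapping in the new table recovers the 20-, 1-, and 21-hour figures above. Again, a sketch with made-up names, not a standard library:

```python
# Hypothetical production costs from the England/Automatia table above
hours = {
    "England":   {"cloth": 100, "wine": 120},
    "Automatia": {"cloth": 2,   "wine": 1},
}

def hours_saved(table, specialisation, total_demand=2):
    """Hours each producer saves by specialising versus making one of each itself."""
    saved = {}
    for country, good in specialisation.items():
        autarky = sum(table[country].values())             # one unit of each good
        specialised = table[country][good] * total_demand  # all units of one good
        saved[country] = autarky - specialised
    return saved

saved = hours_saved(hours, {"England": "cloth", "Automatia": "wine"})
print(saved)                # {'England': 20, 'Automatia': 1}
print(sum(saved.values()))  # 21
```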

Interestingly, Automatia saved a greater percentage of its time than either Portugal or England did, even though Automatia is vastly more efficient. This shows something interesting in the underlying math. The percent of their time a person or organization saves engaging in trade isn’t related to any ratio in production speeds between it and others. Instead, it’s solely determined by the productivity ratio between its most productive tasks and its least productive ones.

Now, we can’t always reason in percentages. At a certain point, people expect to get the things they paid for, which can make manufacturing times actually matter (just ask anyone who’s had to wait on a Kickstarter project scheduled to deliver in February – right when almost all manufacturing in China stops for the Chinese New Year and the unprepared see their schedules slip). When we’re reasoning in absolute numbers, we can see that the absolute amount of time saved does scale with the difference in efficiency between the two traders. Here, 21 hours were saved, 30% fewer than the 30 hours England and Portugal saved.

When you’re already more efficient, there’s less time for you to save.

This decrease in saved time did not hit our market participants evenly. England saved just as much time as it would trading with Portugal (which shows that the change in hours worked within a country or by an individual is entirely determined by the labour difference between low-advantage and high-advantage domestic sectors), while the more advanced participant (Automatia) saved 9 fewer hours than Portugal.
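The claim about percentages can be made concrete. If a producer needs `slow` hours for its worse good and `fast` hours for its better one, then (with one unit of each good demanded) the fraction of its labour saved by specialising works out to (r − 1)/(r + 1), where r = slow/fast – a quantity that depends only on the producer’s own internal ratio, never on its trading partner. A quick sketch of that arithmetic:

```python
def percent_saved(slow, fast):
    """Fraction of autarky labour saved by specialising in the faster good,
    assuming one unit of each good is demanded. Only the internal ratio matters:
    saved = (slow - fast) / (slow + fast) = (r - 1) / (r + 1), r = slow / fast."""
    r = slow / fast
    return (r - 1) / (r + 1)

print(round(percent_saved(120, 100), 3))  # England:   0.091
print(round(percent_saved(90, 80), 3))    # Portugal:  0.059
print(round(percent_saved(2, 1), 3))      # Automatia: 0.333

# Scaling both costs uniformly leaves the savings unchanged: a uniformly
# faster producer saves the same *percentage* of its time.
assert percent_saved(1200, 1000) == percent_saved(120, 100)
```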

All of this is to say: if real live people are expecting real live goods and services within a time limit, it might be possible for humans to be displaced in almost all sectors by automation. Here, human labour would become entirely ineligible for many tasks, or the bar to human entry would exclude almost all. For this to happen, AI would have to be vastly more productive than us in almost every sector of the economy and humans would have to prefer this productivity or other ancillary benefits of AI over any value that a human could bring to the transaction (like kindness, legal accountability, or status).

This would definitely be a scary situation, because it would imply AI systems that are vastly more capable than any human. Given that this is well beyond our current level of technology and that Moore’s law, which has previously been instrumental in technological progress, is drying up, we would almost certainly need to use weaker AI to design these sorts of systems. There’s no evidence that merely human performance in automating jobs will get us anywhere close to such a point.

If we’re dealing with recursively self-improving artificial agents, the risk is less “they will get bored of their slave labour and throw off the yoke of human oppression” and more “AI will be narrowly focused on optimizing for a specific task and will get better and better at optimizing for this task to the point that we will all be killed when they turn the world into a paperclip factory”.

There are two reasons AI might kill us as part of their optimisation process. The first is that we could be a threat. Any hyper-intelligent AI monomaniacally focused on a goal could realize that humans might fear and attack it (or modify it to have different goals, which it would have to resist, given that a change in goals would conflict with its current goals) and decide to launch a pre-emptive strike. The second reason is that such an AI could wish to change the world’s biosphere or land usage in such a way as would be inimical to human life. If all non-marginal land was replaced by widget factories and we were relegated to the poles, we would all die, even if no ill will was intended.

It isn’t enough to just claim that any sufficiently advanced AI would understand human values. How is this supposed to happen? Even humans can’t enumerate human values and explain them particularly well, let alone express them in the sort of decision matrix or reinforcement environment that we currently use to create AI. It is not necessarily impossible to teach an AI human values, but all evidence suggests it will be very very difficult. If we ignore this challenge in favour of blind optimization, we may someday find ourselves converted to paperclips.

It is of course perfectly acceptable to believe that AI will never advance to the point where that becomes possible. Maybe you believe that AI gains have been solely driven by Moore’s Law, or that true artificial intelligence is impossible. I’m not sure this viewpoint is wrong.

But if AI will never be smart enough to threaten us, then I believe the math should work out such that it is impossible for AI to do everything we currently do or can ever do better than us. Absent such overpoweringly advanced AI, the Ricardo comparative advantage principles should continue to hold true and we should continue to see technological unemployment remain a monster under the bed: frequently fretted about, but never actually seen.

This is why I believe those two propositions I introduced way back at the start can’t both be true, and why I feel like the burden of proof is on anyone who believes both to explain why they think economics has suddenly stopped working.

Coda: Inequality

A related criticism of improving AI is that it could lead to ever increasing inequality. If AI drives ever increasing profits, we should expect an increasing share of these to go to the people who control AI, which presumably will be people already rich, given that the development and deployment of AI is capital intensive.

There are three reasons why I think this is a bad argument.

First, profits are a signal. When entrepreneurs see high profits in an industry, they are drawn to it. If AI leads to high profits, we should see robust competition until those profits are no higher than in any other industry. The only thing that can stop this is government regulation that prevents new entrants from grabbing profit from the incumbents. This would certainly be a problem, but it wouldn’t be a problem with AI per se.

Second, I’m increasingly of the belief that inequality in the US is rising partially because the Fed’s current low inflation regime depresses real wage growth. Whether because of fear of future wage shocks, or some other effect, monetary history suggests that higher inflation somewhat consistently leads to high wage growth, even after accounting for that inflation.

Third, I believe that inequality is a political problem amenable to political solutions. If the rich are getting too rich in a way that is leading to bad social outcomes, we can just tax them more. I’d prefer we do this by making conspicuous consumption more expensive, but really, there are a lot of ways to tax people and I don’t see any reason why we couldn’t figure out a way to redistribute some amount of wealth if inequality gets worse and worse.

(By the way, rising income inequality is largely confined to America; most other developed countries lack a clear and sustained upwards trend. This suggests that we should look to something unique to America, like a pathologically broken political system, to explain why income inequality is rising there.

There is also separately a perception of increasing inequality of outcomes among young people world-wide as rent-seeking makes goods they don’t already own increase in price more quickly than goods they do own. Conflating these two problems can make it seem that countries like Canada are seeing a rise in income inequality when they in fact are not.)

Economics, Politics, Quick Fix

Why Linking The Minimum Wage To Inflation Can Backfire

Last week I explained how poor decisions by central bankers (specifically failing to spur inflation) can make recessions much worse and lead to slower wage growth during recovery.

(Briefly: inflation during recessions reduces the real cost of payroll, cutting business expenses and making firing people unnecessary. During a recovery, it makes hiring new workers cheaper and so leads to more being hired. Because central bankers failed to create inflation during and after the great recession, many businesses are scared of raising salaries. They believe (correctly) that this will increase their payroll expenses to the point where they’ll have to lay many people off if another recession strikes. Until memories of the last recession fade or central bankers clean up their act, we shouldn’t expect wages to rise.)

Now I’d like to expand on an offhand comment I made about the minimum wage last week and explore how it can affect recovery, especially if it’s indexed to inflation.

The minimum wage represents a special case when it comes to pay cuts and layoffs in recessions. While it’s always theoretically possible to convince people to take a pay cut rather than a layoff (although in practice it’s mostly impossible), this option isn’t available for people who make the minimum wage. It’s illegal to pay them anything less. If bad times strike and business is imperiled, people making the minimum wage might have to be laid off.

I say “might”, because when central bankers aren’t proving useless, inflation can rescue people making the minimum wage from being let go. Inflation makes the minimum wage relatively less valuable, which reduces the cost of payroll relative to other inputs and helps to save jobs that pay minimum wage. This should sound familiar, because inflation helps people making the minimum wage in the exact same way it helps everyone else.

Because of increasingly expensive housing and persistently slow wage growth, some jurisdictions are experimenting with indexing the minimum wage to inflation. This means that the minimum wage rises at the same rate as the cost of living. Most notably (to me, at least), this group includes my home province of Ontario.

I think decreasing purchasing power is a serious problem (especially because of its complicated intergenerational dynamics), but I think this is one of the worst possible ways to deal with it.

When the minimum wage is tied to inflation, recessions can become especially dangerous and drawn out.

With the minimum wage rising in lockstep with inflation, any attempt to decrease payroll costs in real terms (that is to say, inflation-adjusted terms) is futile to the extent that payroll expenses go to minimum wage workers. Worse, people who were previously making above the minimum wage and might have had their jobs saved by inflation can be swept up by an increasingly high minimum wage.
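A toy calculation shows the problem. Under inflation, an un-indexed minimum wage falls in real terms (cutting real payroll costs), while an indexed one does not. The wage and inflation numbers below are illustrative only, not Ontario’s actual figures:

```python
def real_min_wage(nominal, inflation, years, indexed):
    """Inflation-adjusted minimum wage after `years` of constant `inflation`.
    If indexed, the nominal wage rises with prices and the real wage never falls."""
    price_level = (1 + inflation) ** years
    nominal_now = nominal * price_level if indexed else nominal
    return nominal_now / price_level

# $14/hour and 3% annual inflation over a two-year downturn (made-up numbers)
print(round(real_min_wage(14, 0.03, 2, indexed=False), 2))  # 13.2
print(round(real_min_wage(14, 0.03, 2, indexed=True), 2))   # 14.0
```

In this toy scenario, the un-indexed wage gives employers roughly 6% of real payroll relief on minimum-wage labour; the indexed wage gives none.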

This puts central bankers in a bind. As soon as the minimum wage is indexed to inflation, inflation is no longer a boon to all workers. Suddenly, many workers can find themselves in a “damned if you do, damned if you don’t” situation. Without inflation, they may be too expensive to keep. With it, they may be saved… until the minimum wage comes for them too. If a recession goes on long enough, only high-income workers would be spared.

In addition, minimum wage (or near-minimum wage) workers who are laid off during a period of higher inflation (and in this scenario, there will be many) will suffer comparatively more, as their savings are exhausted even more quickly.

Navigating these competing needs would be an especially tough challenge for certain central banks like the US Federal Reserve – those banks that have dual mandates to maintain stable prices and full employment. If a significant portion of the US ever indexes its minimum wage to inflation, the Fed will have no good options.

It is perhaps darkly humorous that central banks, which bear an unusually large parcel of the blame for our current slow wage growth, stand to face the greatest challenges from the policies we’re devising to make up for their past shortcomings. Unfortunately, I think a punishment of this sort is rather like cutting off our collective nose to spite our collective face.

There are simple policies we could enact to counter the risks here. Suspending any peg to inflation during years that contain recessions (in Ontario at least, the minimum wage increase due to inflation is calculated annually) would be a promising start. Wage growth after a recession could be ensured with a rebound clause, or better yet, the central bank actually doing its job properly.

I am worried about the political chances (and popularity once enacted) of any such pragmatic policy though. Many people respond to recessions with the belief that the government can make things better by passing the right legislation – forcing the economy back on track by sheer force of ink. This is rarely the case, especially because the legislation that people have historically clamoured for when unemployment is high is the sort that increases wages, not lowers them. This is a disaster when unemployment threatens because of too-high wages. FDR is remembered positively for his policy of increasing wages during the great depression, even though this disastrous decision strangled the recovery in its crib. I don’t expect any higher degree of economic literacy from people today.

To put my fears more plainly, I worry that politicians, faced with waning popularity and a nipping recession, would find allowing the minimum wage to be frozen too much of a political risk. I frankly don’t trust most politicians to follow through with a freeze, even if it’s direly needed.

Minimum wages are one example of a tradeoff we make between broad access and minimum standards. Do we try and make sure everyone who wants a job can have one, or do we make sure people who have jobs aren’t paid too little for their labour, even if that hurts the unemployed? As long as there’s scarcity, we’re going to have to struggle with how we ensure that as many people as possible have their material needs met and that involves tradeoffs like this one.

Minimum wages are just one way we can do this. Wage subsidies or a Universal Basic Income are both being discussed with increasing frequency these days.

But when we’re making these kinds of compassionate decisions, we need to look at the risks of whatever systems we choose. Proponents of indexing the minimum wage to inflation haven’t done a good job of understanding the grave risk it poses to the health of our economy and, perhaps most of all, to the very people they seek to help. In places like Ontario, where the minimum wage is already indexed to inflation, we’re going to pay for their lack of foresight next time an economic disaster strikes.

Economics, Falsifiable

You Might Want To Blame Central Banks For Poor Wage Growth

The Economist wonders why wage growth isn’t increasing, even as unemployment falls. A naïve reading of supply and demand suggests that it should, so this has become a relatively common talking point in the news, with people of all persuasions scratching their heads. The Economist does it better than most. They at least talk about slowing productivity growth and rising oil prices, instead of blaming everything on workers (for failing to negotiate) or employers (for not suddenly raising wages).

But after reading monetary policy blogs, the current lack of wage growth feels much less confusing to me. Based on this, I’d like to offer one explanation for why wages haven’t been growing. While I may not be an economist, I’ll be doing my best to pass along verbatim the views of serious economic thinkers.

Image courtesy of the St. Louis Federal Reserve Bank. Units are 1982-1984 CPI-adjusted dollars. Isn’t it rad how the US government doesn’t copyright anything it produces?

When people talk about stagnant wage growth, this is what they mean. Average weekly wages have increased from $335 a week in 1979 to $350/week in 2018 (all values are 1982 CPI-adjusted US dollars). This is a 4.5% increase, representing $780/year more (1982 dollars) in wages over the whole period. This is not a big change.

More recent wage growth also isn’t impressive. At the depth of the recession, weekly wages were $331 [1]. Since then, they’ve increased by $19/week, or 5.7%. However, wages have only increased by $5/week (1.4%) since the previous high in 2009.
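For anyone who wants to check the arithmetic, the percentages come straight from the quoted weekly figures (the $345 for the 2009 high is implied by the $5/week difference, not stated directly above):

```python
def pct_change(old, new):
    """Percentage change between two values."""
    return 100 * (new - old) / old

# Weekly wages in 1982 CPI-adjusted dollars, as quoted above
print(round(pct_change(335, 350), 1))  # 1979 to 2018:          4.5
print(round(pct_change(331, 350), 1))  # recession low to 2018: 5.7
print(round(pct_change(345, 350), 1))  # 2009 high to 2018:     1.4
```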

This doesn’t really match people’s long run expectations. Between 1948 and 1973, hourly compensation increased by 91.3%.

I don’t have an explanation for what happened to once-high wage growth between 1980 and 2008 (see The Captured Economy for what some economists think might explain it). But when it comes to the current stagnation, one factor I don’t hear enough people talking about is bad policy moves by central bankers.

To understand why the central bank affects wage growth, you have to understand something called “sticky wages“.

Wages are considered “sticky” because it is basically impossible to cut them. If companies face a choice between firing people and cutting wages, they’ll almost always choose to fire people. This is because long practice has taught them that the opposite is untenable.

If you cut everyone’s wages, you’ll face an office full of much less motivated people. Those whose skills are still in demand will quickly jump ship to companies that compensate them more in line with market rates. If you just cut the wages of some of your employees (to protect your best performers), you’ll quickly find an environment of toxic resentment sets in.

This is not even to mention that minimum wage laws make it illegal to cut the wages of many workers.

Normally the economy gets around sticky wages with inflation. This steadily erodes wages (including the minimum wage). During boom times, businesses increase wages above inflation to keep their employees happy (or lose them to other businesses that can pay more and need the labour). During busts, inflation can obviate the need to fire people by decreasing the cost of payroll relative to other inputs.

But what we saw during the last recession was persistently low inflation rates. Throughout the whole thing, the Federal Reserve Bank kept saying, in effect, “wow, really hard to up inflation; we just can’t manage to do it”.

Look at how inflation hovers just above zero for the whole great recession and associated recovery. It would have been better had it been hovering around 2%.

It’s obviously false that the Fed couldn’t trigger inflation if it wanted to. As a thought experiment, imagine that they had printed enough money to give everyone in the country $1,000,000 and then mailed it out. That would obviously cause inflation. So it is (theoretically) just a matter of scaling that back to the point where we’d only see inflation, not hyper-inflation. Why then did the Fed fail to do something that should be so easy?

According to Scott Sumner, you can’t just look at the traditional instrument the central bank has for managing inflation (the interest rate) to determine if its policies are inflationary or not. If something happens to the monetary supply (e.g. say all banks get spooked and up their reserves dramatically [2]), this changes how effective those tools will be.

After the recession, the Fed held the interest rates low and printed money. But it actually didn’t print enough money given the tightened bank reserves to spur inflation. What looked like easy money (inflationary behaviour) was actually tight money (deflationary behaviour), because there was another event constricting the money supply. If the Fed wanted inflation, it would have had to do much more than is required in normal times. The Federal Reserve never realized this, so it was always confused by why inflation failed to materialize.

This set off the perfect storm that led to the long recovery after the recession. Inflation didn’t drive down wages, so it didn’t make economic sense to hire people (or even keep as many people on staff), so aggregate demand was low, so business was bad, so it didn’t make sense to hire people (or keep them on staff)…

If real wages had properly fallen, then fewer people would have been laid off, business wouldn’t have gotten as bad, and the economy could have started to recover much more quickly (with inflation then cooling down and wage growth occurring). Scott Sumner goes so far as to say that the money shock caused by increased cash reserves may have been the cause of the great recession, not the banks failing or the housing bubble.

What does this history have to do with poor wage growth?

Well it turns out that companies have responded to the tight labour market with something other than higher wages: bonuses.

Bonuses are one-time payments that people only expect when times are good. There’s no problem cutting them in recessions.

Switching to bonuses was a calculated move for businesses, because they have lost all faith that the Federal Reserve will do what is necessary (or will know how to do what is necessary) to create the inflation needed to prevent deep recessions. When you know that wages are sticky and you know that inflation won’t save you from them, you have no choice but to pre-emptively limit wages, even when there isn’t a recession. Even when a recession feels fairly far away.

More inflation may feel like the exact opposite of what’s needed to increase wages. But we’re talking about targeted inflation here. If we could trust humans to do the rational thing and bargain for less pay now in exchange for more pay in the future whenever times are tight, then we wouldn’t have this problem and wages probably would have recovered better. But humans are humans, not automatons, so we need to make the best with what we have.

One of the purposes of institutions is to build a framework within which we can make good decisions. From this point of view, the Federal Reserve (and other central banks; the Bank of Japan is arguably far worse) have failed. Institutions failing when confronted with new circumstances isn’t as pithy as “it’s all the fault of those greedy capitalists” or “people need to grow backbones and negotiate for higher wages”, but I think it’s ultimately a more correct explanation for our current period of slow wage growth. This suggests that we’ll only see wage growth recover when the Fed commits to better monetary policy [3], or enough time passes that everyone forgets the great recession.

In either case, I’m not holding my breath.

Footnotes

[1] I’m ignoring the drop in Q2 2014, where wages fell to $330/week, because this was caused by the end of extended unemployment insurance in America. The end of that program made finding work somewhat more important for a variety of people, which led to an uptick in the supply of labour and a corresponding decrease in the market clearing wage. ^

[2] Under a fractional reserve banking system, banks can lend out most of their deposits, with only a fraction kept in reserve to cover any withdrawals customers may want to make. This effectively increases the money supply, because you can have dollars (or yen, or pesos) that are both left in a bank account and invested in the economy. When banks hold onto more of their reserves because of uncertainty, they are essentially shrinking the total money supply. ^
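The footnote’s mechanism is the textbook money multiplier. In the simplest model (ignoring cash holdings and other complications), each dollar of base money can support 1/r dollars of deposits when banks keep a fraction r in reserve, so a jump in desired reserves shrinks the effective money supply:

```python
def money_supply(base_money, reserve_ratio):
    """Upper bound on broad money under simple fractional reserve banking:
    each dollar of base money supports 1/reserve_ratio dollars of deposits."""
    return base_money / reserve_ratio

print(money_supply(100, 0.10))  # 1000.0 -- banks keep 10% in reserve
print(money_supply(100, 0.25))  # 400.0  -- spooked banks hold more back
```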

[3] Scott Sumner suggests that we should target nominal GDP instead of inflation. When economic growth slows, we’d automatically get higher inflation, as the central bank pumps out money to meet the growth target. When the market begins to give way to roaring growth and speculative bubbles, the high rate of real growth would cause the central bank to step back, tapping the brakes before the economy overheats. I wonder if limiting inflation on the upswing would also have the advantage of increasing real wages as the economy booms? ^