Economics

Ending Bailouts and Recessions: Why the Left should care about monetary economics

When I write about economics on this blog, it is quite often from the perspective of monetary economics. I’ve certainly made no secret about how important monetary economics is to my thinking, but I also have never clearly laid out the arguments that convinced me of monetarism, let alone explained its central theories. This isn’t by design. I’ve found it frustrating that many of my explanations of monetarism are relegated to disjointed footnotes. There’s almost an introduction to monetarism already on this blog, if you’re willing to piece together thirty footnotes on ten different posts.

It is obviously the case that no one wants to do this. Therefore, I’d like to try something else: a succinct explanation of monetary economics, written as clearly as possible and without any simplifying omissions or obfuscations, but free of (unexplained) jargon.

It is my hope that having recently struggled to shove this material into my own head, I’m well positioned to explain it. I especially hope to explain it to people broadly similar to me: people who are vaguely left-leaning and interested in economics as it pertains to public policy, especially people who believe that public policy should have as its principled aim ensuring a comfortable and dignified standard of living for as many as possible (especially those who have traditionally been underserved or abandoned by the government).

To begin, I should define monetarism. Monetarism is the branch of (macro-)economic thought that holds that the supply of money is a key determinant of recessions, depressions, and growth (in whole, the “business cycle”, the pattern of boom and bust that characterizes all market economies that use money).

Why does money matter?

In general, during both periods of growth and recessions, the supply of money increases. However, there have been several periods of time in America where the supply of money has decreased. Between the years of 1867 and 1963, there were eight such periods. They are: 1873-1879, 1892-1894, 1907-1908, 1920-1921, 1929-1933, 1937-1938, 1948-1949, and 1959-1960.

When I first read those dates, I got chills. Those are the dates of every single serious contraction in the covered years.

Men queueing for free soup during the Great Depression
The Great Depression appears twice! Image courtesy Wikimedia Commons.

Furthermore, while minor recessions aren’t characterized by a decrease in the supply of money, they are characterized by a decrease in the rate of the growth of the money supply. That is to say, the money supply is still increasing, but by less than it normally does.

Let’s pause for a second and talk about the growth of the money supply. Why does it normally grow?

Under the international gold standard, which existed in modern times under one form or another until President Nixon de facto ended it in 1971, money either existed as precious metal coins (specie), or paper banknotes backed by specie. If you had a dollar in your wallet, you could convert it to a set amount of gold.

As long as gold mining was economically viable (it was in the period covering 1867-1963, which we’re talking about), there was, in general, steady growth in the money supply. Each dollar’s worth of gold pulled out of the ground made it possible to expand the monetary supply by a similar amount, although I should note that not all gold that was mined was used this way (some was used, for example, to make jewelry).

Since the end of the gold standard, governments have made a commitment to keeping the money supply steadily increasing. We commonly refer to this as “printing money”, but that’s a bit of an anachronism. Central banks create money by buying assets (like government debt) using money that did not previously exist. This process is digital [1].

(We call currencies that aren’t backed by precious metals or other commodities “fiat” currencies, because their value exists, at least in part, because of government fiat.)

In both fiat and commodity currency regimes, there is a clear correlation between changes in the growth rate of the money supply and the growth rate of the economy. A decrease in money supply growth leads to a recession. An outright decrease in money supply (i.e. negative growth) leads to a depression. Even within the categories (depression and recession), there’s a correlation. The worse the decline in growth rate, the worse the downturn.

Whenever someone provides an interesting correlation, it is important to ask about causation. It does not necessarily need to be the case that a decrease in money supply is what is causing recessions. It could instead be that recessions cause the decrease in the rate of money growth, or that money supply is a lagging indicator of recessions (as unemployment is), rather than a leading one [2].

There are four reasons to suspect that money is in fact the causal factor in business cycles.

First, there is the simple fact that history suggests a causal relationship. We do not see any history of central banks (which, remember, help control the money supply) reacting to economic recessions with plans to cut the supply of money. On the other hand, we have seen recessions that started when central banks deliberately decreased the growth of the money supply, as Federal Reserve Chairman Paul Volcker did in 1980.

Second, it is possible to do correlational analyses to determine if it is more probable that something is a leading or lagging indicator. Anna Schwartz and Milton Friedman did just such an analysis on data from US recessions and depressions between 1867 and 1963 and found correlation only with money as a leading indicator.

Third, money is much better positioned to explain recessions and depressions than the alternative (Keynesian) theory which holds that recessions occur due to a fall in investment. The correlation between the amount of investment and the amount of economic growth in America (again, between 1867 and 1963) disappears when you control for changes in the money supply. The correlation between money and growth remains, even when controlling for investment.

Fourth, we do not need to be a priori skeptical of money as a key determinant of the business cycle. Money is clearly linked to the economy; it literally permeates it. The business cycle of growth followed by recession is observed only in economies that use money [3]. While it would make sense to be inherently skeptical of a theory that holds that recessions occur when not enough sewing needles are produced, we need to be much less reflexively skeptical of money. Claiming money causes the business cycle isn’t like claiming Nicolas Cage movies cause accidental drowning.

The correlation in this graph is obviously spurious because there’s no plausible mechanism connecting the two! This graph would be much more plausible if “Nicolas Cage films” was replaced with “New pool installations”. While our hypothetical graph of fatalities vs. installations wouldn’t be conclusive, it would be highly suggestive, in a way this graph just isn’t. Graph concept courtesy of Tyler Vigen, who is kind enough to make all of his spurious correlation graphs free of copyright.

These arguments are necessarily summaries; this blog post isn’t the best place to put all of the graphs and regression analyses that Schwartz and Friedman did when first formulating their theory of monetary economics. I’ve read through the analysis several times and I believe it to be sound. If you wish to pore over the regressions yourself, I recommend the paper Money and Business Cycles (1963).

If you can accept that the supply of money plays a key role in the business cycle, you’ll probably find yourself in possession of several questions, not the least of which will be “how?”. That’s a good question! But before I can explain “how”, I first need to define money, explain how banking works, and delve into the role and abilities of the central bank. It will be worth it, I promise.

What is money?

At first blush, this is a silly question. Money is one of those things we know when we see. It’s the cash in our wallets and the accounts at our banks. Except, it’s not quite that.

Money isn’t a binary category. Things can have varying amounts of “moneyness”, which is to say, can be varyingly good at accomplishing the three functions of money. These three functions are: a store of value (something that can be exchanged for goods in the future), a unit of account (something that you can use to keep track of how many goods you could buy), and a medium of exchange (something that you can give to someone in exchange for goods).

While bank deposits and cash are obviously money, there are also a variety of financial products that we tend to consider money even though they have less moneyness than cash. For example, robo-investment accounts (of the sort that my generation uses) often give the illusion of containing cash by being denominated in dollars and allowing withdrawals [4]. What makes them have less moneyness than cash is only apparent when you look under the hood and realize they contain a mixture of stocks and loans.

In a monetary context, when we say “money”, we aren’t referring to investment accounts or any other instrument that pretends to be cash [5]. Instead, we’re referring to the “money supply”, which is made up of instruments with very high moneyness and is determined by three factors:

  1. The monetary base. This is the money that the central bank issues. We see it as cash, as well as the reserves that regular banks choose to hold.
  2. The amount of reserves banks keep against deposits. Later this will show up as the deposit-reserve ratio, which is calculated by dividing total deposits by the reserves kept on hand by banks.
  3. How much of its currency the public chooses to deposit at banks. This will surface later as the deposit-currency ratio. This is calculated by dividing the value of all deposit accounts at banks by the total amount of currency in circulation.
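These three determinants combine into a single identity for the money supply; the derivation in the comments is standard, though the function and variable names here are mine. A sketch in Python, with made-up numbers:

```python
def money_supply(base, deposit_reserve, deposit_currency):
    """Money supply implied by the monetary base and the two ratios.

    With M = C + D (currency plus deposits) and base H = C + R
    (currency plus reserves), substituting in D/R and D/C and
    simplifying gives M = H * (D/R)(1 + D/C) / ((D/R) + (D/C)).
    """
    dr, dc = deposit_reserve, deposit_currency
    return base * dr * (1 + dc) / (dr + dc)

# A hypothetical economy: a $100B monetary base, banks holding $1 of
# reserves per $10 of deposits (D/R = 10), and the public holding $1
# of cash per $4 of deposits (D/C = 4).
print(money_supply(100, 10, 4))  # multiplier = 50/14, about 357.1
```

Notice that the result grows when the base grows, when banks lend more (a higher deposit-reserve ratio), or when the public deposits more (a higher deposit-currency ratio) – exactly the three levers listed above.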

What are reserves?

When you give your money to a bank, it doesn’t hold all of it in a vault somewhere. Vaults are expensive, as are guards, tellers, and account software. If banks held onto all of your cash for you, you’d have to pay them quite a lot of money for the service. Many of us would decide it’s not worth the bother and keep our cash under the proverbial mattress.

Banks realized this a long time ago. They responded like any good business – by finding a way to cut costs for the consumer.

Banks were able to cut costs by realizing that it is very rare for everyone to want all of their money back at once. If banks didn’t need to keep all of the deposited cash (or, in the olden days, gold and silver specie) on hand, they could lend some of it out and use the interest it earned to subsidize the cost of running the bank.

This led to the birth of the fractional reserve system, so named because bank reserves are a fraction of the money deposited in banks [6].

Once you have a fractional reserve system, a funny thing happens with the money supply: it is no longer made up solely by money created by the central bank. When commercial banks lend out money that people have deposited, they essentially create money. This is how the money supply ends up depending on the deposit-reserve ratio; this ratio describes how much money banks are creating.

When banks decide to lend out more of their reserves, the deposit-reserve ratio increases and the money supply increases. When banks instead decide to lend out less and sit on their cash, the deposit-reserve ratio decreases and the money supply decreases.

But it isn’t just the banks that get a vote in the money supply under a fractional reserve system. Each of us with a bank account also gets a vote. If we trust banks or if we’re enticed by a high interest rate, we hold less cash and put more money in our bank accounts (which causes the deposit-currency ratio – and therefore the money supply – to increase). If we’re instead worried about the stability of banks or if bank accounts aren’t paying very appealing interest rates, we’ll tend to hold onto our cash (decreasing the deposit-currency ratio and the total supply of money).

Holding the deposit-reserve ratio constant, the money supply increases when the deposit-currency ratio increases and decreases when the deposit-currency ratio decreases. This is because every dollar in the bank becomes, via the magic of fractional reserve banking, more than a single dollar in the money supply. Your deposit remains available to you, but most of it is also lent out to someone else.
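That “magic” can be sketched as a toy simulation of re-lending. Assuming (unrealistically) that every lent dollar is spent and redeposited, and that banks keep a fixed fraction of each deposit in reserve:

```python
def total_deposits(initial_deposit, reserve_ratio, rounds=100):
    """Trace a deposit through repeated lend-spend-redeposit cycles.

    Each round, the bank keeps `reserve_ratio` of the incoming deposit
    as reserves and lends out the rest; the borrower spends it, and the
    recipient deposits it at a bank, starting the next round.
    """
    deposits = 0.0
    incoming = initial_deposit
    for _ in range(rounds):
        deposits += incoming
        incoming *= 1 - reserve_ratio  # the portion lent back out
    return deposits

# $1,000 of cash deposited under a 10% reserve ratio ends up
# supporting roughly $10,000 of deposits (the geometric series
# 1000 / 0.10) -- the original dollars get counted many times over.
print(round(total_deposits(1000, 0.10)))  # 10000
```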

While we cannot in practice hold any ratio constant, there do exist real constraints on the deposit-reserve ratio. In the US, there are laws that require banks above a certain size to keep liquid reserves equal to at least 10% of their deposits. Many other countries lack reserve requirements per se, but do require banks to limit how leveraged they become, which acts as a de facto limit on their deposit-reserve ratio [7].

It isn’t just the government that provides restraints. Banks may have internal policies that require them to have lower (safer) deposit-reserve ratios than the government demands.

Governments and bank risk-management departments set limits on the deposit-reserve ratio in an attempt to limit bank failures, which become more likely the higher the deposit-reserve ratio gets. Banks don’t really sit on all of their reserves, or even stuff them in vaults. Instead, they normally use them to buy assets that they and the government agree are safe. Often this takes the form of government bonds, but sometimes other assets are considered suitable. Many of the mortgage-backed securities that exploded during the financial crisis were considered suitably safe, which was a major failure of the ratings agencies.

If assets banks have bought to act as their reserves lose value, they can find their deposit-reserve ratio higher than they want it to be, which often results in a sudden decline in loan activity (and therefore a decline in the growth rate of the money supply) as they try to return their financials to normal [8]. Bank failures can occur if deposit-reserve ratios get so far from normal that banks cannot afford to meet normal withdrawal requests.
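Here’s a minimal numeric illustration of that squeeze (the balance sheet is invented):

```python
def deposit_reserve_ratio(deposits, reserves):
    """Total deposits divided by the reserves on hand, as defined above."""
    return deposits / reserves

deposits, reserves = 1000, 100  # a bank targeting a ratio of 10
print(deposit_reserve_ratio(deposits, reserves))  # 10.0

# The assets held as reserves lose 20% of their value in a crash.
reserves *= 0.80
print(deposit_reserve_ratio(deposits, reserves))  # 12.5

# To return to its target ratio of 10, the bank must rebuild its
# reserves to $100 -- so it cuts back on lending, and the growth of
# the money supply slows.
```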

If people and banks have so much control over the money supply, what do central banks do?

What central banks do depends on their mandate: what the government has told them to do. The US Federal Reserve Bank has a dual mandate: to maintain a stable price level (here defined as inflation of approximately 2%) and to ensure full employment (defined as an unemployment rate of around 4.5% [9]). The Fed is actually a bit of an aberration here. Many central banks (like Canada’s) have a single mandate: “to keep inflation low, predictable, and stable”.

The Federal Reserve building in Washington
All central banks also have an unofficial mandate: have really cool looking headquarters. Image courtesy of Wikimedia Commons.

Currently, central banks achieve their mandate by manipulating interest rates. They do this with a “target rate” and “open market operations”. The target rate is the thing you hear about on TV and in the news. It’s where the central bank would like interest rates to be (here, interest rates really means “the rate at which banks lend each other money”; consumers can generally expect to make less interest on their savings and pay more when they take out loans [10]).

Note that I’ve said the target rate is where the central bank would “like” interest rates to be. It can’t just call up every bank and declare the new interest rate by fiat. Instead, it engages in those “open market operations” that I mentioned. There are two types of open market operations.

When the interest rate is above target, the central bank buys assets from banks with newly created money (to increase the supply of money banks have on hand and encourage interest rates to fall). When the interest rate is below target, the central bank sells assets to banks (to give banks something else to do with their money and thereby make them demand more interest from each other when lending).

Open market operations are normally fairly successful at keeping the interest rate reasonably close to the target rate.

Unfortunately, the target rate is only moderately effective at achieving monetary policy goals.

Remember, the correlation we identified in the first section is for the total supply of money, not for the interest rate. There’s some correlation between the two (lower interest rates can mean a faster monetary growth rate), but it isn’t exact.

When you hear people on TV say that “low interest rates mean easy money” (“easy money” means variously “high growth in the money supply” or “growth in the money supply likely to cause above-target inflation”) or “high interest rates mean tight money” (a shrinking money supply; below target inflation), you are hearing people who don’t entirely understand what they’re talking about.

The key piece of information reporters often lack is how much demand banks have for money. If banks don’t really want much more money (perhaps because the economy is tanking and there’s nothing to do with money that will justify loan repayments) then a low interest rate can still result in the money supply barely growing. It may be that the central bank target rate is quite low by historical standards (say 1%) but still not low enough to expand the money supply via loans to banks.

Put another way, while a 1% interest rate is always easier than a 2% interest rate, there’s often no way to tell a priori whether it represents easy money, which is to say, growth in the money stock. A 1% target rate can be contractionary (shrink the money stock) if banks won’t take out loans when charged it.

Conversely, a 10% interest rate could conceivably represent easy money if banks are still taking out lots of loans at that rate. Take a case where there’s some asset currently returning 20% every year. Under those circumstances, 10% interest payments are a steal and the money supply would continue to increase. It’s certainly tighter money than a 2% interest rate, but it’s not always tight money.

If you want to see if the target interest rate is inflationary or deflationary, you should look at the market’s expectations for inflation. If the market is predicting higher than target inflation, money is easy. If it’s predicting below target inflation, money is tight.
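In the US, one common market-based gauge is the “breakeven” rate: the spread between the yield on an ordinary Treasury bond and an inflation-protected one (TIPS) of the same maturity. A rough sketch – the yields are made up, and this crude classifier is my own illustration, not a standard tool:

```python
def breakeven_inflation(nominal_yield, tips_yield):
    """Market-implied expected inflation, in percent: the spread
    between a nominal bond yield and an inflation-protected one."""
    return nominal_yield - tips_yield

def money_stance(expected_inflation, target=2.0):
    """Classify policy in the spirit of the text: money is 'easy' if
    the market expects above-target inflation, 'tight' if below."""
    if expected_inflation > target:
        return "easy"
    if expected_inflation < target:
        return "tight"
    return "neutral"

# Hypothetical 10-year yields: 2.5% nominal, 1.3% on TIPS.
expected = breakeven_inflation(2.5, 1.3)  # 1.2% expected inflation
print(money_stance(expected))  # "tight": below the 2% target
```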

Central banks often collect statistics so that they can judge the effectiveness of their policy actions. If inflation is too low, they’ll lower their target rate. Too high, and they’ll raise it. Over time, if the economy is stable, central banks will correct any short run problems introduced by interest rate targeting and eventually zero in on their inflation target. Unfortunately, this leaves the door open to painful short-term failures.

How do central banks fail in the short run?

First, I want to make it clear that short-term failures are bad. While long-term price stability is definitely a good thing, short-term fluctuations in the money supply can lead to recessions (remember our solid correlation between shrinking money supply and recessions). Even relatively minor short-term failures can have consequences for hundreds of thousands or millions of people whenever recessions lead to job losses.

Central banks most commonly fail in the short run because of some sort of unexpected shock. The shocks that lead to long recessions usually originate in the financial sector; shocks elsewhere tend to be more survivable. The 2001 dot-com crash, for example, didn’t technically lead to a recession in the United States, despite the huge stock market losses [11].

This graph, from Wikimedia Commons, shows the scale of the losses in the NASDAQ Composite during the dot-com crash.

Shocks to the financial sector are unusually likely to cause recessions because of the key role that the financial sector plays in determining the monetary supply (via the deposit-reserve ratio we discussed above), as well as the key role that confidence in the financial sector plays (via the deposit-currency ratio).

When financial institutions run into trouble, they have to scramble for liquidity – for cash that they can have on hand in case people wish to withdraw their money [12] – which means they make fewer loans. Suddenly, the money multiplier that banks supply shrinks and the amount of money in the economy decreases.

Things can get even worse when the public loses faith in the banking system. If you suspect that a bank might fail, you will want to get your money out while you still can. Unfortunately, if everyone comes to believe this, then the bank will fail [13]. By design, it doesn’t have enough cash on hand to pay everyone back [14]. When this happens, it is called a “run” on the bank, or a “bank run”, and they’re thankfully becoming more and more rare. Many developed countries have ended them entirely with a program of deposit insurance. Those are the stickers you see on the door of your bank promising that your deposits will be returned to you, even if the bank fails [15].

Here’s what the stickers look like in Canada. According to the CDIC website (which is where I got this image), they must be prominently displayed.
This is one of the few images on my blog that isn’t under some sort of Creative Commons license; I’m using it here under fair use, for the purpose of commenting on the institution of deposit insurance. While we’re here, I think the prominent display requirement, while not very useful now, probably was once very important. When deposit insurance was new, you really did want people to see that their banks had insurance and feel secure. That’s part of how deposit insurance makes itself less necessary: the very fact it exists prevents most of the bank runs it would pay out for.

It’s good that we’ve stopped bank runs, because they’re incredibly deflationary (they are very good at shrinking the money supply). This is due to the deposit-currency ratio being a key determinant of the total money supply. When people stop using banks, the deposit-currency ratio falls and the money supply decreases.

Since bank failures can occur quite suddenly and can spread throughout the financial system quickly, a financial crisis can cause a deflation that is too rapid for the central bank to react to. This is especially true because modern central banks have a general tendency to fear inflation much more than many monetarists believe they should [16]. This is really unfortunate! A slow response to a decrease in the growth of the money supply (whether caused by a financial crisis or something else) can easily turn into a recession or depression, with all the attendant misery.

Okay, but can you explain how this happens?

Many individuals and companies like to keep a certain amount of money on hand, if at all possible. When they have less money than this, they economize, until they feel comfortable with the amount of money they have. When they have more money, they tend to invest it or spend it.

When the money supply increases, whether via the central bank buying bonds, the government reducing reserve requirements, or people deciding to hold more of their money at banks, banks suddenly have larger supplies of money than they would like to hold on to.

Banks then spend this money (or invest it, which is essentially giving it to someone else to spend). The people banks give the money to immediately face the same problem; they have more money than they plan on holding. What follows is a game of hot potato, as everyone in the economy tries to keep their account balances where they want them (by spending money).

If there is free capacity in the economy (e.g. factories are idle, people are unemployed, etc.), then this free capacity eventually absorbs the money (that is to say: people who had less money on hand than they desired are quite happy to grab and hold onto the extra money). If, however, there is very little free capacity in the economy (i.e. unemployment is low, production high), then this money really cannot be spent to produce anything extra. Instead, we have more money chasing the same amount of goods and services. The end result of that is prices increasing – what we call inflation – or, just as correctly, money becoming worth less.

Once prices rise, people realize they need to hold onto slightly more money and a new equilibrium is reached.

After all, the money that people are holding onto is really acting as a unit of account. It denotes how many days (or weeks, or months) of consumption they want to have easy access to. Inflation changes how much money you need to hold onto to keep the same number of days (weeks, months) of consumption [17].
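The “more money chasing the same goods” story is often formalized with the equation of exchange, M × V = P × Y (money times its velocity equals the price level times real output). The post above doesn’t use this notation, but the identity makes the full-capacity case easy to see:

```python
def price_level(money, velocity, real_output):
    """Equation of exchange, M * V = P * Y, solved for P. With velocity
    (V) and real output (Y) fixed -- no free capacity to absorb extra
    spending -- any growth in money (M) shows up entirely as inflation."""
    return money * velocity / real_output

# A hypothetical economy at full capacity: V = 5, Y = 1000 goods.
before = price_level(1000, 5, 1000)  # P = 5.0
after = price_level(1100, 5, 1000)   # 10% more money -> P = 5.5
print(f"inflation: {after / before - 1:.0%}")  # inflation: 10%
```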

Now, let’s run this whole thing in reverse. Instead of increasing the supply of money, the money supply is decreasing (or failing to grow at the expected rate). Maybe there were new reserve requirements, or a financial crash, or the central bank misjudged the amount of money it needed to create [18]. Regardless of how it happens, someone who was expecting to get some money isn’t going to get it.

This person (bank, corporation) will find themselves having less cash on hand than they hoped for and will cut back on their spending. That spending was going to someone else who was hoping for it. And suddenly the whole economy is trying to collectively spend less money, which it can’t do right away.

Instead, money becomes relatively more valuable as everyone scrambles for it. This looks like prices going down.

The price of labour (wages) might, in theory, be expected to go down, but in practice it doesn’t. It’s very emotionally taxing to try and convince employees to accept pay cuts (in addition to being bad for morale), so firms tend to prefer pay freezes, cutting back on contract labour, switching some workers to part-time, and layoffs to pay cuts [19].

Decreased growth in the money supply affects more than just workers. Factories close or sit idle. Economic capacity diminishes. Ultimately, the whole economy can spend less, if some of the economy is gone.

All of these taken together are the hallmarks of recession. We see job losses, idle capacity, and closures. And we can directly point at failures of central bank policy as the culprit.

Can changes in the growth rate of money affect anything else?

There are three interesting relationships between inflation and employment.

First, it seems that higher than expected inflation leads to increased employment. Friedman and Schwartz speculated that this occurs because corporations are better positioned to see inflation than workers. When they see evidence of inflation, they can quickly hire workers at previously normal salaries. These salaries represent something of a discount when there’s unexpected inflation, so it’s quite a steal for the companies.

Unfortunately, this effect doesn’t persist. As soon as everyone understands that inflation has increased, they bake this into their expectations of salaries and raises. Labour stops being artificially cheap, and companies may end up letting go of some of the newly hired workers.

Second, it seems that increasing money supply is correlated with increasing real wages, that is, wages that are already adjusted for inflation. While it makes sense that inflation will lead to an increase in nominal wages (that is, inflation leads to higher salaries, even if those salaries cannot buy anything extra), it’s a bit odder that it leads to higher real wages. I haven’t yet seen an explanation for why this is true, but it’s an interesting tidbit and one I hope to understand better in the future [20].

Finally, inflation can play an important role in avoiding job losses. Not all economic downturns are caused by central banks. Sometimes, the shock is external (like an earthquake, commodity crash, or a trade embargo). In these cases, certain sectors of the economy may be facing losses and may respond with layoffs (as we saw above, wage cuts are rarely considered a tenable option). However, inflation can act as an implicit wage cut and stop job losses long enough for the economy to adjust.

If salaries are kept constant while inflation continues apace (or even increases), they become relatively less expensive, all without the emotional toll that wage cuts take. This can protect jobs and engineer a “soft landing”, where a shock doesn’t lead to any large-scale job losses.

Obviously, this has to be temporary, so as not to erode the purchasing power of workers too much, but most shocks are temporary, so this is not a difficult constraint.
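The arithmetic of this implicit wage cut is simple. A sketch, with invented numbers:

```python
def real_wage(nominal_wage, price_level):
    """Purchasing power of a wage at a given price level."""
    return nominal_wage / price_level

# Suppose a firm needs labour costs to fall ~4% but can't stomach pay
# cuts. Freezing nominal wages at $30/hr while prices rise 2% a year
# gets there in about two years, with no pay-cut conversations at all.
prices = 1.00
for year in range(3):
    print(year, round(real_wage(30, prices), 2))
    prices *= 1.02
```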

Okay, what does this say about policy?

There are three main policy takeaways from this post.

First, interest rates are a bad policy indicator. It’s hard for people to break their association between easy money and low interest rates, which means monetary policy is likely to end up too tight. The best analogy I’ve heard for interest rates is a steering wheel that sometimes points the bus left when turned left and sometimes points the bus left when turned right. If you wouldn’t get in a bus driven like that, you shouldn’t be thrilled about being in an economy that’s being driven in the exact same way.

Second, a stable monetary policy is very useful. Note that stable monetary policy implies neither stable interest rates, nor stable inflation. Rather, a stable monetary policy means that everyone can have confidence that the central bank will act in predictable and productive ways. Stable monetary policy smooths out the peaks and valleys of the business cycle. It stops highs from becoming too speculative and keeps lows from leading to terrible grinding unemployment. It also lets unions and workers bargain for long-term wage increases and allows companies to grant them without being scared they’ll become unsustainable due to below-target inflation.

Third, expectations are a powerful tool. If banks believe that the central bank will print lots of money (and buy lots of assets) during a crisis, they won’t have to stop making loans, or increase their reserves. Sometimes, the mere expectation of a forceful government intervention prevents any need for the intervention (like with deposit insurance; it rarely pays out because its existence has drastically reduced the need for it). Had the Federal Reserve reacted more aggressively to the financial crisis, it may have been possible to avoid the massive bailout to financial companies.

I know that “the money supply” will never be a progressive priority. But I think it’s a thing that progressives should care about. Billionaires may not like bad monetary policy, but they aren’t the ones who feel the brunt of its failure. Those are the workers who are laid off, or the pensioners who lose their savings.

I hope I’ve made the case that in order to care about them, we need to care about how money works.

Further Reading and Sources

I drew heavily on Money in Historical Perspective, by Anna J. Schwartz when writing this blog post. The papers Money and Business Cycles (1963, with Milton Friedman), Why Money Matters (1969), The Importance of Stable Money: Theory and Evidence (1983, with Michael D. Bordo), and Real and Pseudo-Financial Crises (1986) were particularly informative.

Scott Sumner’s blog The Money Illusion is an excellent resource for current monetarist thought, while J. P. Koning’s blog Moneyness provides many excellent historical anecdotes about money.

Footnotes

Like all of my posts about economics, this one contains way too many footnotes. These footnotes are mainly clarifying anecdotes, definitions, and comments. I’ve relegated them here because they aren’t necessary for understanding this post, but I think they still can be useful.

[1] Separately, the central bank creates physical currency for day-to-day use based on the public’s demand for it. The more you go to the ATM, the more bills the central bank creates for you to withdraw. Banks return currency to the central bank every so often (either to buy assets the central bank holds, or to replace it with its digital equivalents). If fewer people want cash and ATMs are overprovisioned, banks will deposit more cash with the central bank than they, as a whole, withdraw.

Therefore, while the central bank controls the growth of the money supply, the public collectively determines the growth in the cash supply. While in general the cash supply continues to grow, this may change as more and more commerce becomes digital. Sweden has already reached peak cash and is now seeing its total cash supply decline (without a corresponding decrease in the money supply). ^

[2] That is to say, money decreases at or near the peak of a business cycle because of some delayed effect from the previous business cycle, rather than as an independent variable that will affect the current business cycle. ^

[3] Furthermore, it seems that depressions can be transmitted among countries with a common currency source (e.g. the gold standard, or the current international dollar-based payment regime), but are less likely to be transmitted outside of their home regime. China, for example, did not see a contraction during the first part of the Great Depression (it used silver as its monetary base, rather than gold) and only saw a contraction once the US began buying up silver, effectively shrinking the Chinese monetary supply. ^

[4] Although crucially, they don’t allow instant withdrawals, because they require some time to sell assets. ^

[5] We aren’t losing anything by making this distinction. The growth of products like credit cards has not affected the monetary transmission mechanism; see Has the Growth of Money Substitutes Hindered Monetary Policy? by Anna J. Schwartz and Philip Cagan, 1975. ^

[6] Financial terms referring to banks are often oddly inverted. Customer deposits with banks are termed liabilities (as the bank is liable to return them), while loans the bank has made are assets (as someone else will hopefully pay the bank back for them). If you want to see which of your friends have been reading about economics, say “I think a lot of the loans that bank made have become liabilities”. The ones who visibly twitch or look confused are the ones studying economics. ^

[7] In addition to regulation, government policy can affect the deposit-reserve ratio. In the aftermath of the 2007-2008 financial crisis, the Federal Reserve began, for the first time, to pay interest on reserves (both required reserves and excess reserves). Because banks had become very risk averse during the crisis, earning interest on excess reserves was a risk-free way to make money, and excess reserves ballooned to more than 16x required reserves by 2011. The result was a precipitous drop in the deposit-reserve ratio, which, as we discussed above, means a precipitous drop in the supply of money (which tends to lead to recessions and depressions). Scott Sumner calls this one of the greatest ever failures of monetary policy. ^
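The mechanics behind this footnote follow the textbook money-multiplier model, which can be sketched in a few lines (the reserve ratios and monetary base below are hypothetical illustrations, not Federal Reserve figures):

```python
# Simple money-multiplier model: broad money M = monetary base B divided
# by the effective reserve ratio r (required + excess reserves). A higher
# effective ratio means a smaller money supply. All numbers hypothetical.

def money_supply(monetary_base: float, reserve_ratio: float) -> float:
    """Broad money under the simple multiplier model, M = B / r."""
    return monetary_base / reserve_ratio

base = 1000.0  # hypothetical monetary base, in billions

# Banks hold only their required reserves (say 10% of deposits).
normal = money_supply(base, 0.10)  # 10000.0

# Risk-averse banks pile up excess reserves, raising the effective ratio.
crisis = money_supply(base, 0.40)  # 2500.0

print(normal, crisis)  # the money supply shrinks as reserve holdings grow
```

In this toy model, quadrupling the effective reserve ratio cuts the money supply to a quarter of its former size, which is the direction of the effect the footnote describes.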

[8] In addition to cutting back on loans, this often results in banks selling assets to try to increase the amount of cash they have on hand. If multiple banks run into trouble at once and sell similar assets at the same time, the value of those assets can drop precipitously, forcing other banks to sell and raising the possibility of multiple bank failures. This is called contagion, a word that came up a lot in the aftermath of the 2007-2008 financial crisis. ^

[9] “Full employment” is a term economists use to mean “the unemployment rate during neutral macroeconomic conditions”, which is simply the unemployment rate outside of a recession or a speculative bubble. It’s my opinion that full employment is heavily dependent on the political and cultural features of a country. Canada and America, for example, have rather different full employment rates (Canada’s allows more unemployment). I’d argue this is because Canada has more of a social safety net, which would imply that some people working in the US at “full employment” really would prefer not to work, but feel they have no other choice. This seems to fit well with empirical data. For example, when the extended unemployment benefits program ended in 2015, we simultaneously saw a drop in the unemployment rate and a decrease in wages. This is consistent with unemployed people suddenly scrambling for jobs at rather worse terms than they’d previously hoped for. ^

[10] Narrow exceptions apply and normally represent some sort of promotion or implicit sale. For example, short-term car loans on last year’s models will often be discounted below the target rate. It is generally a good idea to take a short-term loan at a below-target interest rate rather than pay a lump sum. This is not financial advice. ^

[11] Technically, for an event to qualify as a recession, there must be two quarters of successive contraction in national GDP. This never occurred during (or after) the Dot-com crash. Interestingly, the initial contraction was immediately preceded by the Federal Reserve signalling its intent to tighten monetary policy so as to rein in speculation, which it did by raising the interest rate target three times in quick succession. When markets crashed, it quickly reversed course, which may have played a role in averting a longer recession. ^

[12] This is another way of saying either “they try to bring a deposit-reserve ratio that has become too high back to normal” or “they try to shrink their deposit-reserve ratio”. In either case, the money supply is going to shrink. ^

[13] Banks, as Matt Levine likes to say, are “a magical place that transforms risky illiquid long-term loans into safe immediately accessible deposits.” He goes on to point out that “like most magic, this requires a certain suspension of disbelief”. This is pretty socially useful; we want people to trust their bank accounts, but we also want loans for things like houses and factories and college to exist. Most of the time the magic works and everything is fine. But if people stop believing in the magic, it turns out that the guy behind the curtain is a bunch of loans that you can’t call due right away. If you try to, the bank fails. ^

[14] Remember, this is generally a good thing as it makes bank services much more affordable. If banks held onto all their reserves, banking services would be very expensive and many more disadvantaged people would be unbanked. ^

[15] Before insurance, only the first people to get to the bank would get their money back. This meant that you had a strong incentive to pull your money out at the very first sign of trouble. Otherwise stable and well-run banks could be undone by a rumour, as everyone panicked and flocked to the withdrawal counter. Deposit insurance changes the game; now no one has to rush to be first, which means no one needs to withdraw at all. ^

[16] Runaway inflation is bad! But a decrease in the money supply, or a decrease in the growth rate of the money supply is bad as well. A very irresponsible program of monetary growth could trigger double digit inflation. Failure to respond promptly to a decrease in the growth rate of money will cause a recession. Unfortunately, central banks aren’t blamed for recessions (by the government or the general populace) but are blamed for inflation, so they tend to act to minimize their chance of being blamed, instead of acting to maximize social good. ^

[17] Now, in real life (as opposed to this simplified model), people probably don’t immediately spend or invest absolutely every extra dollar they get. They may expect to spend some extra in the near future and want to hold it in cash, or they may want to build up more of a cushion.

This would be an example of an inelastic relationship, where a change in one variable (money supply) leads to a less than proportional change in another (spending/investment).

Still, the more money that is dumped into the economy, the closer we get to the idealized model. If you win $100 in a lottery, you may just leave it in your bank account. But if you win $1,000,000 you’re going to be spending some of it and investing a lot of the rest. ^
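The less-than-proportional response described in this footnote can be caricatured in a few lines of code (the spending shares below are invented purely for illustration; real propensities to spend vary continuously):

```python
# Toy model of the footnote's point: the share of a windfall that gets
# spent or invested rises with the windfall's size. The share values
# here are invented for illustration, not estimated from data.

def spending_increase(windfall: float) -> float:
    """Amount of a windfall spent or invested (hypothetical shares)."""
    if windfall <= 100:
        share = 0.1  # a small prize mostly sits in the bank account
    elif windfall <= 10_000:
        share = 0.5
    else:
        share = 0.9  # a jackpot gets mostly spent or invested
    return windfall * share

print(spending_increase(100))        # highly inelastic: only 10.0 spent
print(spending_increase(1_000_000))  # nearly proportional: 900000.0
```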

[18] Remember, it is possible for the central bank to increase interest rates (create less money) without changing the monetary growth rate. If banks are creating a lot of money and the economy is already at capacity, the central bank can sometimes safely cut back on the amount of money it’s creating while still allowing adequate money to be created by banks. This is why central banks often raise interest rates during booms. It can be necessary to keep inflation from rising. ^

[19] I am not the first to wonder if co-ops might be more “recession-proof” than conventional firms. Since co-ops generally operate via profit-sharing, rather than set wages, they may exhibit less downwards nominal wage rigidity (the economic term for people’s aversion to pay cuts), which means they might weather recessions with wage cuts, rather than outright job losses. I haven’t been able to find any studies on this subject, but I’d be very interested to see if they exist. ^

[20] There is a strain of leftist thought that views Paul Volcker reining in inflation as much worse for workers than any policy of Reagan’s. I’m trying to find a better explanation of this position somewhere and plan to write about it once I do. ^

Economics, Politics, Quick Fix

Against Degrowth

Degrowth is the political platform that holds that our current economic growth is unsustainable and advocates for a radical reduction in our resource consumption. Critically, it rejects the idea that this reduction can occur while GDP continues to grow. Degrowth, per its backers, requires an actual contraction of the economy.

The Canadian New Democratic Party came perilously close to being taken over by advocates of degrowth during its last leadership race, which goes to show just how much leftist support the movement has gained since its debut in 2008.

I believe that degrowth is one of the least sensible policies being advocated for by elements of the modern left. This post collects my three main arguments against degrowth in a package that is easy to link to in other online discussions.

To my mind, advocates of degrowth fail to offer a positive vision of the transition to a less environmentally intensive economy. North America is already experiencing a resurgence in forest cover, land devoted to agriculture worldwide has been stable for the past 15 years (and will probably begin to decline by 2050), and arable land use per person continues to decrease. In Canada, CO2 emissions per capita peaked in 1979, forty years ago. Total CO2 emissions peaked in 2008 and CO2 emissions per $ of GDP have been continuously falling since 1990.

All of this is evidence of an economy slowly shifting away from stuff. For an economy to grow as people turn away from stuff, they have to consume something else, which for consumers often means services and experiences. Instead of degrowth, I think we should accelerate this process.

It is very possible to have GDP growth while rapidly decarbonizing an economy. This simply looks like people shifting their consumption from things (e.g. cars, big houses) towards experiences (locally sourced dinners, mountain biking their local trails). We can accelerate this switch by “internalizing the externality” that carbon presents, which is a fancy way of saying “imposing a tax on carbon”. Global warming is bad and when we actually make people pay that cost as part of the price tag for what they consume, they switch their consumption habits. Higher gas prices, for example, tend to push consumers away from SUVs.

A responsible decarbonisation push emphasises and supports growth in local service industries to make up for the loss of jobs in manufacturing and resource extraction. There’s a lot going for these jobs too; many of them give much more autonomy than manufacturing jobs (a strong determinant of job satisfaction) and they are, by their nature, rooted in local communities and hard to outsource.

(There are, of course, also many new jobs in clean energy that a decarbonizing and de-intensifying economy will create).

If, instead of pushing the economy towards a shift in how money is spent, you are pushing for an overall reduction in GDP, you are advocating for a decrease in industrial production without replacing it with anything. This is code for “decreasing standards of living”, or more succinctly, “a recession”. That is, after all, what we call a period of falling GDP.

This, I think is the biggest problem with advocating degrowth. Voters are liable to punish governments even for recessions that aren’t their fault. If a government deliberately causes a recession, the backlash will be fierce. It seems likely there is no way to continue the process of degrowth by democratic means once it is started.

This leaves two bad options: give over the reins of power to a government that will be reflexively committed to opposing environmentalists, or seize power by force. I hope that it is clear that both of these outcomes to a degrowth agenda would be disastrous.

Advocates of degrowth call my suggestions unrealistic, or outside of historical patterns. But this is clearly not the case; I’ve cited extensive historical data that shows an ongoing trend towards decarbonisation and de-intensification, both in North America and around the world. What is more unrealistic: to believe that the government can intensify an existing trend, or to believe that a government could be elected on a platform of triggering a recession? If anyone is guilty of pie-in-the-sky thinking here, it is not me.

Degrowth steals activist energy from sensible, effective policy positions (like a tax on carbon) that are politically attainable and likely to lead to a prosperous economy. Degrowth, as a policy, is especially easy for conservatives to dismiss and unwittingly aids them in their attempts to create a false dichotomy between environmental protection and a thriving economy.

It’s for these three reasons (the possibility of building thriving low carbon economies, the democratic problem, and the false dichotomy degrowth sets up) that I believe reasonable people have a strong responsibility to argue against degrowth, whenever it is advocated.

(For a positive alternative to degrowth, I personally recommend ecomodernism, but there are several good alternatives.)

Economics, Quick Fix

The First-Time Home Buyer Incentive is a Disaster

The 2019 Budget introduced by the Liberal government includes one of the worst policies I’ve ever seen.

The CMHC First-Time Home Buyer Incentive provides up to 10% of the purchase price of a house (5% for existing homes, 10% for new homes) to any household buying a home for the first time with an annual income up to $120,000. To qualify, the total mortgage must be less than four times the household’s yearly income and the mortgage must be insured, which means that any house costing more than $590,000 [1] is ineligible for this program. The government will recoup its 5-10% stake when the home is sold.

The cap on eligible house price is this program’s only saving grace. Everything else about it is awful.

Now I want to be clear: housing affordability is a problem, especially in urban areas. Housing costs are increasing above inflation in Canada (by about 7.5% since 2002) and many young people are finding that it is much more difficult for them to buy homes than it was for their parents and grandparents. Rising housing costs are swelling the suburbs, encouraging driving, and making the transition to a low carbon economy harder. Something needs to be done about housing affordability.

This plan is not that “something”.

This plan, like many other aspects of our society, is predicated on the idea that housing should be a “good investment”. There’s just one problem with that: for something to be a “good investment”, it must rise in price more quickly than inflation. Therefore, it is impossible for housing to be simultaneously a good investment and affordable, at least in the long term. If housing is a good investment now, it will be unaffordable for the next generation. And so on.
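To see how quickly “good investment” and “affordable” come apart, consider a quick compounding sketch (the 2% real return below is a hypothetical figure, chosen only for illustration):

```python
# If house prices beat inflation by a steady margin, the real
# (inflation-adjusted) price compounds across generations. The 2%
# excess return below is a hypothetical illustration.

def real_price_multiple(excess_return: float, years: int) -> float:
    """Growth multiple of real prices after `years` at `excess_return`."""
    return (1 + excess_return) ** years

print(round(real_price_multiple(0.02, 30), 2))  # 1.81x after one generation
print(round(real_price_multiple(0.02, 60), 2))  # 3.28x after two
```

Even a modest real return nearly doubles real prices within a single generation; housing can be a good investment or affordable in the long run, but not both.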

I’m not even sure this incentive will help anyone in the short term, because where housing supply is constrained (as it is in urban areas, where zoning prevents much new housing from being built), housing costs are determined by what people can afford. As long as there are more people who would like to live in a city than houses for them to live in, people are in competition for the limited supply of housing. If you were willing to spend some amount of your salary on a house before this incentive, you can simply afford to pay more money after the incentive. You don’t end up any better off; the money is passed on to someone else. Really, this benefit is a regressive transfer of money to already-wealthy homeowners, or a subsidy to the construction industry.

The worst part is that buying a house at an inflated valuation isn’t even irrational! As long as everyone knows that governments at all levels are committed to maintaining the status quo – where housing prices cannot be allowed to drop – housing costs will continue to rise. Why shouldn’t anyone who can afford to stick all their savings into a home do so, when they know it’s the only investment they can make that the government will protect from failing [2]?

That’s what’s truly pernicious about this plan: it locks up government money in a speculative bet on housing. Any future decline in housing costs won’t just hurt homeowners. With this incentive, it will hurt the government too [3]. This gives the federal government a strong incentive to keep housing prices high (read: unaffordable), even after some inevitable future round of austerity removes this credit. This is the opposite of what we want the federal government to be doing!

The only path towards broadly affordable housing prices is the removal of all implicit and explicit subsidies, an action that will make it clear that housing prices won’t keep rising (which will have the added benefit of ending speculation on houses, another source of unaffordability). This wouldn’t just mean scaling back policies like this one; it means that we need to get serious about zoning reform and adopt a policy like the one that has kept housing prices in Tokyo stable. Our current style of zoning is broken and accounts for an increasing percentage of housing prices in urban areas.

Zoning began as a way to enforce racial segregation. Today, it enforces not just racial, but financial segregation, forcing immigrants, the young, and everyone else who isn’t well off towards the peripheries of our cities and our societies.

Serious work towards housing affordability would strike back against zoning. This incentive provides a temporary palliative without addressing the root cause, while tying the government’s financial wellbeing to high home prices. Everyone struggling with housing affordability deserves better.

Footnotes

[1] Mortgage insurance is required for any down payment less than 20%. If you have an income of $120,000 and you max out the down payment, then the mortgage of $480,000 would be about 81% of the total price. Division tells us the total price in this case would be $592,592.59, although obviously few people will be positioned to max out the benefit. ^
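The footnote’s division can be reproduced directly (the 81% mortgage-to-price share is the footnote’s own figure):

```python
# Reproducing the footnote's arithmetic for the program's price cap.
# The 81% mortgage-to-price share is the footnote's own figure.

max_income = 120_000              # program's income cap
max_mortgage = 4 * max_income     # mortgage capped at 4x income: 480,000
mortgage_share_of_price = 0.81    # mortgage as a share of purchase price

price_cap = max_mortgage / mortgage_share_of_price
print(round(price_cap, 2))  # 592592.59, the ~$590,000 cap cited above
```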

[2] Currently, the best argument against buying a home is the chance that the government will one day wake up to the crisis it is creating and withdraw some of its subsidies. It is, in general, not wise to make heavily leveraged bets that will only pay off if subsidies are left in place, but a bet on housing has so far been an exception to this rule. ^

[3] Technically, it will hurt the Canadian Mortgage and Housing Corporation, but given that this is the crown corporation responsible for mortgage insurance, even before this policy was enacted a decline in home prices could have left it undercapitalized to the point where the government would have to step in. With this policy, a bailout in response to lower home prices seems even more likely. ^

Economics, Model

Why External Debt is so Dangerous to Developing Countries

I have previously written about how to evaluate and think about public debt in stable, developed countries. There, the overall message was that the dangers of debt were often (but not always) overhyped and cynically used by certain politicians. In a throwaway remark, I suggested the case was rather different for developing countries. This post unpacks that remark. It looks at why things go so poorly when developing countries take on debt and lays out a set of policies that I think could help developing countries that have high debt loads.

The very first difference in debt between developed and developing countries lies in the available terms of credit; developing countries get much worse terms. This makes sense, as they’re often much more likely to default on their debt. Interest scales with risk and it just is riskier to lend money to Zimbabwe than to Canada.

But interest payments aren’t the only way in which developing countries get worse terms. They are also given fewer options for the currency they take loans out in. And by fewer, I mean very few. I don’t think many developing countries are getting loans that aren’t denominated in US dollars, Euros, or, if dealing with China, Yuan. Contrast this with Canada, which has no problem taking out loans in its own currency.

When you own the currency of your debts, you can devalue it in response to high debt loads, making your debts cheaper to pay off in real terms (that is to say, your debt will be equivalent to fewer goods and services than it was before you caused inflation by devaluing your currency). This is bad for lenders. In the event of devaluation, they lose money. Depending on the severity of the inflation, it could be worse for them than a simple default would be, because they cannot even try to recover part of the loan in court proceedings.

(Devaluations don’t have to be large to reduce debt costs; they can also take the form of slightly higher inflation, such that interest is essentially nil on any loans. This is still quite bad for lenders and savers, although less likely to be worse than an actual default. The real risk comes when a country with little economic sophistication tries to engineer slightly higher inflation. It seems likely that they could drastically overshoot, with all of the attendant consequences.)
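A back-of-the-envelope sketch of how inflation erodes a fixed nominal debt (the inflation rates and debt level are illustrative, not drawn from any actual country):

```python
# Real (constant-purchasing-power) value of a fixed nominal debt after
# `years` of inflation. All numbers are illustrative.

def real_debt(nominal_debt: float, inflation: float, years: int) -> float:
    """Deflate a fixed nominal debt by cumulative inflation."""
    return nominal_debt / (1 + inflation) ** years

debt = 100.0  # billions, denominated in the country's own currency

# Slightly higher inflation quietly shrinks the burden over a decade...
print(round(real_debt(debt, 0.05, 10), 1))  # ~61.4

# ...while devaluation-driven inflation nearly wipes it out.
print(round(real_debt(debt, 0.50, 10), 1))  # ~1.7
```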

Devaluations and inflation are also politically fraught. They are especially hard on pensioners and anyone living on a fixed income – which is exactly the population most likely to make their displeasure felt at the ballot box. Lenders know that many interest groups would oppose a Canadian devaluation, but these sorts of governance controls and civil society pressure groups often just don’t exist (or are easily ignored by authoritarian leaders) in the developing world, which means devaluations can be less politically difficult [1].

Having the option to devalue isn’t the only reason why you might want your debts denominated in your own currency (after all, it is rarely exercised). Having debts denominated in a foreign currency can be very disruptive to the domestic priorities of your country.

The Canadian dollar is primarily used by Canadians to buy stuff they want [2]. The Canadian government naturally ends up with Canadian dollars when people pay their taxes. This makes the loan repayment process very simple. Canadians just need to do what they’d do anyway and as long as tax rates are sufficient, loans will be repaid.

When a developing country takes out a loan denominated in foreign currency, they need some way to turn domestic production into that foreign currency in order to make repayments. This is only possible insofar as their economy produces something that people using the loan currency (often USD) want. Notably, this could be very different than what the people in the country want.

For example, the people of a country could want to grow staple crops, like cassava or maize. Unfortunately, they won’t really be able to sell these staples for USD; there isn’t much market for either in the US. There very well could be room for the country to export bananas to the US, but this means that some of their farmland must be diverted away from growing staples for domestic consumption and towards growing cash crops for foreign consumption. The government will have an incentive to push people towards this type of agriculture, because they need commodities that can be sold for USD in order to make their loan payments [3].

As long as the need for foreign currency persists, countries can be locked into resource extraction and left unable to progress towards a more mature manufacturing- or knowledge-based economy.

This is bad enough, but there’s often greater economic damage when a country defaults on its foreign loans – and default many developing countries will, because they take on debt in a highly procyclical way [4].

A variable, indicator, or quantity is said to be procyclical if it is correlated with the overall health of an economy. We say that developing nation debt is procyclical because it tends to expand while economies are undergoing expansion. Specifically, new developing country debts seem to be correlated with many commodity prices. When commodity prices are high, it’s easier for developing countries that export them to take on debt.

It’s easy to see why this might be the case. Increasing commodity prices make the economies of developing countries look better. Exporting commodities can bring in a lot of money, which can have spillover effects that help the broader economy. As long as taxation isn’t too much of a mess, export revenues make government revenues higher. All of this makes a country look like a safer bet, which makes credit cheaper, which makes a country more likely to take it on.

Unfortunately (for resource dependent countries; fortunately for consumers), most commodity price increases do not last forever. It is important to remember that prices are a signal – and that high prices are a giant flag that says “here be money”. Persistently high prices lead to increased production, which can eventually lead to a glut and falling prices. This most recently and spectacularly happened in 2014-2015, as American and Canadian unconventional oil and gas extraction led to a crash in the global price of oil [5].

When commodity prices crash, indebted, export-dependent countries are in big trouble. They are saddled with debt that is doubly difficult to pay back. First, their primary source of foreign cash for paying off their debts is gone with the crash in commodity prices (this will look like their currency plummeting in value). Second, their domestic tax base is much lower, starving them of revenue.

Even if a country wants to keep paying its debts, a commodity crash can leave them with no choice but a default. A dismal exchange rate and minuscule government revenues mean that the money to pay back dollar denominated debts just doesn’t exist.

Oddly enough, defaulting can offer some relief from these problems; it often comes bundled with a restructuring, which results in lower debt payments. Unfortunately, this relief tends to be temporary. Unless it’s coupled with strict austerity, it tends to lead to another problem: devastating inflation.

Countries that end up defaulting on external debt are generally not living within their long-term means. Often, they’re providing a level of public services that are unsustainable without foreign borrowing, or they’re seeing so much government money diverted by corrupt officials that foreign debt is the only way to keep the lights on. One inevitable effect of a default is losing access to credit markets. Even when a restructuring can stem the short-term bleeding, there is often a budget hole left behind when the foreign cash dries up [6]. Inflation occurs because many governments with weak institutions fill this budgetary void with the printing press.

There is nothing inherently wrong with printing money, just like there’s nothing inherently wrong with having a shot of whiskey. A shot of whiskey can give you the courage to ask out the cute person at the bar; it can get you nerved up to sing in front of your friends. Or it can lead to ten more shots and a crushing hangover. Printing money is like taking shots. In some circumstances, it can really improve your life, it’s fine in moderation, but if you overdo it you’re in for a bad time.

When developing countries turn to the printing press, they often do it like a sailor turning to whiskey after six weeks of enforced sobriety.

Teachers need to be paid? Print some money. Social assistance? Print more money. Roads need to be maintained? Print even more money.

The money supply should normally expand only slightly more quickly than economic growth [7]. When it expands more quickly, prices begin to increase in lockstep. People are still paid, but the money is worth less. Savings disappear. Velocity (the speed with which money travels through the economy) increases as people try and spend money as quickly as possible, driving prices ever higher.
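The rule of thumb above follows from the quantity theory of money: with velocity roughly constant, the identity MV = PQ implies that inflation is approximately money growth minus real output growth. A minimal sketch, using hypothetical growth rates:

```python
# Quantity theory approximation: with velocity (V) held constant, the
# identity M * V = P * Q implies inflation ~ money growth - real growth.
# The growth rates below are hypothetical.

def inflation_rate(money_growth: float, real_growth: float) -> float:
    """Approximate inflation implied by constant-velocity quantity theory."""
    return money_growth - real_growth

# Money grows slightly faster than the economy: mild inflation.
print(round(inflation_rate(0.05, 0.03), 2))  # 0.02, about 2% a year

# The printing press runs flat out: prices surge almost as fast as money.
print(round(inflation_rate(0.60, 0.02), 2))  # 0.58
```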

As the currency becomes less and less valuable, it becomes harder and harder to pay for imports. We’ve already talked about how you can only buy external goods in your own currency to the extent that people outside your country have a use for your currency. No one has a use for a rapidly inflating currency. This is why Venezuela is facing shortages of food and medicine – commodities it formerly imported but now cannot afford.

The terminal state of inflation is hyperinflation, where people need to put their currency in wheelbarrows to do anything with it. Anyone who has read about Germany in the early 1920s knows that hyperinflation opens the door to demagogues and coups – to anything or anyone who can convince the people that the suffering can be stopped.

Taking into account all of this – the inflation, the banana plantations, the boom and bust cycles – it seems clear that it might be better if developing countries took on less debt. Why don’t they?

One possible explanation is the IMF (International Monetary Fund). The IMF often acts as a lender of last resort, giving countries bridging loans and negotiating new repayment terms when the prospect of default is raised. The measures that the IMF takes to help countries repay their debts have earned it many critics who rightly note that there can be a human cost to the budget cuts the IMF demands as a condition for aid [8]. Unfortunately, this is not the only way the IMF might make sovereign defaults worse. It also seems likely that the IMF represents a significant moral hazard, one that encourages risky lending to countries that cannot sustain debt loads long-term [9].

A moral hazard is any situation in which someone takes risks knowing that they won’t have to pay the penalty if their bet goes sour. Within the context of international debt and the IMF, a moral hazard arises when lenders know that they will be able to count on an IMF bailout to help them recover their principal in the event of a default.

In a world without the IMF, it is very possible that borrowing costs would be higher for developing countries, which could serve as a deterrent to taking on debt.

(It’s also possible that countries with weak institutions and bad governance will always take on unsustainable levels of debt, absent some external force stopping them. It’s for this reason that I’d prefer some sort of qualified ban on loaning to developing countries that have debt above some small fraction of their GDP over any plan that relies on abolishing the IMF in the hopes of solving all problems related to developing country debt.)

Paired with a qualified ban on new debt [10], I think there are two good arguments for forgiving much of the debt currently held by many developing countries.

First and simplest are the humanitarian reasons. Freed of debt burdens, developing countries might be able to provide more services for their citizens, or invest in infrastructure so that they could grow more quickly. Debt forgiveness would have to be paired with institutional reform and increased transparency, so that newfound surpluses aren’t diverted into the pockets of kleptocrats, which means any forgiveness policy could have the added benefit of acting as a big stick to force much needed governance changes.

Second is the doctrine of odious debts. An odious debt is any debt incurred by a despotic leader for the purpose of enriching themself or their cronies, or repressing their citizens. Under the legal doctrine of odious debts, these debts should be treated as the personal debt of the despot and wiped out whenever there is a change in regime. The logic behind this doctrine is simple: by loaning to a despot and enabling their repression, the creditors committed a violent act against the people of the country. Those people should have no obligation (legal or moral) to pay back their aggressors.

The doctrine of odious debts wouldn’t apply to every indebted developing country, but serious arguments can be made that several countries (such as Venezuela) should expect at least some reduction in their debts should the local regime change and international legal scholars (and courts) recognize the odious debt principle.

Until international progress is made on a clear list of conditions under which countries cannot take on new debt and a comprehensive program of debt forgiveness, we’re going to see the same cycle repeat over and over again. Countries will take on debt when their commodities are expensive, locking them into an economy dependent on resource extraction. Then prices will fall, default will loom, and the IMF will protect investors. Countries are left gutted, lenders are left rich, taxpayers the world over hold the bag, and poverty and misery continue – until the cycle starts over once again.

A global economy without this cycle of boom, bust, and poverty might be one of our best chances of providing stable, sustainable growth to everyone in the world. I hope one day we get to see it.

Footnotes

[1] I so wanted to get through this post without any footnotes, but here we are.

There’s one other reason why e.g. Canada is a lower risk for devaluation than e.g. Venezuela: central bank independence. The Bank of Canada is staffed by expert economists and somewhat isolated from political interference. It is unclear just how much it would be willing to devalue the currency, even if that was the desire of the Government of Canada.

Monetary policy is one lever of power that almost no developed country is willing to trust directly to politicians, a safeguard that doesn’t exist in all developing countries. Without it, devaluation and inflation risk are much higher. ^

[2] Secondarily it’s used to speculatively bet on the health of the resource extraction portion of the global economy, but that’s not like, too major of a thing. ^

[3] It’s not that the government is directly selling the bananas for USD. It’s that the government collects taxes in the local currency and the local currency cannot be converted to USD unless the country has something that USD holders want. Exchange rates are determined based on how much people want to hold one currency vs. another. A decrease in the value of products produced by a country relative to other parts of the global economy means that people will be less interested in holding that country’s currency and its value will fall. This is what happened in 2015 to the Canadian dollar; oil prices fell (while other commodity prices held steady) and the value of the dollar dropped.

Countries that are heavily dependent on the export of only one or two commodities can see wild swings in their currencies as those underlying commodities change in value. The Russian ruble, for example, is very tightly linked to the price of oil; it lost half its value between 2014 and 2016, during the oil price slump. This is a much larger depreciation than the Canadian dollar (which also suffered, but was buoyed up by Canada’s greater economic diversity). ^

[4] This section is drawn from the research of Dr. Carmen Reinhart and Dr. Kenneth Rogoff, as reported in This Time Is Different, Chapter 5: Cycles of Default on External Debt. ^

[5] This is why peak oil theories ultimately fell apart. Proponents didn’t realize that consistently high oil prices would lead to the exploitation of unconventional hydrocarbons. The initial research and development of these new sources made sense only because of the sky-high oil prices of the day. In an efficient market, profits will always eventually return to 0. We don’t have a perfectly efficient market, but it’s efficient enough that commodity prices rarely stay too high for too long. ^

[6] Access to foreign cash is gone because no one lends money to countries that just defaulted on their debts. Access to external credit does often come back the next time there’s a commodity bubble, but that could be a decade in the future. ^

[7] In some downturns, a bit of extra inflation can help lower sticky wages in real terms and return a country to full employment. My reading suggests that commodity crashes are not one of those cases. ^

[8] I’m cynical enough to believe that there is enough graft in most of these cases that human costs could be largely averted, if only the leaders of the country were forced to see their graft dry up. I’m also pragmatic enough to believe that this will rarely happen. I do believe that one positive impact of the IMF getting involved is that its status as an international institution gives it more power with which to force transparency upon debtor nations and attempt to stop diversion of public money to well-connected insiders. ^

[9] A quick search found two papers that claimed there was a moral hazard associated with the IMF and one article hosted by the IMF (and as far as I can tell, later at least somewhat repudiated by the author in the book cited in [4]) that claims there is no moral hazard. Draw what conclusions from this you will. ^

[10] I’m not entirely sure what such a ban would look like, but I’m thinking some hard cap on amount loaned based on percent of GDP, with the percent able to rise in response to reforms that boost transparency, cut corruption, and establish modern safeguards on the central bank. ^

Economics, History

Scrip Stamp Currencies Aren’t A Miracle

A friend of mine recently linked to a story about stamp scrip currencies in a discussion about Initiative Q [1]. Stamp scrip currencies are an interesting monetary technology. They’re bank notes that require weekly or monthly stamps in order to be valid. These stamps cost money (normally a few percent of the face value of the note), which imposes a cost on holding the currency. This is supposed to encourage spending and spur economic activity.
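The arithmetic of that holding cost is simple. Here’s a toy Python sketch; the 1%-per-month stamp rate is the one commonly reported for the Wörgl experiment, and the face value is made up:

```python
# Toy model of the carrying cost a stamp scrip imposes on holders.
# Assumes a 1% monthly stamp fee, the rate commonly reported for Woergl.

FACE_VALUE = 100.0    # face value of one note, in schillings (hypothetical)
MONTHLY_STAMP = 0.01  # each stamp costs 1% of face value

def yearly_holding_cost(face=FACE_VALUE, rate=MONTHLY_STAMP, months=12):
    """Total spent on stamps to keep one note valid for a year."""
    return face * rate * months

print(yearly_holding_cost())  # 12.0 - a 12% annual penalty for hoarding cash
```

Whatever the exact rate, the point is that holding the note is costly, so spending it sooner always beats spending it later.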

This isn’t just theory. It actually happened. In the Austrian town of Wörgl, a scrip currency was used to great effect for several months during the Great Depression, leading to a sudden increase in employment, money for necessary public works, and a general reversal of fortunes that had, until that point, been quite dismal. Several other towns copied the experiment and saw similar gains, until the central bank stepped in and put a stop to the whole thing.

In the version of the story I’ve read, this is held up as an example of local adaptability and creativity crushed by centralization. The moral, I think, is that we should trust local institutions instead of central banks and be on the lookout for similar local currency strategies we could adopt.

If this is all true, it seems like stamp scrip currency (or some modern version of it, perhaps applying the stamps digitally) might be a good idea. Is this the case?

My first, cheeky reaction, is “we already have this now; it’s called inflation.” My second reaction is actually the same as my first one, but has an accompanying blog post. Thus.

Currency arrangements feel natural and unchanging, which can mislead modern readers when they’re thinking about currencies used in the 1930s. We’re very used to floating fiat currencies that (in general) have a stable price level, except for 1-3% inflation every year.

This wasn’t always the case! Historically, there was very little inflation. Currency was backed by gold at a stable ratio (there were 23.2 grains of gold in a US dollar from 1834 until 1934). For a long time, growth in global gold stocks roughly tracked total growth in economic activity, so there was no long-run inflation or deflation (short-run deflation did cause several recessions, until new gold finds bridged the gap in supply).
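A quick worked example of what a fixed gold parity means in practice, using the statutory figure of 23.22 grains of pure gold per dollar and the standard 480 grains per troy ounce:

```python
# The fixed gold content of the dollar implies a fixed dollar price of gold.

GRAINS_PER_TROY_OZ = 480.0  # standard conversion
GRAINS_PER_DOLLAR = 23.22   # statutory pure-gold content of the dollar, 1834-1934

price_per_oz = GRAINS_PER_TROY_OZ / GRAINS_PER_DOLLAR
print(f"${price_per_oz:.2f} per troy ounce")  # $20.67 - the famous official gold price
```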

During the Great Depression, there was worldwide gold hoarding [2]. Countries saw their currency stocks decline or fail to keep up with the growth rate required for full economic activity (having a gold backed currency meant that the central bank had to decrease currency stocks whenever their gold stocks fell). Existing money increased in value, which meant people hoarded that too. The result was economic ruin.

In this context, a scrip currency accomplished two things. First, it immediately provided more money. The scrip currency was backed by the national currency of Austria, but it was probably using a fractional reserve system – each backing schilling might have been used to issue several stamp scrip schillings [3]. This meant that the town of Wörgl quickly had a lot more money circulating. Perhaps one of the best features of the scrip currency within the context of the Great Depression was that it was localized, which meant that its helpful effects didn’t diffuse.

(Of course, a central bank could have accomplished the same thing by printing vastly more money over a vastly larger area, but there was very little appetite for this among central banks during the Great Depression, much to everyone’s detriment. The localization of the scrip is only an advantage within the context of central banks failing to ensure adequate monetary growth; in a more normal environment, it would be a liability that prevented trade.)
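If the scrip really was fractionally backed, the expansion works like the textbook money multiplier. A minimal sketch with made-up numbers (the actual reserve ratio in Wörgl, if any, isn’t recorded in the story):

```python
# Textbook money-multiplier arithmetic for a fractionally backed scrip.

def scrip_in_circulation(reserves: float, reserve_ratio: float) -> float:
    """Maximum scrip that can circulate against a given backing reserve."""
    return reserves / reserve_ratio

# Hypothetical: 10,000 schillings of backing held at a 25% reserve ratio
print(scrip_in_circulation(10_000, 0.25))  # 40000.0 - four scrip schillings per backing schilling
```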

Second to this, the stamp scrip currency provided an incentive to spend money.

Here’s one model of job loss in recessions: people (for whatever reason; deflation is just one cause) want to spend less money (economists call this “a decrease in aggregate demand”). Businesses see the falling demand and need to take action to cut wages or else become unprofitable. Now people generally exhibit “downward nominal wage rigidity” – they don’t like pay cuts.

Furthermore, individuals don’t realize that demand is down as quickly as businesses do. They hold out for jobs at the same wage rate. This leads to unemployment [4].
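The inflation escape hatch discussed in footnote [4] works because real wages can fall while nominal wages stay flat. A toy calculation with hypothetical numbers:

```python
# Inflation lowers real labour costs without any nominal pay cut,
# sidestepping workers' resistance to visible wage cuts. Numbers are made up.

nominal_wage = 50_000.0
inflation = 0.03  # 3% rise in the price level over a year

real_wage = nominal_wage / (1 + inflation)
print(f"{real_wage:.2f}")  # 48543.69 - same paycheque, ~3% less purchasing power
```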

Stamp scrip currencies increase aggregate demand by giving people an incentive to spend their money now.

Importantly, there’s nothing magic about the particular method you choose to do this. Central banks targeting 2% inflation year on year (and succeeding for once [5]) should be just as effective as scrip currencies charging 2% of the face value every year [6]. As long as you’re charged some sort of fee for holding onto money, you’re going to want to spend it.
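To see the claimed equivalence, compare what a 100-unit note is worth after a year under each scheme. This is a one-step simplification (real inflation compounds and stamps are bought periodically), but the two costs land within pennies of each other:

```python
# 2% inflation vs. a 2% annual stamp fee: nearly identical holding costs.

note = 100.0

purchasing_power_after_inflation = note / 1.02  # ~98.04 under 2% inflation
value_after_stamp_fees = note - 0.02 * note     # 98.0 after buying 2% in stamps

print(purchasing_power_after_inflation, value_after_stamp_fees)
```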

Central bank backed currencies are ultimately preferable when the central bank is getting things right, because they facilitate longer range commerce and trade, are administratively simpler (you don’t need to go buy stamps ever), and centralization allows for more sophisticated economic monitoring and price level targeting [7].

Still, in situations where the central bank fails, stamp scrip currencies can be a useful temporary stopgap.

That said, I think a general caution is needed when thinking about situations like this. There are few times in economic history as different from the present day as the Great Depression. The very fact that there was unemployment north of 20% and many empty factories makes it miles away from the economic situation right now. I would suspect that radical interventions that were useful during the Great Depression might be useless or actively harmful right now, simply due to this difference in circumstances.

Footnotes

[1] My opinion is that their marketing structure is kind of cringey (my Facebook feed currently reminds me of all of the “Paul Allen is giving away his money” chain emails from the 90s and I have only myself to blame) and their monetary policy has two aims that could end up in conflict. On the other hand, it’s fun to watch the numbers go up and idly speculate about what you could do if it was worth anything. I would cautiously recommend Q ahead of lottery tickets but not ahead of saving for retirement. ^

[2] See “The Midas Paradox” by Scott Sumner for a more in-depth breakdown. You can also get an introduction to monetary theories of the business cycle on his blog, or listen to him talk about the Great Depression on Vimeo. ^

[3] The size of the effect talked about in the article suggests that one of three things had to be true: 1) the scrip currency was fractionally backed, 2) Wörgl had a huge bank account balance a few years into the recession, or 3) the amount of economic activity in the article is overstated. ^

[4] As long as inflation is happening like it should be, there won’t be protracted unemployment, because a slight decline in economic activity is quickly counteracted by a slightly decreased value of money (from the inflation). Note the word “nominal” up there. People are subject to something called a “money illusion”. They think in terms of prices and salaries expressed in dollar values, not in purchasing power values.

There was only a very brief recession after the dot com crash because it did nothing to affect the money supply. Inflation happened as expected and everything quickly corrected to almost full employment. On the other hand, the Great Depression lasted as long as it did because most countries were reluctant to leave the gold standard and so saw very little inflation. ^

[5] Here’s an interesting exercise. Look at this graph of US yearly inflation. Notice how inflation is noticeably higher in the years immediately preceding the Great Recession than it is in the years afterwards. Monetarist economists believe that the recession wouldn’t have lasted as long if there hadn’t been such a long period of relatively low inflation.

As always, I’m a huge fan of the total lack of copyright on anything produced by the US government. ^

[6] You might wonder if there’s some benefit to both. The answer, unfortunately, is no. Doubling them up should be roughly equivalent to just having higher inflation. There seems to be a natural rate of inflation that does a good job balancing people’s expectations for pay raises (and adequately reduces real wages in a recession) with the convenience of having stable money. Pushing inflation beyond this point can lead to a temporary increase in employment, by making labour relatively cheaper compared to other inputs.

The increase in employment ends when people adjust their expectations for raises to the new inflation rate and begin demanding increased salaries. Labour is no longer artificially cheap in real terms, so companies lay off some of the extra workers. You end up back where you started, but with inflation higher than it needs to be.

See also: “The Importance of Stable Money: Theory and Evidence” by Michael Bordo and Anna Schwartz. ^

[7] I suspect that if the stamp scrip currency had been allowed to go on for another decade or so, it would have had some sort of amusing monetary crisis. ^

Economics, Politics

Good Intentions Meet A Messy Reality In Elizabeth Warren’s Corporate Citizenship Push

[Epistemic Status: I am not an economist. I am fairly confident in my qualitative assessment, but there could be things I’ve overlooked.]

Vox has an interesting article on Elizabeth Warren’s newest economic reform proposal. Briefly, she wants to force corporations with more than $1 billion in revenue to apply for a charter of corporate citizenship.

This charter would make three far-reaching changes to how large companies do business. First, it would require businesses to consider customers, employees, and the community – instead of only their shareholders – when making decisions. Second, it would require that 40% of the seats on the board go to workers. Third, it would require 75% of shareholders and board members to authorize any corporate political activity.

(There’s also some minor corporate governance stuff around limiting the ability of CEOs to sell their stock which I think is an idea that everyone should be strongly behind, although I’d bet many CEOs might beg to differ.)

Vox characterizes this as Warren’s plan to “save capitalism”. The idea is that it would force companies to do more to look out for their workers and less to cater to short term profit maximization for Wall Street [1]. Vox suggests that it would also result in a loss of about 25% of the value of the American stock market, which they characterize as no problem for the “vast majority” of people who rely on work, rather than the stock market, for income (more on that later).

Other supposed benefits of this plan include greater corporate respect for the environment, more innovation, less corporate political meddling, and a greater say for workers in their jobs. The whole 25% decrease in the value of the stock market can also be spun as a good thing, depending on your opinions on wealth destruction and wealth inequality.

I think Vox was too uncritical in its praise of Warren’s new plan. There are some good aspects of it – it’s not a uniformly terrible piece of legislation – but I think once a full accounting of the bad, the good, and the ugly is undertaken, it becomes obvious that it’s really good that this plan will never pass Congress.

The Bad

I can see one way how this plan might affect normal workers – decreased purchasing power.

As I’ve previously explained when talking about trade, many countries will sell goods to America without expecting any goods in return. Instead, they take the American dollars they get from the sale and invest them right back in America. Colloquially, we call this the “trade deficit”, but it really isn’t a deficit at all. It’s (for many people) a really sweet deal.

Anything that makes American finance more profitable (like say a corporate tax cut) is liable to increase this effect, with the long-run consequence of making the US dollar more valuable and imports cheaper [2].

It’s these cheap imports that have enabled the incredibly wealthy North American lifestyle [3]. Spend some time visiting middle class and wealthy people in Europe and you’ll quickly realize that everything is smaller and cheaper there. Wealthy Europeans own cars, houses, kitchen appliances and TVs that are all much more modest than what even middle class North Americans are used to.

Weakening shareholder rights and slashing the value of the stock market would make the American financial market generally less attractive. This would (especially if combined with Trump or Sanders style tariffs) lead to increased domestic inflation in the United States – inflation that would specifically target goods that have been getting cheaper as long as anyone can remember.

Very basic colour TVs cost more than $1000 (unadjusted for inflation) when first introduced. Today I would expect to pay less than $600 for a 40″ 4k TV – and much of that decline has come just since 1997.

This is hard to talk about to Warren supporters as a downside, because many of them believe that we need to learn to make do with less – a position that is most common among a progressive class that conspicuously consumes experiences, not material goods [4]. Suffice it to say that many North Americans still derive pleasure and self-worth from the consumer goods they acquire, and that making these goods more expensive is likely to cause a politically expensive backlash, of the sort that America has recently become acquainted with and that progressive America is terrified of.

(There’s of course also the fact that making appliances and cars more expensive would be devastating to anyone experiencing poverty in America.)

Inflation, when used for purposes like this one, is considered an implicit tax by economists. It’s a way for the government to take money from people without the accountability (read: losing re-election) that often comes with tax hikes. Therefore, it is disingenuous to claim that this plan is free, or involves no new taxes. The taxes are hidden, is all.

There are two other problems I see straight away with this plan.

The first is that it will probably have no real impact on how corporations contribute to the political process.

The Vox article echoes a common progressive complaint, that corporate contributions to politics are based on CEO class solidarity, made solely for the benefit of the moneyed elites. I think this model is inaccurate.

It is certainly true that very wealthy individuals contribute to political campaigns in the hopes that this will lead to less taxes for them. But this isn’t really how corporations contribute. I’ve written in detail about this before, but corporations normally focus their political contributions to create opportunities for rent-seeking – that is to say, trying to get the government to give them an unfair advantage they can take all the way to the bank.

From a shareholder value model, this makes sense. Lower corporate tax rates might benefit a company, but they really benefit all companies equally. They aren’t going to do much to increase the value of any one stock relative to any other (so CEOs can’t make claims of “beating the market”). Anti-competitive laws, implicit subsidies, or even blatant government aid, on the other hand, are highly localized to specific companies (and so make the CEO look good when profits increase).

When subsidies are impossible, companies can still try and stymie legislation that would hurt their business.

This was the goal of the infamous Lawyers In Cages ad. It was run by an alliance of fast food chains and meat producers, with the goal of drying up donations to the SPCA, which had been running very successful advocacy campaigns that threatened to lead to improved animal cruelty laws, laws that would probably be used against the incredibly inhumane practice of factory farming and thereby hurt industry profits.

Here’s the thing: if you’re one of the worker representatives on the board at one of these companies, you’re probably going to approve political spending that is all about protecting the company.

The market can be a rough place and when companies get squeezed, workers do suffer. If the CEO tells you that doing some political spending will land you allies in congress who will pass laws that will protect your job and increase your paycheck, are you really going to be against it [5]?

The ugly fact is that when it comes to rent-seeking and regulation, the goals of employees are often aligned with the goals of employers. This obviously isn’t true when the laws are about the employees (think minimum wage), but I think this isn’t what companies are breaking the bank lobbying for.

The second problem is that having managers with divided goals tends to go poorly for everyone who isn’t the managers.

Being upper management in a company is a position that provides great temptations. You have access to lots of money and you don’t have that many people looking over your shoulder. A relentless focus on profit does have some negative consequences, but it also keeps your managers on task. Profit represents an easy way to hold a yardstick to management performance. When profit is low, you can infer that your managers are either incompetent, or corrupt. Then you can fire them and get better ones.

Writing in Filthy Lucre, leftist academic Joseph Heath explains how the sort of socially-conscious enterprise Warren envisions has failed before:

The problem with organizations that are owned by multiple interest groups (or “principals”) is that they are often less effective at imposing discipline upon managers, and so suffer from higher agency costs. In particular, managers perform best when given a single task, along with a single criterion for the measurement of success. Anything more complicated makes accountability extremely difficult. A manager told to achieve several conflicting objectives can easily explain away the failure to meet one as a consequence of having pursued some other. This makes it impossible for the principals to lay down any unambiguous performance criteria for the evaluation of management, which in turn leads to very serious agency problems.

In the decades immediately following the Second World War, many firms in Western Europe were either nationalized or created under state ownership, not because of natural monopoly or market failure in the private sector, but out of a desire on the part of governments to have these enterprises serve the broader public interest… The reason that the state was involved in these sectors followed primarily from the thought that, while privately owned firms pursued strictly private interests, public ownership would be able to ensure that these enterprises served the public interest. Thus managers in these firms were instructed not just to provide a reasonable return on the capital invested, but to pursue other, “social” objectives, such as maintaining employment or promoting regional development.

But something strange happened on the road to democratic socialism. Not only did many of these corporations fail to promote the public interest in any meaningful way, many of them did a worse job than regulated firms in the private sector. In France, state oil companies freely speculated against the national currency, refused to suspend deliveries to foreign customers in times of shortage, and engaged in predatory pricing. In the United States, state-owned firms have been among the most vociferous opponents of enhanced pollution controls, and state-owned nuclear reactors are among the least safe. Of course, these are rather dramatic examples. The more common problem was simply that these companies lost staggering amounts of money. The losses were enough, in several cases, to push states like France to the brink of insolvency, and to prompt currency devaluations. The reason that so much money was lost has a lot to do with a lack of accountability.

Heath goes on to explain that basically all governments were forced to abandon these extra goals long before the privatizations of the ’80s. Centre-left or centre-right, no government could tolerate the shit-show that companies with competing goals became.

This is the kind of thing Warren’s plan would bring back. We’d once again be facing managers with split priorities who would plow money into vanity projects, office politics, and their own compensation while using the difficulty of meeting all of the goals in Warren’s charter as a reason to escape shareholder lawsuits. It’s possible that this cover for incompetence could, in the long run, damage stock prices much more than any other change presented in the plan.

I also have two minor quibbles that I believe are adequately covered elsewhere, but that I want to include for completeness. First, I think this plan is inefficient at controlling executive pay compared to bracketed scaled payroll taxes. Second, if share buybacks were just a short-term profit scheme, they would always backfire. If they’re being done, it’s probably for rational reasons.

The Good

The shift in comparative advantage that this plan would precipitate within the American economy won’t come without benefits. Just as Trump’s corporate tax cut makes American finance relatively more appealing and will likely lead to increased manufacturing job losses, a reduction in deeply discounted goods from China will likely lead to job losses in finance and job gains in manufacturing.

This would necessarily have some effect on income inequality in the United States, entirely separate from the large effect on wealth inequality that any reduction in the stock market would spur. You see, finance jobs tend to be very highly paid and go to people with relatively high levels of education (the sorts of people who probably could go do something else if their sector sees problems). Manufacturing jobs, on the other hand, pay decently well and tend to go to people with much less education (and also with correspondingly fewer options).

This all shakes out to an increase in middle class wages and a decrease in the wages of the already rich [6].

(Isn’t it amusing that Warren is the only US politician with a credible plan to bring back manufacturing jobs, but doesn’t know to advertise it as such?)

As I mentioned above, we would also see fewer attacks on labour laws and organized labour spearheaded by companies. I’ll include this as a positive, although I wonder if these attacks would really stop if deprived of corporate money. I suspect that the owners of corporations would keep them up themselves.

I must also point out that Warren’s plan would certainly be helpful when it comes to environmental protection. Having environmental protection responsibilities laid out as just as important as fiduciary duty would probably make it easy for private citizens and pressure groups to take enforcement of environmental rules into their own hands via the courts, even when their state EPA is slow out of the gate. This would be a real boon to environmental groups in conservative states and probably bring some amount of uniformity to environmental protection efforts.

The Ugly

Everyone always forgets the pensions.

The 30 largest public pensions in the United States have, according to Wikipedia, a combined value of almost $3 trillion, an amount equivalent to almost 4% of all outstanding stocks in the world or 10% of the outstanding stocks in America.

Looking at the expected yields on these funds makes it pretty clear that they’re invested in the stock market (or something similarly risky [7]). You don’t get 7.5% yearly yields from buying Treasury Bills.

Assuming the 25% decrease in nominal value given in the article is true (I suspect the change in real value would be higher), Warren’s plan would create a pension shortfall of $750 billion – or about 18% of the current US Federal Budget. And that’s just the hit to the 30 largest public-sector pensions. Throw in private sector pensions and smaller pensions and it isn’t an exaggeration to say that this plan could cost pensions more than a trillion dollars.

This shortfall needs to be made up somehow – either delayed retirement, taxpayer bailouts, or cuts to benefits. Any of these will be expensive, unpopular, and easy to track back to Warren’s proposal.

Furthermore, these plans are already in trouble. I calculated the average funding ratio at 78%, meaning that there’s already 22% less money in these pensions than there needs to be to pay out benefits. A 25% haircut would bring the pensions down to about 60% funded. We aren’t talking a small or unnoticeable potential cut to benefits here. Warren’s plan requires ordinary people relying on their pensions to suffer, or it requires a large taxpayer outlay (which, you might remember, it is supposed to avoid).
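The pension arithmetic above can be checked in a few lines. The $3 trillion and 78% figures are the ones from this post, the 25% decline is Vox’s estimate, and I’m assuming, as the yields suggest, that the assets are essentially all in stocks:

```python
# Back-of-envelope pension impact of a 25% stock market decline.

pension_assets = 3.0e12  # ~$3 trillion across the 30 largest US public pensions
decline = 0.25           # assumed fall in market value (Vox's estimate)
funding_ratio = 0.78     # assets as a share of promised benefits, pre-decline

shortfall = pension_assets * decline
funding_after = funding_ratio * (1 - decline)

print(f"Shortfall: ${shortfall / 1e9:.0f} billion")  # Shortfall: $750 billion
print(f"Funded after haircut: {funding_after:.1%}")  # Funded after haircut: 58.5%
```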

This isn’t even getting into the dreadful world of municipal pensions, which are appallingly managed and chronically underfunded. If there’s a massive unfunded liability in state pensions caused by federal action, you can bet that the Feds will leave it to the states to sort it out.

And if the states sort it out rather than ignoring it, you can bet that one of the first things they’ll do is cut transfers to municipalities to compensate.

This seems to be how budget cuts always go. It’s unpopular to cut any specific program, so instead you cut your transfers to other layers of governments. You get lauded for balancing the books and they get to decide what to cut. The federal government does this to states, states do it to cities, and cities… cities are on their own.

In a worst-case scenario, Warren’s plan could create unfunded pension liabilities that states feel compelled to plug, paid for by shafting the cities. Cities will then face a double whammy: their own pension liabilities will put them in a deep hole. A drastic reduction in state funding will bury them. City pensions will be wiped out and many cities will go bankrupt. Essential services, like fire-fighting, may be impossible to provide. It would be a disaster.

The best-case scenario, of course, is just that a bunch of retirees see a huge chunk of their income disappear.

It is easy to hate on shareholder protection when you think it only benefits the rich. But that just isn’t the case. It also benefits anyone with a pension. Your pension, possibly underfunded and a bit terrified of that fact, is one of the actors pushing CEOs to make as much money as possible. It has to if you’re to retire someday.

Vox is ultimately wrong about how affected ordinary people are when the stock market declines, and because of this, its enthusiasm for this plan is deeply misplaced.

Footnotes

[1] To some extent, Warren’s plan starts out much less appealing if you (like me) don’t have “Wall Street is too focused on the short term” as a foundational assumption.

I am very skeptical of claims that Wall Street is too short-term focused. Matt Levine gives an excellent run-down of why you should be skeptical as well. The very brief version is that complaints about short-termism normally come from CEOs and it’s maybe a bad idea to agree with them when they claim that everything will be fine if we monitor them less. ^

[2] I’d love to show this in chart form, but in real life the American dollar is also influenced by things like nuclear war worries and trade war realities. Any increase in the value of the USD caused by the GOP tax cut has been drowned out by these other factors. ^

[3] Canada benefits from a similar effect, because we also have a very good financial system with strong property rights and low corporate taxes. ^

[4] They also tend to leave international flights out of lists of things that we need to stop if we’re going to handle climate change, but that’s a rant for another day. ^

[5] I largely think that Marxist style class solidarity is a pleasant fiction. To take just one example, someone working a minimum wage grocery store job is just as much a member of the “working class” as a dairy farmer. But when it comes to supply management, a policy that restricts competition and artificially increases the prices of eggs and dairy, these two individuals have vastly different interests. Many issues are about distribution of resources, prestige, or respect within a class and these issues make reasoning that assumes class solidarity likely to fail. ^

[6] These goals could, of course, be accomplished with tax policy, but this is America we’re talking about. You can never get the effect you want in America simply by legislating for it. Instead you need to set up a Rube Goldberg machine and pray for the best. ^

[7] Any decline in stocks should cause a similar decline in return on bonds over the long term, because bond yields fall when stocks fall. There’s a set amount of money out there being invested. When one investment becomes unavailable or less attractive, similarly investments are substituted. If the first investment is big enough, this creates an excess of demand, which allows the seller to get better terms. ^

Economics, Model

You Shouldn’t Believe In Technological Unemployment Without Believing In Killer AI

[Epistemic Status: Open to being convinced otherwise, but fairly confident. 11 minute read.]

As interest in how artificial intelligence will change society increases, I’ve found it revealing to note what narratives people have about the future.

Some, like the folks at MIRI and OpenAI, are deeply worried that unsafe artificial general intelligences – an artificial intelligence that can accomplish anything a person can – represent an existential threat to humankind. Others scoff at this, insisting that these are just the fever dreams of tech bros. The same news organizations that bash any talk of unsafe AI tend to believe that the real danger lies in robots taking our jobs.

Let’s express these two beliefs as separate propositions:

  1. It is very unlikely that AI and AGI will pose an existential risk to human society.
  2. It is very likely that AI and AGI will result in widespread unemployment.

Can you spot the contradiction between these two statements? In the common imagination, it would require an AI that can approximate human capabilities to drive significant unemployment. Given that humans are the largest existential risk to other humans (think thermonuclear war and climate change), how could equally intelligent and capable beings, bound to subservience, not present a threat?

People who’ve read a lot about AI or the labour market are probably shaking their head right now. This explanation for the contradiction, while evocative, is a strawman. I do believe that at most one (and possibly neither) of those propositions I listed above is true and the organizations peddling both cannot be trusted. But the reasoning is a bit more complicated than the standard line.

First, economics and history tell us that we shouldn’t be very worried about technological unemployment. There is a fallacy called “the lump of labour”, which describes the common belief that there is a fixed amount of labour in the world, with mechanical aid cutting down the amount of labour available to humans and leading to unemployment.

That this idea is a fallacy is evidenced by the fact that we’ve automated the crap out of everything since the start of the industrial revolution, yet the US unemployment rate is 3.9%. The unemployment rate hasn’t been this low since the height of the Dot-com boom, despite 18 years of increasingly sophisticated automation. Writing five years ago, when the unemployment rate was still elevated, Eliezer Yudkowsky claimed that slow NGDP growth was a more likely culprit for the slow recovery from the great recession than automation.

With the information we have today, we can see that he was exactly right. The US has had steady NGDP growth without any sudden downward spikes since mid-2014. This has corresponded to a constantly improving unemployment rate (it will obviously stop improving at some point, but if history is any guide, this will be because of a trade war or banking crisis, not automation). This improvement in the unemployment rate has occurred even as more and more industrial robots come online, the opposite of what we’d see if robots harmed job growth.

I hope this presents a compelling empirical case that the current level (and trend) of automation isn’t enough to cause widespread unemployment. The theoretical case comes from the work of David Ricardo, a 19th century British economist.

Ricardo did a lot of work in the early economics of trade, where he came up with the theory of comparative advantage. I’m going to use his original framing which applies to trade, but I should note that it actually applies to any exchange where people specialize. You could just as easily replace the examples with “shoveled driveways” and “raked lawns” and treat it as an exchange between neighbours, or “derivatives” and “software” and treat it as an exchange between firms.

The original example is rather older though, so it uses England and its close ally Portugal as the cast and wine and cloth as the goods. It goes like this: imagine that the world economy is reduced to two countries (England and Portugal), each producing two goods (wine and cloth). Portugal is uniformly more productive.

Hours of work to produce one unit

            Cloth   Wine
England       100    120
Portugal       90     80

Let’s assume people want cloth and wine in equal amounts and everyone currently consumes one unit per month. This means that the people of Portugal need to work 170 hours each month to meet their consumption needs and the people of England need to work 220 hours per month to meet their consumption needs.

(This example has the added benefit of showing another reason we shouldn’t fear productivity. England requires more hours of work each month, but in this example, that doesn’t mean less unemployment. It just means that the English need to spend more time at work than the Portuguese. The Portuguese have more time to cook and spend time with family and play soccer and do whatever else they want.)

If both countries traded with each other, treating cloth and wine as valuable in relation to how long they take to create (within that country), something interesting happens. You might think that Portugal makes a killing, because it is better at producing things. But in reality, both countries benefit roughly equally as long as they trade optimally.

What does an optimal trade look like? Well, England will focus on creating cloth and it will trade each unit of cloth it produces to Portugal for 9/8 barrels of wine, while Portugal will focus on creating wine and will trade this wine to England for 6/5 units of cloth. To meet the total demand for cloth, the English need to work 200 hours. To meet the total demand for wine, the Portuguese will have to work for 160 hours. Both countries now have more free time.

Perhaps workers in both countries are paid hourly wages, or perhaps they get bored of fun quickly. They could also continue to work the same number of hours, which would result in an extra 0.2 units of cloth and an extra 0.125 units of wine.
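The numbers in this example can be verified with a short script (the hours come straight from the table above):

```python
# Hours of labour needed to produce one unit of each good, from the
# table above. Everyone wants one unit of cloth and one of wine.
hours = {
    "England": {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90, "wine": 80},
}

# Autarky: each country produces both goods for itself.
autarky = {c: g["cloth"] + g["wine"] for c, g in hours.items()}
print(autarky)  # {'England': 220, 'Portugal': 170}

# Specialization: England makes both units of cloth, Portugal both of wine.
england_trade = 2 * hours["England"]["cloth"]    # 200 hours (saves 20)
portugal_trade = 2 * hours["Portugal"]["wine"]   # 160 hours (saves 10)

# If both instead keep working their autarky hours, the surplus is:
extra_cloth = (autarky["England"] - england_trade) / hours["England"]["cloth"]
extra_wine = (autarky["Portugal"] - portugal_trade) / hours["Portugal"]["wine"]
print(extra_cloth, extra_wine)  # 0.2 units of cloth, 0.125 units of wine
```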

This surplus could be stored up against a future need. Or it could be that people only consumed one unit of cloth and one unit of wine each because of the scarcity in those resources. Add some more production in each and perhaps people will want more blankets and more drunkenness.

What happens if there is no shortage? If people don’t really want any more wine or any more cloth (at least at the prices they’re being sold at) and the producers don’t want goods piling up, this means prices will have to fall until every piece of cloth and barrel of wine is sold (when the price drops so that this happens, we’ve found the market clearing price).

If there is a downward movement in price and if workers don’t want to cut back their hours or take a pay cut (note that because cloth and wine will necessarily be cheaper, this will only be a nominal pay cut; the amount of cloth and wine the workers can purchase will necessarily remain unchanged) and if all other costs of production are totally fixed, then it does indeed look like some workers will be fired (or have their hours cut).

So how is this an argument against unemployment again?

Well, here the simplicity of the model starts to work against us. When there are only two goods and people don’t really want more of either, it will be hard for anyone laid off to find new work. But in the real world, there are an almost infinite number of things you can sell to people, matched only by our boundless appetite for consumption.

To give just one trivial example, an oversupply of cloth and falling prices means that tailors can begin to do bolder and bolder experiments, perhaps driving more demand for fancy clothes. Some of the cloth makers can get into this market as tailors and replace their lost jobs.

(When we talk about the need for fewer employees, we assume the least productive employees will be fired. But I’m not sure if that’s correct. What if instead, the most productive or most potentially productive employees leave for greener pastures?)

Automation making some jobs vastly more efficient functions similarly. Jobs are displaced, not lost. Even when whole industries dry up, there’s little to suggest that we’re running out of jobs people can do. One hundred years ago, anyone who could afford to pay a full-time staff had one. Today, only the wealthiest do. That’s one whole field that could employ thousands or millions of people, if automation pushed on jobs such that this sector was one of the places humans had very high comparative advantage.

This points to what might be a trend: as automation makes many things cheaper and (for some people) easier, there will be many who long for a human touch (would you want the local funeral director’s job to be automated, even if it was far cheaper?). Just because computers do many tasks cheaper or with fewer errors doesn’t necessarily mean that all (or even most) people will rather have those tasks performed by computers.

No matter how you manipulate the numbers I gave for England and Portugal, you’ll still find a net decrease in total hours worked if both countries trade based on their comparative advantage. Let’s demonstrate by comparing England to a hypothetical hyper-efficient country called “Automatia”.

Hours of work to produce one unit

             Cloth   Wine
England        100    120
Automatia        2      1

Automatia is 50 times as efficient as England when it comes to producing cloth and 120 times as efficient when it comes to producing wine. Its citizens need to spend 3 hours tending the machines to get one unit of each, compared to the 220 hours the English need to toil.

If they trade with each other, with England focusing on cloth and Automatia focusing on wine, then there will still be a drop of 21 hours of labour-time. England will save 20 hours by shifting production from wine to cloth, and Automatia will save one hour by switching production from cloth to wine.

Interestingly, Automatia saved a greater percentage of its time than either Portugal or England did, even though Automatia is vastly more efficient. This shows something interesting in the underlying math. The percent of their time a person or organization saves engaging in trade isn’t related to any ratio in production speeds between it and others. Instead, it’s solely determined by the productivity ratio between its most productive tasks and its least productive ones.
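A quick script makes the comparison concrete (again using the hours from the second table):

```python
# Same comparative-advantage arithmetic, with Automatia in place of
# Portugal (hours from the second table).
hours = {
    "England": {"cloth": 100, "wine": 120},
    "Automatia": {"cloth": 2, "wine": 1},
}

autarky = {c: g["cloth"] + g["wine"] for c, g in hours.items()}
trade = {
    "England": 2 * hours["England"]["cloth"],     # specializes in cloth
    "Automatia": 2 * hours["Automatia"]["wine"],  # specializes in wine
}

for country in hours:
    saved = autarky[country] - trade[country]
    print(country, saved, f"{saved / autarky[country]:.0%}")
# England saves 20 of 220 hours (9%); Automatia saves 1 of 3 (33%),
# a bigger share of its time despite being far more efficient.
```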

Now, we can’t always reason in percentages. At a certain point, people expect to get the things they paid for, which can make manufacturing times actually matter (just ask anyone who’s had to wait for a Kickstarter project which was scheduled to deliver in February – right when almost all manufacturing in China stops for the Chinese New Year and the unprepared see their schedules slip). When we’re reasoning in absolute numbers, we can see that the absolute amount of time saved does scale with the difference in efficiency between the two traders. Here, 21 hours were saved, 30% fewer than the 30 hours England and Portugal saved.

When you’re already more efficient, there’s less time for you to save.

This decrease in saved time did not hit our market participants evenly. England saved just as much time as it would trading with Portugal (which shows that the change in hours worked within a country or by an individual is entirely determined by the labour difference between low-advantage and high-advantage domestic sectors), while the more advanced participant (Automatia) saved 9 fewer hours than Portugal.

All of this is to say: if real live people are expecting real live goods and services with a time limit, it might be possible for humans to be displaced in almost all sectors by automation. Here, human labour would become entirely ineligible for many tasks or the bar to human entry would exclude almost all. For this to happen, AI would have to be vastly more productive than us in almost every sector of the economy and humans would have to prefer this productivity or other ancillary benefits of AI over any value that a human could bring to the transaction (like kindness, legal accountability, or status).

This would definitely be a scary situation, because it would imply AI systems that are vastly more capable than any human. Given that this is well beyond our current level of technology and that Moore’s law, which has previously been instrumental in technological progress, is drying up, we would almost certainly need to use weaker AI to design these sorts of systems. There’s no evidence that merely human performance in automating jobs will get us anywhere close to such a point.

If we’re dealing with recursively self-improving artificial agents, the risk is less “they will get bored of their slave labour and throw off the yoke of human oppression” and more “AI will be narrowly focused on optimizing for a specific task and will get better and better at optimizing for this task to the point that we will all be killed when they turn the world into a paperclip factory”.

There are two reasons AI might kill us as part of their optimisation process. The first is that we could be a threat. Any hyper-intelligent AI monomaniacally focused on a goal could realize that humans might fear and attack it (or modify it to have different goals, which it would have to resist, given that a change in goals would conflict with its current goals) and decide to launch a pre-emptive strike. The second reason is that such an AI could wish to change the world’s biosphere or land usage in such a way as would be inimical to human life. If all non-marginal land was replaced by widget factories and we were relegated to the poles, we would all die, even if no ill will was intended.

It isn’t enough to just claim that any sufficiently advanced AI would understand human values. How is this supposed to happen? Even humans can’t enumerate human values and explain them particularly well, let alone express them in the sort of decision matrix or reinforcement environment that we currently use to create AI. It is not necessarily impossible to teach an AI human values, but all evidence suggests it will be very very difficult. If we ignore this challenge in favour of blind optimization, we may someday find ourselves converted to paperclips.

It is of course perfectly acceptable to believe that AI will never advance to the point where that becomes possible. Maybe you believe that AI gains have been solely driven by Moore’s Law, or that true artificial intelligence is impossible. I’m not sure this viewpoint isn’t correct.

But if AI will never be smart enough to threaten us, then I believe the math should work out such that it is impossible for AI to do everything we currently do or can ever do better than us. Absent such overpoweringly advanced AI, the Ricardo comparative advantage principles should continue to hold true and we should continue to see technological unemployment remain a monster under the bed: frequently fretted about, but never actually seen.

This is why I believe those two propositions I introduced way back at the start can’t both be true and why I feel like the burden of proof is on anyone believing in both to explain why they believe that economics has suddenly stopped working.

Coda: Inequality

A related criticism of improving AI is that it could lead to ever increasing inequality. If AI drives ever increasing profits, we should expect an increasing share of these to go to the people who control AI, which presumably will be people already rich, given that the development and deployment of AI is capital intensive.

There are three reasons why I think this is a bad argument.

First, profits are a signal. When entrepreneurs see high profits in an industry, they are drawn to it. If AI leads to high profits, we should see robust competition until those profits are no higher than in any other industry. The only thing that can stop this is government regulation that prevents new entrants from grabbing profit from the incumbents. This would certainly be a problem, but it wouldn’t be a problem with AI per se.

Second, I’m increasingly of the belief that inequality in the US is rising partially because the Fed’s current low inflation regime depresses real wage growth. Whether because of fear of future wage shocks, or some other effect, monetary history suggests that higher inflation somewhat consistently leads to high wage growth, even after accounting for that inflation.

Third, I believe that inequality is a political problem amenable to political solutions. If the rich are getting too rich in a way that is leading to bad social outcomes, we can just tax them more. I’d prefer we do this by making conspicuous consumption more expensive, but really, there are a lot of ways to tax people and I don’t see any reason why we couldn’t figure out a way to redistribute some amount of wealth if inequality gets worse and worse.

(By the way, rising income inequality is largely confined to America; most other developed countries lack a clear and sustained upwards trend. This suggests that we should look to something unique to America, like a pathologically broken political system to explain why income inequality is rising there.

There is also separately a perception of increasing inequality of outcomes among young people world-wide as rent-seeking makes goods they don’t already own increase in price more quickly than goods they do own. Conflating these two problems can make it seem that countries like Canada are seeing a rise in income inequality when they in fact are not.)

Economics, Model

The Biggest Tech Innovation is Selling Club Goods

Economists normally split goods into four categories:

  • Public goods are non-excludable (so anyone can access them) and non-rival (I can use them as much as I want without limiting the amount you can use them). Broadcast television, national defense, and air are all public goods.
  • Common-pool resources are non-excludable but rival (if I use them, you will have to make do with less). Iron ore, fish stocks, and grazing land are all common pool resources.
  • Private goods are excludable (their access is controlled or limited by pricing or other methods) and rival. My clothes, computer, and the parking space I have in my lease but never use are all private goods.
  • Club goods are excludable but (up to a certain point) non-rival. Think of the swimming pool in an apartment building, a large amusement park, or cellular service.

Club goods are perhaps the most interesting class of goods, because they blend properties of the three better understood classes. They aren’t open to all, but they are shared among many. They can be overwhelmed by congestion, but up until that point, it doesn’t really matter how many people are using them. Think of a gym; as long as there’s at least one free machine of every type, it’s no less convenient than your home.

Club goods offer cost savings over private goods, because you don’t have to buy something that mostly sits unused (again, think of gym equipment). People other than you can use it when it would otherwise sit around and those people can help you pay the cost. It’s for this reason that club goods represent an excellent opportunity for the right entrepreneur to turn a profit.

I currently divide tech start-ups into three classes. There are the Googles of the world, who use network effects or big data to sell advertising more effectively. There are companies like the one I work for that take advantage of modern technology to do things that were never possible before. And then there are those that are slowly and inexorably turning private goods into club goods.

I think this last group of companies (which include Netflix, Spotify, Uber, Lyft, and Airbnb) may be the ones that ultimately have the biggest impact on how we order our lives and what we buy. To better understand how these companies are driving this transformation, let’s go through them one by one, then talk about what it could all mean.

Netflix

When I was a child, my parents bought a video cassette player, then a DVD player, then a Blu-ray player. We owned a hundred or so video cassettes, mostly whatever movies my brother and I were obsessed with enough to want to own. Later, we found a video rental store we liked and mostly started renting movies. We never owned more than 30 DVDs and 20 Blu-rays.

Then I moved out. I have bought five DVDs since – they came as a set from Kickstarter. Anything else I wanted to watch, I got via Netflix. A few years later, the local video rental store closed down and my parents got an AppleTV and a Netflix of their own.

Buying a physical movie means buying a private good. Video rental stores can be accurately modeled as a type of club good, because even if the movie you want is already rented out, there’s probably one that you want to watch almost as much that is available. This is enough to make them approximately non-rival, while the fact that it isn’t free to rent a movie means that rented videos are definitely excludable.

Netflix represents the next evolution in this business model. As long as the Netflix engineers have done their job right, there’s no amount of watching movies I can do that will prevent you from watching movies. The service is almost truly non-rival.

Movie studios might not feel the effects of Netflix turning a large chunk of the market for movies into one focused on club goods; they’ll still get paid by Netflix. But the switch to Netflix must have been incredibly damaging for the physical media and player manufacturers. When everyone went from cassettes to DVDs or DVDs to Blu-rays, there was still a market for their wares. Now, that market is slowly and inexorably disappearing.

This isn’t just a consequence of technology. The club good business model offers such amazing cost savings that it drove a change in which technology was dominant. When you bought a movie, it would spend almost all of its life sitting on a shelf. Now Netflix acts as your agent, buying movies (or rather, their rights) and distributing them such that they’re always being played and almost never sitting on the shelf.

Spotify

Spotify is very similar to Netflix. Previously, people bought physical cassettes (I’m just old enough that I remember making mix tapes from the radio). Then they switched to CDs. Then it was MP3s bought online (or, almost more likely, pirated online). But even pirating music is falling out of favour these days. Apple, Google, Amazon, and Spotify are all competing to offer unlimited music streaming to customers.

Music differs from movies in that it has a long tradition of being a public good – via broadcast radio. While that hasn’t changed yet (radio is still going strong), I do wonder how much longer the public option for music will exist, especially given the trend away from private cars that I think companies like Uber and Lyft are going to (pardon the pun) drive.

Uber and Lyft

I recently thought about buying a car. I was looking at the all-electric Kia Soul, which has a huge government rebate (for a little while yet) and financing terms that equate to negative real interest. Despite all these advantages, it turns out that when you sit down and run the numbers, it would still be cheaper for me to use Uber and Lyft to get everywhere.

We are starting to see the first, preliminary (and possibly illusory) evidence that Uber and Lyft are causing the public to change their preference away from owning cars.

A car you’ve bought is a private good, while Uber and Lyft are clearly club goods. Surge pricing means that there are basically always enough drivers for everyone who wants to go anywhere using the system.

When you buy a car, you’re signing up for it to sit around useless for almost all of its life. This is similar to what happens when you buy exercise equipment, which means the logic behind cars as a club good is just as compelling as the logic behind gyms. Previously, we hadn’t been able to share cars very efficiently because of technological limitations. Dispatching a taxi, especially to an area outside of a city centre, was always spotty, time consuming and confusing. Car-pooling to work was inconvenient.

As anyone who has used a modern ride-sharing app can tell you, inconvenient is no longer an apt descriptor.

There is a floor on how few cars we can get by on. To avoid congestion in a club good, you typically have to provision for peak load. Luckily, peak load (for anything that can sensibly be turned into a club good) always requires fewer resources than would be needed if everyone went out and bought the shared good themselves.

Even “just” substantially decreasing the absolute number of cars out there will be incredibly disruptive to the automotive sector if they don’t correctly predict the changing demand for their products.

It’s also true that increasing the average utilisation of cars could change how our cities look. Parking lots are necessary when cars are a private good, but are much less useful when they become club goods. It is my hope that malls built in the middle of giant parking moats look mighty silly in twenty years.

Airbnb

Airbnb is the most ambiguous example I have here. As originally conceived, it would have driven the exact same club good transformation as the other services listed. People who were on vacation or otherwise out of town would rent out their houses to strangers, increasing the utilisation of housing and reducing the need for dedicated hotels to be built.

Airbnb is sometimes used in this fashion. It’s also used to rent out extra rooms in an otherwise occupied house, which accomplishes almost the same thing.

But some amount of Airbnb usage is clearly taking place in houses or condos that otherwise would have been rental stock. When used in this way, it’s taking advantage of a regulatory grey zone to undercut hotel pricing. Insofar as this might result in a longer-term change towards regulations that are generally cheaper to comply with, this will be good for consumers, but it won’t really be transformational.

The great promise of club goods is that they might lead us to use less physical stuff overall, because where previously each person would buy one of a thing, now only enough units must be purchased to satisfy peak demand. If Airbnb is just shifting around where people are temporary residents, then it won’t be an example of the broader benefits of club goods (even if it provides other benefits to its customers).

When Club Goods Eat The Economy

In every case (except potentially Airbnb) above, I’ve outlined how the switch from private goods to club goods is resulting in less consumption. For music and movies, it is unclear if this switch is what is providing the primary benefit. My intuition is that the club good model actually did change consumption patterns for physical copies of movies (because my impression is that few people ever did online video rentals via e.g. iTunes), whereas the MP3 revolution was what really shrunk the footprint of music media.

This switch in consumption patterns and corresponding decrease in the amount of consumption that is necessary to satisfy preferences is being primarily driven by a revolution in logistics and bandwidth. The price of club goods has always compared favourably with that of private goods. The only thing holding people back was inconvenience. Now programmers are steadily figuring out how to make that inconvenience disappear.

On the other hand, increased bandwidth has made it easier to turn any sort of digitizable media into a club good. There’s an old expression among programmers: never underestimate the bandwidth of a station wagon full of cassettes (or CDs, or DVDs, or whatever physical storage media one grew up with) hurtling down the highway. For a long time, the only way to get a 1GB movie to a customer without an appallingly long buffering period was to physically ship it (on a 56kbit/s connection, this movie would take one day and fifteen hours to download, while the aforementioned station wagon with 500 movies would take 118 weeks to download).
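The quoted download times are easy to verify (assuming a 1 GB movie and a 56 kbit/s modem, as in the text):

```python
# Back-of-envelope check of the download times quoted above, assuming
# a 1 GB movie and a 56 kbit/s modem connection.
movie_bits = 1e9 * 8        # one 1 GB movie, in bits
link_bps = 56e3             # 56 kbit/s

movie_hours = movie_bits / link_bps / 3600
print(movie_hours)          # ~39.7 hours: one day and about sixteen hours

# 500 movies (the station wagon's cargo) over the same link:
wagon_weeks = 500 * movie_bits / link_bps / (3600 * 24 * 7)
print(wagon_weeks)          # ~118 weeks
```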

Change may start out slow, but I expect to see it accelerate quickly. My generation is the first to have had the internet from a very young age. The generation after us will be the first unable to remember a time before it. We trust apps like Uber and Airbnb much more than our parents, and our younger siblings trust them even more than us.

While it was only kids who trusted the internet, these new club good businesses couldn’t really affect overall economic trends. But as we come of age and start to make major economic decisions, like buying houses and cars, our natural tendency to turn towards the big tech companies and the club goods they peddle will have ripple effects on an economy that may not be prepared for it.

When that happens, there’s only one thing that is certain: there will be yet another deluge of newspaper columns talking about how millennials are destroying everything.

Economics, Politics, Quick Fix

Why Linking The Minimum Wage To Inflation Can Backfire

Last week I explained how poor decisions by central bankers (specifically failing to spur inflation) can make recessions much worse and lead to slower wage growth during recovery.

(Briefly: inflation during recessions reduces the real cost of payroll, cutting business expenses and making firing people unnecessary. During a recovery, it makes hiring new workers cheaper and so leads to more being hired. Because central bankers failed to create inflation during and after the great recession, many businesses are scared of raising salaries. They believe (correctly) that this will increase their payroll expenses to the point where they’ll have to lay many people off if another recession strikes. Until memories of the last recession fade or central bankers clean up their act, we shouldn’t expect wages to rise.)

Now I’d like to expand on an offhand comment I made about the minimum wage last week and explore how it can affect recovery, especially if it’s indexed to inflation.

The minimum wage represents a special case when it comes to pay cuts and layoffs in recessions. While it’s always theoretically possible to convince people to take a pay cut rather than a layoff (although in practice it’s mostly impossible), this option isn’t available for people who make the minimum wage. It’s illegal to pay them anything less. If bad times strike and business is imperiled, people making the minimum wage might have to be laid off.

I say “might”, because when central bankers aren’t proving useless, inflation can rescue people making the minimum wage from being let go. Inflation makes the minimum wage relatively less valuable, which reduces the cost of payroll relative to other inputs and helps to save jobs that pay minimum wage. This should sound familiar, because inflation helps people making the minimum wage in the exact same way it helps everyone else.

Because of increasingly expensive housing and persistently slow wage growth, some jurisdictions are experimenting with indexing the minimum wage to inflation. This means that the minimum wage rises at the same rate as the cost of living. Most notably (to me, at least), this group includes my home province of Ontario.

I think decreasing purchasing power is a serious problem (especially because of its complicated intergenerational dynamics), but I think this is one of the worst possible ways to deal with it.

When the minimum wage is tied to inflation, recessions can become especially dangerous and drawn out.

With the minimum wage rising in lockstep with inflation, any attempt to decrease payroll costs in real terms (that is to say: inflation-adjusted terms) is futile to the extent that payroll expenses go to minimum wage workers. Worse, people who were previously making above the minimum wage and might have had their jobs saved by inflation can be swept up by an ever-rising minimum wage.
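To make the contrast concrete, here’s a toy comparison with made-up numbers (a $14/hour minimum wage and five years of 2% inflation, neither taken from any real jurisdiction): a frozen nominal minimum wage falls in real terms, while an indexed one never does.

```python
# Toy comparison (all numbers hypothetical): a $14/hour minimum wage over
# five years of 2% annual inflation. A frozen nominal wage gets cheaper in
# real terms; a wage indexed to inflation keeps its full real cost.
WAGE = 14.00       # hypothetical hourly minimum wage
INFLATION = 0.02   # 2% annual inflation
YEARS = 5

price_level = (1 + INFLATION) ** YEARS              # cumulative price rise
frozen_real = WAGE / price_level                    # frozen nominal wage, in real terms
indexed_real = (WAGE * price_level) / price_level   # indexed wage keeps pace exactly

print(f"Frozen:  ${frozen_real:.2f}/h real ({frozen_real / WAGE - 1:+.1%})")
print(f"Indexed: ${indexed_real:.2f}/h real ({indexed_real / WAGE - 1:+.1%})")
```

The roughly 9% real saving in the frozen case is exactly the margin an employer loses when the minimum wage is pegged to inflation.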

This puts central bankers in a bind. As soon as the minimum wage is indexed to inflation, inflation is no longer a boon to all workers. Suddenly, many workers can find themselves in a “damned if you do, damned if you don’t” situation. Without inflation, they may be too expensive to keep. With it, they may be saved… until the minimum wage comes for them too. If a recession goes on long enough, only high-income workers would be spared.

In addition, minimum wage (or near-minimum wage) workers who are laid off during a period of higher inflation (and in this scenario, there will be many) will suffer comparatively more, as their savings are exhausted even more quickly.

Navigating these competing needs would be an especially tough challenge for certain central banks like the US Federal Reserve – those banks that have dual mandates to maintain stable prices and full employment. If a significant portion of the US ever indexes its minimum wage to inflation, the Fed will have no good options.

It is perhaps darkly humorous that central banks, which bear an unusually large share of the blame for our current slow wage growth, stand to face the greatest challenges from the policies we’re devising to make up for their past shortcomings. Unfortunately, I think a punishment of this sort is rather like cutting off our collective nose to spite our collective face.

There are simple policies we could enact to counter the risks here. Suspending any peg to inflation during years that contain recessions (in Ontario at least, the minimum wage increase due to inflation is calculated annually) would be a promising start. Wage growth after a recession could be ensured with a rebound clause, or better yet, the central bank actually doing its job properly.

I am worried about the political chances (and popularity once enacted) of any such pragmatic policy, though. Many people respond to recessions with the belief that the government can make things better by passing the right legislation – forcing the economy back on track by sheer force of ink. This is rarely the case, especially because the legislation people have historically clamoured for when unemployment is high is the sort that raises wages, not lowers them. That is a disaster when unemployment threatens precisely because wages are too high. FDR is remembered positively for his policy of increasing wages during the great depression, even though this disastrous decision strangled the recovery in its crib. I don’t expect any higher degree of economic literacy from people today.

To put my fears more plainly, I worry that politicians, faced with waning popularity and a nipping recession, would find allowing the minimum wage to be frozen too much of a political risk. I frankly don’t trust most politicians to follow through with a freeze, even if it’s direly needed.

Minimum wages are one example of a tradeoff we make between broad access and minimum standards. Do we try to make sure everyone who wants a job can have one, or do we make sure people who have jobs aren’t paid too little for their labour, even if that hurts the unemployed? As long as there’s scarcity, we’re going to have to struggle with how we ensure that as many people as possible have their material needs met, and that involves tradeoffs like this one.

Minimum wages are just one way we can do this. Wage subsidies or a Universal Basic Income are both being discussed with increasing frequency these days.

But when we’re making these kinds of compassionate decisions, we need to look at the risks of whatever systems we choose. Proponents of indexing the minimum wage to inflation haven’t done a good job of understanding the grave risk it poses to the health of our economy and, perhaps most of all, to the very people they seek to help. In places like Ontario, where the minimum wage is already indexed to inflation, we’re going to pay for their lack of foresight the next time an economic disaster strikes.

Economics, Falsifiable

You Might Want To Blame Central Banks For Poor Wage Growth

The Economist wonders why wages aren’t growing faster, even as unemployment falls. A naïve reading of supply and demand suggests that they should, so this has become a relatively common talking point in the news, with people of all persuasions scratching their heads. The Economist does it better than most. They at least talk about slowing productivity growth and rising oil prices, instead of blaming everything on workers (for failing to negotiate) or employers (for not suddenly raising wages).

But after reading monetary policy blogs, the current lack of wage growth feels much less confusing to me. Based on this, I’d like to offer one explanation for why wages haven’t been growing. While I may not be an economist, I’ll be doing my best to pass along verbatim the views of serious economic thinkers.

Image courtesy of the St. Louis Federal Reserve Bank. Units are 1982-1984 CPI-adjusted dollars. Isn’t it rad how the US government doesn’t copyright anything it produces?

When people talk about stagnant wage growth, this is what they mean. Average weekly wages have increased from $335 a week in 1979 to $350/week in 2018 (all values are 1982 CPI-adjusted US dollars). This is a 4.5% increase, representing $780/year more (1982 dollars) in wages over the whole period. This is not a big change.

More recent wage growth also isn’t impressive. At the depth of the recession, weekly wages were $331 [1]. Since then, they’ve increased by $19/week, or 5.7%. However, wages have only increased by $5/week (1.4%) since the previous high in 2009.
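For the record, the percentage figures above follow directly from the weekly dollar amounts (the $345 previous high is inferred here from the $5 difference; all values are 1982 CPI-adjusted dollars):

```python
# Reproducing the percentage changes from the weekly wage figures above
# (1982 CPI-adjusted dollars; the $345 previous high is inferred).
def pct_change(old, new):
    """Percent change from `old` to `new`."""
    return (new - old) / old * 100

print(f"{pct_change(335, 350):.1f}%")  # 1979 to 2018: 4.5%
print(f"{pct_change(331, 350):.1f}%")  # recession trough to 2018: 5.7%
print(f"{pct_change(345, 350):.1f}%")  # previous high to 2018: 1.4%
```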

This doesn’t really match people’s long run expectations. Between 1948 and 1973, hourly compensation increased by 91.3%.

I don’t have an explanation for what happened to once-high wage growth between 1980 and 2008 (see The Captured Economy for what some economists think might explain it). But when it comes to the current stagnation, one factor I don’t hear enough people talking about is bad policy moves by central bankers.

To understand why the central bank affects wage growth, you have to understand something called “sticky wages”.

Wages are considered “sticky” because it is basically impossible to cut them. If companies face a choice between firing people and cutting wages, they’ll almost always choose to fire people. This is because long practice has taught them that the opposite is untenable.

If you cut everyone’s wages, you’ll face an office full of much less motivated people. Those whose skills are still in demand will quickly jump ship to companies that compensate them more in line with market rates. If you just cut the wages of some of your employees (to protect your best performers), you’ll quickly find an environment of toxic resentment sets in.

This is not even to mention that minimum wage laws make it illegal to cut the wages of many workers.

Normally the economy gets around sticky wages with inflation, which steadily erodes the real value of wages (including the minimum wage). During boom times, businesses increase wages above inflation to keep their employees happy (or lose them to other businesses that can pay more and need the labour). During busts, inflation can obviate the need to fire people by decreasing the cost of payroll relative to other inputs.
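A toy model makes the bust-time mechanism concrete. Every number below is hypothetical: a firm with sticky nominal wages sees real revenue fall 8%, and the amount of inflation the central bank delivers determines how many workers it can still afford.

```python
# Toy model of the bust-time mechanism (every number here is made up).
# A firm's real revenue falls 8%. Nominal wages are sticky, so the firm's
# only levers are layoffs and whatever inflation the central bank delivers.
WORKERS = 100
NOMINAL_WAGE = 50_000          # sticky: cannot be cut
REVENUE_PER_WORKER = 52_000    # real revenue per worker before the shock
SHOCK = 0.92                   # demand falls 8%

def workers_affordable(inflation):
    """How many workers the firm can keep at a given inflation rate."""
    real_wage = NOMINAL_WAGE / (1 + inflation)   # inflation erodes the real wage
    real_revenue = WORKERS * REVENUE_PER_WORKER * SHOCK
    return min(WORKERS, int(real_revenue // real_wage))

print(workers_affordable(0.00))  # 95: five layoffs with no inflation
print(workers_affordable(0.02))  # 97: 2% inflation saves two jobs
print(workers_affordable(0.05))  # 100: enough inflation means no layoffs
```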

But what we saw during the last recession was persistently low inflation. Throughout the whole thing, the Federal Reserve kept saying, in effect, “wow, really hard to up inflation; we just can’t manage to do it”.

Look at how inflation hovers just above zero for the whole great recession and associated recovery. It would have been better had it been hovering around 2%.

It’s obviously false that the Fed couldn’t trigger inflation if it wanted to. As a thought experiment, imagine that it had printed enough money to give everyone in the country $1,000,000 and then mailed it out. That would obviously cause inflation. So it is (theoretically) just a matter of scaling that back to the point where we’d see only inflation, not hyper-inflation. Why then did the Fed fail to do something that should be so easy?
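The scaling intuition can be put in terms of the textbook quantity-theory identity MV = PY (a simplification I’m adding here, not something the Fed literally uses): if velocity and real output are roughly fixed in the short run, the price level moves in proportion to the money supply.

```python
# Quantity-theory sketch (M * V = P * Y), holding velocity and real output
# fixed. All numbers are invented purely for illustration.
V = 1.5        # velocity of money
Y = 20_000     # real output
M0 = 15_000    # starting money supply

def price_level(money_supply):
    """Price level implied by MV = PY when V and Y are fixed."""
    return money_supply * V / Y

modest = price_level(M0 * 1.02) / price_level(M0) - 1   # +2% money
extreme = price_level(M0 * 2.00) / price_level(M0) - 1  # money supply doubled
print(f"{modest:.0%} inflation vs {extreme:.0%} inflation")
```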

According to Scott Sumner, you can’t just look at the traditional instrument the central bank has for managing inflation (the interest rate) to determine if its policies are inflationary or not. If something happens to the monetary supply (e.g. say all banks get spooked and up their reserves dramatically [2]), this changes how effective those tools will be.

After the recession, the Fed held interest rates low and printed money. But it didn’t actually print enough money, given the tightened bank reserves, to spur inflation. What looked like easy money (inflationary behaviour) was actually tight money (deflationary behaviour), because another event was constricting the money supply. If the Fed wanted inflation, it would have had to do much more than is required in normal times. The Federal Reserve never realized this, so it was always confused about why inflation failed to materialize.

This set off the perfect storm that led to the long recovery after the recession. Inflation didn’t drive down wages, so it didn’t make economic sense to hire people (or even keep as many people on staff), so aggregate demand was low, so business was bad, so it didn’t make sense to hire people (or keep them on staff)…

If real wages had properly fallen, then fewer people would have been laid off, business wouldn’t have gotten as bad, and the economy could have started to recover much more quickly (with inflation then cooling down and wage growth occurring). Scott Sumner goes so far as to say that the money shock caused by increased cash reserves may have been the cause of the great recession, not the banks failing or the housing bubble.

What does this history have to do with poor wage growth?

Well it turns out that companies have responded to the tight labour market with something other than higher wages: bonuses.

Bonuses are one-time payments that people only expect when times are good. There’s no problem cutting them in recessions.

Switching to bonuses was a calculated move for businesses, because they have lost all faith that the Federal Reserve will do what is necessary (or will know how to do what is necessary) to create the inflation needed to prevent deep recessions. When you know that wages are sticky and you know that inflation won’t save you from them, you have no choice but to pre-emptively limit wages, even when there isn’t a recession. Even when a recession feels fairly far away.

More inflation may feel like the exact opposite of what’s needed to increase wages. But we’re talking about targeted inflation here. If we could trust humans to do the rational thing and bargain for less pay now in exchange for more pay in the future whenever times are tight, then we wouldn’t have this problem and wages probably would have recovered better. But humans are humans, not automatons, so we need to make the best with what we have.

One of the purposes of institutions is to build a framework within which we can make good decisions. From this point of view, the Federal Reserve (and other central banks; the Bank of Japan is arguably far worse) have failed. Institutions failing when confronted with new circumstances isn’t as pithy as “it’s all the fault of those greedy capitalists” or “people need to grow backbones and negotiate for higher wages”, but I think it’s ultimately a more correct explanation for our current period of slow wage growth. This suggests that we’ll only see wage growth recover when the Fed commits to better monetary policy [3], or enough time passes that everyone forgets the great recession.

In either case, I’m not holding my breath.

Footnotes

[1] I’m ignoring the drop in Q2 2014, where wages fell to $330/week, because this was caused by the end of extended unemployment insurance in America. The end of that program made finding work somewhat more important for a variety of people, which led to an uptick in the supply of labour and a corresponding decrease in the market clearing wage. ^

[2] Under a fractional reserve banking system, banks can lend out most of their deposits, with only a fraction kept in reserve to cover any withdrawals customers may want to make. This effectively increases the money supply, because you can have dollars (or yen, or pesos) that are both left in a bank account and invested in the economy. When banks hold onto more of their reserves because of uncertainty, they are essentially shrinking the total money supply. ^
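The mechanism in this footnote can be sketched numerically: each round of lending and re-depositing adds to the money supply, and the total converges to the initial deposit divided by the reserve ratio.

```python
# Sketch of the fractional-reserve mechanism from this footnote: each round,
# banks lend out everything except the reserve fraction, and the loans come
# back as fresh deposits. Total deposits converge to initial / reserve_ratio.
def total_money(initial_deposit, reserve_ratio, rounds=1_000):
    total, deposit = 0.0, float(initial_deposit)
    for _ in range(rounds):
        total += deposit                 # each deposit counts as money
        deposit *= 1 - reserve_ratio     # the lent-out part returns as a new deposit
    return total

print(total_money(100, 0.10))  # ~1000: 10% reserves multiply money tenfold
print(total_money(100, 0.25))  # ~400: holding more reserves shrinks the supply
```

The second line is the footnote’s point in miniature: when spooked banks effectively raise their reserve ratio, the same monetary base supports much less money.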

[3] Scott Sumner suggests that we should target nominal GDP instead of inflation. When economic growth slows, we’d automatically get higher inflation, as the central bank pumps out money to meet the growth target. When the market begins to give way to roaring growth and speculative bubbles, the high rate of real growth would cause the central bank to step back, tapping the brakes before the economy overheats. I wonder if limiting inflation on the upswing would also have the advantage of increasing real wages as the economy booms? ^
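Sumner’s rule can be sketched in a couple of lines (with a hypothetical 5% nominal growth target): if nominal growth is pinned at the target, inflation is simply whatever is left of the target after real growth, so it rises automatically in slumps and falls in booms.

```python
# Sketch of an NGDP-growth target with a hypothetical 5% target: inflation
# is (approximately) the target minus real growth.
NGDP_TARGET = 0.05   # 5% nominal GDP growth target (hypothetical)

def implied_inflation(real_growth):
    """Inflation the central bank would deliver to hit the nominal target."""
    return NGDP_TARGET - real_growth

print(f"{implied_inflation(0.03):.0%}")   # normal 3% growth: 2% inflation
print(f"{implied_inflation(-0.01):.0%}")  # recession at -1%: 6% inflation
print(f"{implied_inflation(0.05):.0%}")   # boom at 5%: 0% inflation
```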