Economics

Ending Bailouts and Recessions: Why the Left should care about monetary economics

When I write about economics on this blog, it is quite often from the perspective of monetary economics. I’ve certainly made no secret about how important monetary economics is to my thinking, but I also have never clearly laid out the arguments that convinced me of monetarism, let alone explained its central theories. This isn’t by design. I’ve found it frustrating that many of my explanations of monetarism are relegated to disjointed footnotes. There’s almost an introduction to monetarism already on this blog, if you’re willing to piece together thirty footnotes on ten different posts.

It is obviously the case that no one wants to do this. Therefore, I’d like to try something else: a succinct explanation of monetary economics, written as clearly as possible and without any simplifying omissions or obfuscations, but free of (unexplained) jargon.

It is my hope that having recently struggled to shove this material into my own head, I’m well positioned to explain it. I especially hope to explain it to people broadly similar to me: people who are vaguely left-leaning and interested in economics as it pertains to public policy, especially people who believe that public policy should have as its principled aim ensuring a comfortable and dignified standard of living for as many as possible (especially those who have traditionally been underserved or abandoned by the government).

To begin, I should define monetarism. Monetarism is the branch of (macro-)economic thought that holds that the supply of money is a key determinant of recessions, depressions, and growth (in whole, the “business cycle”, the pattern of boom and bust that characterizes all market economies that use money).

Why does money matter?

In general, during both periods of growth and recessions, the supply of money increases. However, there have been several periods of time in America where the supply of money has decreased. Between the years of 1867 and 1963, there were eight such periods. They are: 1873-1879, 1892-1894, 1907-1908, 1920-1921, 1929-1933, 1937-1938, 1948-1949, and 1959-1960.

When I first read those dates, I got chills. Those are the dates of every single serious contraction in the covered years.

Men queueing for free soup during the Great Depression
The Great Depression appears twice! Image courtesy Wikimedia Commons.

Furthermore, while minor recessions aren’t characterized by a decrease in the supply of money, they are characterized by a decrease in the rate of growth of the money supply. That is to say, the money supply is still increasing, but by less than it normally does.

Let’s pause for a second and talk about the growth of the money supply. Why does it normally grow?

Under the international gold standard, which existed in modern times under one form or another until President Nixon de facto ended it in 1971, money either existed as precious metal coins (specie), or paper banknotes backed by specie. If you had a dollar in your wallet, you could convert it to a set amount of gold.

As long as gold mining was economically viable (it was in the period covering 1867-1963, which we’re talking about), there was, in general, steady growth in the money supply. Each dollar’s worth of gold pulled out of the ground made it possible to expand the monetary supply by a similar amount, although I should note that not all gold that was mined was used this way (some was used, for example, to make jewelry).

Since the end of the gold standard, governments have made a commitment to keeping the money supply steadily increasing. We commonly refer to this as “printing money”, but that’s a bit of an anachronism. Central banks create money by buying assets (like government debt) using money that did not previously exist. This process is digital [1].

(We call currencies that aren’t backed by precious metals or other commodities “fiat” currencies, because their value exists, at least in part, because of government fiat.)

In both fiat and commodity currency regimes, there is a clear correlation between changes in the growth rate of the money supply and the growth rate of the economy. A decrease in money supply growth leads to a recession. An outright decrease in money supply (i.e. negative growth) leads to a depression. Even within the categories (depression and recession), there’s a correlation. The worse the decline in growth rate, the worse the downturn.

Whenever someone provides an interesting correlation, it is important to ask about causation. It does not necessarily need to be the case that a decrease in money supply is what is causing recessions. It could instead be that recessions cause the decrease in the rate of money growth, or that money supply is a lagging indicator of recessions (as unemployment is), rather than a leading one [2].

There are four reasons to suspect that money is in fact the causal factor in business cycles.

First, there is the simple fact that history suggests a causal relationship. We do not see any history of central banks (which, remember, help control the money supply) reacting to economic recession with plans to cut the supply of money. On the other hand, we have seen recessions start when central banks deliberately decreased the growth of the money supply, as Federal Reserve Chairman Paul Volcker did in 1980.

Second, it is possible to do correlational analyses to determine if it is more probable that something is a leading or lagging indicator. Anna Schwartz and Milton Friedman did just such an analysis on data from US recessions and depressions between 1867 and 1963 and found correlation only with money as a leading indicator.

Third, money is much better positioned to explain recessions and depressions than the alternative (Keynesian) theory which holds that recessions occur due to a fall in investment. The correlation between the amount of investment and the amount of economic growth in America (again, between 1867 and 1963) disappears when you control for changes in the money supply. The correlation between money and growth remains, even when controlling for investment.

Fourth, we do not need to be a priori skeptical of money as a key determinant of the business cycle. Money is clearly linked to the economy; it literally permeates it. The business cycle of growth followed by recession is observed only in economies that use money [3]. While it would make sense to be inherently skeptical of a theory that holds that recessions occur when not enough sewing needles are produced, we need to be much less reflexively skeptical of money. Claiming money causes the business cycle isn’t like claiming Nicolas Cage movies cause accidental drowning.

The correlation in this graph is obviously false because there’s no plausible mechanism connecting the two! This graph would be much more plausible if “Nicolas Cage films” was replaced with “New pool installations”. While our hypothetical graph of fatalities vs. installations wouldn’t be conclusive, it would be highly suggestive, in a way this graph just isn’t. Graph concept courtesy of Tyler Vigen, who is kind enough to make all of his spurious correlation graphs free of copyright.

These arguments are necessarily summaries; this blog post isn’t the best place to put all of the graphs and regression analyses that Schwartz and Friedman did when first formulating their theory of monetary economics. I’ve read through the analysis several times and I believe it to be sound. If you wish to pore over regressions yourself, I recommend the paper Money and Business Cycles (1963).

If you can accept that the supply of money plays a key role in the business cycle, you’ll probably find yourself in possession of several questions, not the least of which will be “how?”. That’s a good question! But before I can explain “how”, I first need to define money, explain how banking works, and delve into the role and abilities of the central bank. It will be worth it, I promise.

What is money?

At first blush, this is a silly question. Money is one of those things we know when we see. It’s the cash in our wallets and the accounts at our banks. Except, it’s not quite that.

Money isn’t a binary category. Things can have varying amounts of “moneyness”, which is to say, can be varyingly good at accomplishing the three functions of money. These three functions are: a store of value (something that can be exchanged for goods in the future), a unit of account (something that you can use to keep track of how many goods you could buy), and a medium of exchange (something that you can give to someone in exchange for goods).

While bank deposits and cash are obviously money, there are also a variety of financial products that we tend to consider money even though they have less moneyness than cash. For example, robo-investment accounts (of the sort that my generation uses) often give the illusion of containing cash by being denominated in dollars and allowing withdrawals [4]. What makes them have less moneyness than cash is only apparent when you look under the hood and realize they contain a mixture of stocks and loans.

In a monetary context, when we say “money”, we aren’t referring to investment accounts or any other instrument that pretends to be cash [5]. Instead, we’re referring to the “money supply”, which is made up of instruments with very high moneyness and is determined by three factors:

  1. The monetary base. This is the money that the central bank issues. We see it as cash, as well as the reserves that regular banks choose to hold.
  2. The amount of reserves banks keep against deposits. Later this will show up as the deposit-reserve ratio, which is calculated by dividing total deposits by the reserves kept on hand by banks.
  3. How much of its currency the public chooses to deposit at banks. This will surface later as the deposit-currency ratio. This is calculated by dividing the value of all deposit accounts at banks by the total amount of currency in circulation.
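To make this concrete, here’s a short sketch (in Python; the function and variable names are my own invention, not standard notation) of the textbook identity that combines these three factors into a total money supply:

```python
def money_supply(monetary_base, deposit_reserve_ratio, deposit_currency_ratio):
    """Combine the three determinants of the money supply.

    monetary_base: currency held by the public plus bank reserves (H)
    deposit_reserve_ratio: deposits / reserves (D/R)
    deposit_currency_ratio: deposits / currency (D/C)

    Since M = currency + deposits and H = currency + reserves, a little
    algebra gives M = H * (D/R) * (1 + D/C) / (D/R + D/C).
    """
    dr, dc = deposit_reserve_ratio, deposit_currency_ratio
    multiplier = dr * (1 + dc) / (dr + dc)
    return monetary_base * multiplier

# A monetary base of $30: $20 of currency plus $10 of bank reserves,
# supporting $100 of deposits (D/R = 10, D/C = 5).
# Total money = $20 currency + $100 deposits = $120.
print(money_supply(30, 10, 5))  # → 120.0
```

Notice that the multiplier (here 4) is bigger than one: most of the money supply is deposits that exist on top of the monetary base, which is why changes in either ratio can move the total by much more than any change in the base itself.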

What are reserves?

When you give your money to a bank, it doesn’t hold all of it in a vault somewhere. Vaults are expensive, as are guards, tellers, and account software. If banks held onto all of your cash for you, you’d have to pay them quite a lot of money for the service. Many of us would decide it’s not worth the bother and keep our cash under the proverbial mattress.

Banks realized this a long time ago. They responded like any good business – by finding a way to cut costs for the consumer.

Banks were able to cut costs by realizing that it is very rare for everyone to want all of their money back at once. If banks didn’t need to keep all of the deposited cash (or, in the olden days, gold and silver specie) on hand, they could lend some of it out and use the interest it earned to subsidize the cost of running the bank.

This led to the birth of the fractional reserve system, so named because bank reserves are a fraction of the money deposited in banks [6].

Once you have a fractional reserve system, a funny thing happens with the money supply: it is no longer made up solely by money created by the central bank. When commercial banks lend out money that people have deposited, they essentially create money. This is how the money supply ends up depending on the deposit-reserve ratio; this ratio describes how much money banks are creating.

When banks decide to lend out more of their reserves, the deposit-reserve ratio increases and the money supply increases. When banks instead decide to lend out less and sit on their cash, the deposit-reserve ratio decreases and the money supply decreases.

But it isn’t just the banks that get a vote in the money supply under a fractional reserve system. Each of us with a bank account also gets a vote. If we trust banks or if we’re enticed by a high interest rate, we hold less cash and put more money in our bank accounts (which causes the deposit-currency ratio – and therefore the money supply – to increase). If we’re instead worried about the stability of banks or if bank accounts aren’t paying very appealing interest rates, we’ll tend to hold onto our cash (decreasing the deposit-currency ratio and the total supply of money).

Holding the deposit-reserve ratio constant, the money supply increases when the deposit-currency ratio increases and decreases when the deposit-currency ratio decreases. This is because every dollar in the bank becomes, via the magic of fractional reserve banking, more than a single dollar in the money supply. Your deposit remains available to you, but most of it is also lent out to someone else.
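If this multiplication sounds like sleight of hand, it can help to follow one deposit through the system. Here’s a toy simulation (my own illustration; it assumes every loaned dollar is spent and fully re-deposited, with nobody holding extra currency along the way):

```python
def total_deposits(initial_deposit, reserve_fraction, rounds=1000):
    """Track deposits created as one cash deposit is repeatedly
    lent out and re-deposited across the banking system."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit
        # The bank keeps the required fraction as reserves...
        loan = deposit * (1 - reserve_fraction)
        # ...and the rest is loaned out, spent, and re-deposited.
        deposit = loan
    return total

# With a 10% reserve fraction, $100 of new cash ends up supporting
# roughly $1000 of deposits (the geometric series 100 / 0.1).
print(round(total_deposits(100, 0.10)))  # → 1000
```

This is also why the process runs in reverse so violently: if banks raise their reserve fraction, the same series shrinks, and deposits (money) simply disappear.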

While we cannot in practice hold any ratio constant, there do exist real constraints on the deposit-reserve ratio. In the US, there are laws that require banks above a certain size to keep liquid reserves equal to at least 10% of their deposits. Many other countries lack reserve requirements per se, but do require banks to limit how leveraged they become, which acts as a de facto limit on their deposit-reserve ratio [7].

It isn’t just the government that provides restraints. Banks may have internal policies that require them to have lower (safer) deposit-reserve ratios than the government demands.

Governments and bank risk management departments set limits on the deposit-reserve ratio in an attempt to limit bank failures, which become more likely the higher the deposit-reserve ratio gets. Banks don’t really sit on all of their reserves, or even stuff them in vaults. Instead, they normally use them to buy assets that they and the government agree are safe. Often this takes the form of government bonds, but sometimes other assets are considered suitable. Many of the mortgage backed securities that exploded during the financial crisis were considered suitably safe, which was a major failure of the ratings agencies.

If assets banks have bought to act as their reserves lose value, they can find their deposit-reserve ratio higher than they want it to be, which often results in a sudden decline in loan activity (and therefore a decline in the growth rate of the money supply) as they try to return their financials to normal [8]. Bank failures can occur if deposit-reserve ratios get so far from normal that banks cannot afford to meet normal withdrawal requests.
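Some back-of-the-envelope arithmetic shows why a loss on reserve assets forces lending to contract. This is a toy calculation of my own; real banks can also raise capital or sell other assets:

```python
def deposit_contraction(deposits, reserves, loss, target_ratio):
    """Deposits that must unwind after reserve assets lose value,
    if the bank wants to restore its target deposit-reserve ratio."""
    remaining_reserves = reserves - loss
    sustainable_deposits = remaining_reserves * target_ratio
    return max(0.0, deposits - sustainable_deposits)

# A bank holds $1000 of deposits against $100 of reserves (ratio 10).
# A $20 loss on its reserve assets means only $800 of deposits are
# sustainable at that ratio, so $200 of lending has to unwind.
print(deposit_contraction(1000, 100, 20, 10))  # → 200.0
```

A 20% hit to reserves forces a 20% contraction in deposits, so small losses at the reserve level translate into much larger dollar declines in the money supply.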

If people and banks have so much control over the money supply, what do central banks do?

What central banks do depends on their mandate, which is to say, what the government has told them to do. The US Federal Reserve Bank has a dual mandate: to maintain a stable price level (here defined as inflation of approximately 2%) and to ensure full employment (defined as an unemployment rate of around 4.5% [9]). The Fed is actually a bit of an aberration here. Many central banks (like Canada’s) have a single mandate: “to keep inflation low, predictable, and stable”.

The Federal Reserve building in Washington
All central banks also have an unofficial mandate: have really cool looking headquarters. Image courtesy of Wikimedia Commons.

Currently, central banks achieve their mandate by manipulating interest rates. They do this with a “target rate” and “open market operations”. The target rate is the thing you hear about on TV and in the news. It’s where the central bank would like interest rates to be (here, interest rates really means “the rate at which banks lend each other money”; consumers can generally expect to make less interest on their savings and pay more when they take out loans [10]).

Note that I’ve said the target rate is where the central bank would “like” interest rates to be. It can’t just call up every bank and declare the new interest rate by fiat. Instead, it engages in those “open market operations” that I mentioned. There are two types of open market operations.

When the interest rate is above target, the central bank buys assets from banks using newly created money (to increase the supply of money and encourage interest rates to become lower). When the interest rate is below target, the central bank will begin selling assets to banks (to give banks something else to do with their money and thereby make them demand more interest from each other when loaning).

Open market operations are normally fairly successful at keeping the interest rate reasonably close to the target rate.

Unfortunately, the target rate is only moderately effective at achieving monetary policy goals.

Remember, the correlation we identified in the first section is for the total supply of money, not for the interest rate. There’s some correlation between the two (lower interest rates can mean a faster monetary growth rate), but it isn’t exact.

When you hear people on TV say that “low interest rates mean easy money” (“easy money” means variously “high growth in the money supply” or “growth in the money supply likely to cause above-target inflation”) or “high interest rates mean tight money” (a shrinking money supply; below target inflation), you are hearing people who don’t entirely understand what they’re talking about.

The key piece of information reporters often lack is how much demand banks have for money. If banks don’t really want much more money (perhaps because the economy is tanking and there’s nothing to do with money that will justify loan repayments) then a low interest rate can still result in the money supply barely growing. It may be that the central bank target rate is quite low by historical standards (say 1%) but still not low enough to expand the money supply via loans to banks.

Put another way, while a 1% interest rate is always easier than a 2% interest rate, there’s often no way to tell a priori whether it represents easy money, which is to say, growth in the money stock. A 1% target rate can be contractionary (shrink the money stock) if banks won’t take out loans when charged it.

Conversely, a 10% interest rate could conceivably represent easy money if banks are still taking out lots of loans at that rate. Take a case where there’s some asset currently returning 20% every year. Under those circumstances, 10% interest payments are a steal and the money supply would continue to increase. It’s certainly tighter money than a 2% interest rate, but it’s not always tight money.

If you want to see if the target interest rate is inflationary or deflationary, you should look at the market’s expectations for inflation. If the market is predicting higher than target inflation, money is easy. If it’s predicting below target inflation, money is tight.
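One common way to read the market’s expectations is to compare the yield on ordinary government bonds with the yield on inflation-protected ones (TIPS, in the US); the gap between them is the “breakeven” inflation rate. Here’s a rough sketch that ignores risk premia and other complications:

```python
def breakeven_inflation(nominal_yield, real_yield):
    """Approximate expected inflation as the gap between an ordinary
    bond's yield and an inflation-protected bond's yield (in %)."""
    return nominal_yield - real_yield

def money_stance(expected_inflation, target=2.0):
    """Label the stance of money relative to an inflation target."""
    if expected_inflation > target:
        return "easy"
    if expected_inflation < target:
        return "tight"
    return "on target"

# Ordinary 10-year bonds yielding 2.5% while inflation-protected bonds
# yield 1.5% implies the market expects about 1% inflation: with a 2%
# target, that's tight money, whatever the headline interest rate says.
print(money_stance(breakeven_inflation(2.5, 1.5)))  # → tight
```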

Central banks often collect statistics so that they can judge the effectiveness of their policy actions. If inflation is too low, they’ll lower their target rate. Too high, and they’ll raise it. Over time, if the economy is stable, central banks will correct any short run problems introduced by interest rate targeting and eventually zero in on their inflation target. Unfortunately, this leaves the door open to painful short-term failures.

How do central banks fail in the short run?

First, I want to make it clear that short-term failures are bad. While long-term price stability is definitely a good thing, short-term fluctuations in the money supply can lead to recessions (remember our solid correlation between shrinking money supply and recessions). Even relatively minor short-term failures can have consequences for hundreds of thousands or millions of people whenever recessions lead to job losses.

Central banks most commonly fail in the short run because of some sort of unexpected shock. The shocks that lead to long recessions most commonly originate in the financial sector; shocks elsewhere tend to be more survivable. The 2001 dot-com crash, for example, didn’t technically lead to a recession in the United States, despite the huge stock market losses [11].

This graph, from Wikimedia Commons, shows the scale of the losses in the NASDAQ Composite during the dot-com crash.

 

Shocks to the financial sector are unusually likely to cause recessions because of the key role that the financial sector plays in determining the monetary supply (via the deposit-reserve ratio we discussed above), as well as the key role that confidence in the financial sector plays (via the deposit-currency ratio).

When financial institutions run into trouble, they have to scramble for liquidity – for cash that they can have on hand in case people wish to withdraw their money [12] – which means they make fewer loans. Suddenly, the money multiplier that banks supply shrinks and the amount of money in the economy decreases.

Things can get even worse when the public loses faith in the banking system. If you suspect that a bank might fail, you will want to get your money out while you still can. Unfortunately, if everyone comes to believe this, then the bank will fail [13]. By design, it doesn’t have enough cash on hand to pay everyone back [14]. When this happens, it is called a “run” on the bank, and bank runs are thankfully becoming more and more rare. Many developed countries have ended them entirely with a program of deposit insurance. Those are the stickers you see on the door of your bank, promising that your deposits will be returned to you even if the bank fails [15].

This is one of the few images on my blog that isn't under some sort of Creative Commons license. I'm using it here under fair use, for the purpose of comment on the institution of deposit insurance. While we're here and talking about this, I think the prominent display requirement, while now not very useful, probably was once very important. When deposit insurance was new, you did really want people to see that their banks had insurance and feel secure. It's part of how deposit insurance makes itself less necessary. The very fact it exists prevents most of the bank runs it would pay out for.
Here’s what the stickers look like in Canada. According to the CDIC website (which is where I got this image), they must be prominently displayed.

It’s good that we’ve stopped bank runs, because they’re incredibly deflationary (they are very good at shrinking the money supply). This is due to the deposit-currency ratio being a key determinant of the total money supply. When people stop using banks, the deposit-currency ratio falls and the money supply decreases.

Since bank failures can occur quite suddenly and can spread throughout the financial system quickly, a financial crisis can cause a deflation that is too rapid for the central bank to react to. This is especially true because modern central banks have a general tendency to fear inflation much more than many monetarists believe they should [16]. This is really unfortunate! A slow response to a decrease in the growth of the money supply (whether caused by a financial crisis or something else) can easily turn into a recession or depression, with all the attendant misery.

Okay, but can you explain how this happens?

Many individuals and companies like to keep a certain amount of money on hand, if at all possible. When they have less money than this, they economize until they feel comfortable with the amount of money they have. When they have more money, they tend to invest it or spend it.

When the money supply increases, whether via the central bank buying bonds, the government reducing reserve requirements, or people deciding to hold more of their money at banks, there are suddenly larger supplies of money at banks than they would like to hold on to.

Banks then spend this money (or invest it, which is essentially giving it to someone else to spend). The people banks give the money to immediately face the same problem; they have more money than they plan on holding. What follows is a game of hot potato, as everyone in the economy tries to keep their account balances where they want them (by spending money).

If there is free capacity in the economy (e.g. factories are idle, people are unemployed, etc.), then this free capacity eventually absorbs the money (that is to say: people who had less money on hand than they desired are quite happy to grab and hold onto the extra money). If there is very little free capacity in the economy, however (i.e. unemployment is low, production high), then this money really cannot be spent to produce anything extra. Instead, we have more money chasing the same amount of goods and services. The end result of that is prices increasing – what we call inflation – or, just as correctly, money becoming worth less.

Once prices rise, people realize they need to hold onto slightly more money and a new equilibrium is reached.
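The bookkeeping behind this story (which monetarists formalize as the equation of exchange, MV = PY) says that money times the rate at which it changes hands must equal prices times output. If money grows while velocity and output stay put, prices have to absorb the difference. A sketch:

```python
def new_price_level(price_level, money_growth, output_growth, velocity_growth=0.0):
    """Solve the equation of exchange M * V = P * Y for the new price
    level, given growth rates (as fractions) for M, V, and Y."""
    return (price_level * (1 + money_growth) * (1 + velocity_growth)
            / (1 + output_growth))

# The money supply grows 5% while the economy runs at full capacity
# (no output growth): the price level rises by the full 5%.
print(round(new_price_level(100.0, 0.05, 0.0), 2))  # → 105.0

# If there's free capacity and output grows 5% alongside money,
# prices stay flat instead.
print(round(new_price_level(100.0, 0.05, 0.05), 2))  # → 100.0
```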

After all, the money that people are holding onto is really acting as a unit of account. It denotes how many days (or weeks, or months) of consumption they want to have easy access to. Inflation changes how much money you need to hold onto to keep the same number of days (weeks, months) of consumption [17].

Now, let’s run this whole thing in reverse. Instead of increasing the supply of money, the money supply is decreasing (or failing to grow at the expected rate). Maybe there were new reserve requirements, or a financial crash, or the central bank misjudged the amount of money it needed to create [18]. Regardless of how it happens, someone who was expecting to get some money isn’t going to get it.

This person (bank, corporation) will find themselves having less cash on hand than they hoped for and will cut back on their spending. This spending was going to someone else who was hoping for it. And suddenly the whole economy is trying to collectively spend less money, which it can’t do right away.

Instead, money becomes relatively more valuable as everyone scrambles for it. This looks like prices going down.

The price of labour (wages) might, in theory, be expected to go down, but in practice it doesn’t. It’s very emotionally taxing to try and convince many employees to accept pay cuts (in addition to being bad for morale), so firms tend to prefer pay freezes, cutting back on contract labour, switching some workers to part-time, and firings to pay cuts [19].

Decreased growth in the money supply affects more than just workers. Factories close or sit idle. Economic capacity diminishes. Ultimately, the whole economy can spend less, if some of the economy is gone.

All of these taken together are the hallmarks of recession. We see job losses, idle capacity, and closures. And we can directly point at failures of central bank policy as the culprit.

Can changes in the growth rate of money affect anything else?

There are three interesting relationships between inflation and employment.

First, it seems that higher than expected inflation leads to increased employment. Friedman and Schwartz speculated that this occurs because corporations are better positioned to see inflation than workers. When they see evidence of inflation, they can quickly hire workers at previously normal salaries. These salaries represent something of a discount when there’s unexpected inflation, so it’s quite a steal for the companies.

Unfortunately, this effect doesn’t persist. As soon as everyone understands that inflation has increased, they bake this into their expectations of salaries and raises. Labour stops being artificially cheap, and companies may end up letting go of some of the newly hired workers.

Second, it seems that increasing money supply is correlated with increasing real wages, that is, wages that are already adjusted for inflation. While it makes sense that inflation will lead to an increase in nominal wages (that is, inflation leads to higher salaries, even if those salaries cannot buy anything extra), it’s a bit odder that it leads to higher real wages. I haven’t yet seen an explanation for why this is true, but it’s an interesting tidbit and one I hope to understand better in the future [20].

Finally, inflation can play an important role in avoiding job losses. Not all economic downturns are caused by central banks. Sometimes, the shock is external (like an earthquake, commodity crash, or a trade embargo). In these cases, certain sectors of the economy may be facing losses and may respond with firing (as we saw above, wage cuts are rarely considered a tenable option). However, inflation can act as an implicit wage cut and stop job losses long enough for the economy to adjust.

If salaries are kept constant while inflation continues apace (or even increases), they become relatively less expensive, all without the emotional toll that wage cuts take. This can protect jobs and engineer a “soft landing”, where a shock doesn’t lead to any large-scale job losses.
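The arithmetic of this implicit cut is straightforward. Here’s an illustration with invented numbers:

```python
def real_wage_path(nominal_wage, inflation, years):
    """Real (inflation-adjusted) wage of a worker whose nominal pay is
    frozen while prices rise at `inflation` per year (a fraction)."""
    price_level = 1.0
    path = []
    for _ in range(years):
        price_level *= 1 + inflation
        path.append(round(nominal_wage / price_level, 2))
    return path

# A frozen $50,000 salary under 3% inflation loses roughly 6% of its
# purchasing power over two years, with no pay-cut conversation needed.
print(real_wage_path(50_000, 0.03, 2))
```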

Obviously, this has to be temporary, so as not to erode the purchasing power of workers too much, but most shocks are temporary, so this is not a difficult constraint.

Okay, what does this say about policy?

There are three main policy takeaways from this post.

First, interest rates are a bad policy indicator. It’s hard for people to break their association between easy money and low interest rates, which means monetary policy is likely to end up too tight. The best analogy I’ve heard for interest rates is a steering wheel that sometimes points a bus left when turned left and sometimes points the bus left when turned right. If you wouldn’t get in a bus driven like that, you shouldn’t be thrilled about being in an economy that’s being driven in the exact same way.

Second, a stable monetary policy is very useful. Note that stable monetary policy implies neither stable interest rates, nor stable inflation. Rather, a stable monetary policy means that everyone can have confidence that the central bank will act in predictable and productive ways. Stable monetary policy smooths out the peaks and valleys of the business cycle. It stops highs from becoming too speculative and keeps lows from leading to terrible grinding unemployment. It also lets unions and workers bargain for long-term wage increases and allows companies to grant them without being scared they’ll become unsustainable due to below-target inflation.

Third, expectations are a powerful tool. If banks believe that the central bank will print lots of money (and buy lots of assets) during a crisis, they won’t have to stop making loans, or increase their reserves. Sometimes, the mere expectation of a forceful government intervention prevents any need for the intervention (like with deposit insurance; it rarely pays out because its existence has drastically reduced the need for it). Had the Federal Reserve reacted more aggressively to the financial crisis, it may have been possible to avoid the massive bailout to financial companies.

I know that “the money supply” will never be a progressive priority. But I think it’s a thing that progressives should care about. Billionaires may not like bad monetary policy, but they aren’t the ones who feel the brunt of its failure. Those are the workers who are laid off, or the pensioners who lose their savings.

I hope I’ve made the case that in order to care about them, we need to care about how money works.

Further Reading and Sources

I drew heavily on Money in Historical Perspective, by Anna J. Schwartz when writing this blog post. The papers Money and Business Cycles (1963, with Milton Friedman), Why Money Matters (1969), The Importance of Stable Money: Theory and Evidence (1983, with Michael D. Bordo), and Real and Pseudo-Financial Crises (1986) were particularly informative.

Scott Sumner’s blog The Money Illusion is an excellent resource for current monetarist thought, while J. P. Koning’s blog Moneyness provides many excellent historical anecdotes about money.

Footnotes

Like all of my posts about economics, this one contains way too many footnotes. These footnotes are mainly clarifying anecdotes, definitions, and comments. I’ve relegated them here because they aren’t necessary for understanding this post, but I think they still can be useful.

[1] Separately, the central bank creates currency for day-to-day use based on the public’s demand for it. The more you go to the ATM, the more bills the central bank creates for you to withdraw. Banks return currency to the central bank every so often (either to buy assets the central bank holds, or to replace it with its digital equivalents). If fewer people want cash and ATMs are overprovisioned, banks will deposit more cash with the central bank than they, as a whole, withdraw.

Therefore, while the central bank controls the growth of the money supply, the public collectively determines the growth in the cash supply. While in general the cash supply continues to grow, this may change as more and more commerce becomes digital. Sweden has already reached peak cash and is now seeing its total cash supply decline (without a corresponding decrease in money supply). ^

[2] That is to say, that money decreases at or near the peak of a business cycle because of some delayed effect from the previous business cycle, rather than as an independent variable that will affect the current business cycle. ^

[3] Furthermore, it seems that depressions can be transmitted among countries with a common currency source (e.g. the gold standard, the current international dollar based payment regime), but are less likely transmitted outside of their home regime. China, for example, did not see a contraction during the first part of the Great Depression (it used silver as its monetary base, rather than gold) and only saw a contraction once the US began buying up silver, effectively shrinking the Chinese monetary supply. ^

[4] Although crucially, they don’t allow instant withdrawals, because they require some time to sell assets. ^

[5] We aren’t losing anything by making this distinction. The growth of products like credit cards has not affected the monetary transmission mechanism; see Has the Growth of Money Substitutes Hindered Monetary Policy? by Anna J. Schwartz and Philip Cagan, 1975. ^

[6] Financial terms referring to banks are often oddly inverted. Customer deposits with banks are termed liabilities (as the bank is liable to return them), while loans the bank has made are assets (as someone else will hopefully pay the bank back for them). If you want to see which of your friends have been reading about economics, say “I think a lot of the loans that bank made have become liabilities”. The ones who visibly twitch or look confused are the ones studying economics. ^

[7] In addition to regulation, government policy can affect the deposit-reserve ratio. In the aftermath of the 2007-2008 financial crisis, the Federal Reserve began, for the first time, to pay interest on reserves (both required reserves and excess reserves). This move led to a huge increase in excess reserves (to more than 16x required reserves by 2011; this happened because banks became very risk averse during the crisis and getting interest on their excess reserves became a risk-free way to make money) and a precipitous drop in the deposit-reserve ratio, which, as we discussed above, means a precipitous drop in the supply of money (which tends to lead to recessions and depressions). Scott Sumner calls this one of the greatest ever failures of monetary policy. ^
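To make the mechanism in this footnote concrete, here’s a minimal sketch using the textbook money-multiplier identity. The numbers are illustrative, not the Fed’s actual figures:

```python
def money_supply(monetary_base: float, reserve_ratio: float) -> float:
    """Textbook money multiplier: each dollar of base money supports
    1 / reserve_ratio dollars of deposits when banks re-lend
    everything beyond their reserves."""
    return monetary_base / reserve_ratio

base = 100.0  # billions of base money, illustrative

# Banks holding 10% of deposits as reserves (a deposit-reserve ratio of 10):
normal = money_supply(base, 0.10)   # 1000.0

# Risk-averse banks holding 25% (required plus large excess reserves,
# i.e. a deposit-reserve ratio of only 4):
crisis = money_supply(base, 0.25)   # 400.0

print(normal, crisis)
```

The same monetary base supports far less money once excess reserves balloon, which is exactly why paying interest on excess reserves during a crisis acted as a contraction.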

[8] In addition to cutting back on loans, this often results in banks selling assets, to try and increase the amount of cash they have on hand. If multiple banks run into trouble at once and they sell similar assets at the same time, the value of the assets can drop precipitously, forcing other banks to sell and raising the possibility of multiple bank failures. This is called contagion, a word that came up a lot in the aftermath of the 2007-2008 financial crisis. ^

[9] “Full employment” is a term economists use to mean “the unemployment rate during neutral macroeconomic conditions”, which is simply the unemployment rate outside of a recession or a speculative bubble. It’s my opinion that full employment is heavily dependent on the political and cultural features of a country. Canada and America, for example, have rather different full employment rates (Canada’s allows more unemployment). I’d argue this is because Canada has more of a social safety net, which would imply that some people working in the US at “full employment” really would prefer not to work, but feel they have no other choice. This seems to fit well with empirical data. For example, when the extended unemployment benefits program ended in 2015, we simultaneously saw a drop in the unemployment rate and a decrease in wages. This is consistent with unemployed people suddenly scrambling for jobs at rather worse terms than they’d previously hoped for. ^

[10] Narrow exceptions apply and normally represent some sort of promotion or implicit sale. For example, short-term car loans on last year’s models will often be discounted below the target rate. It is generally a good idea to take a short-term loan at a below-target interest rate rather than pay a lump sum. This is not financial advice. ^

[11] Technically, for an event to qualify as a recession, there must be two successive quarters of contraction in national GDP. This never occurred during (or after) the Dot-com crash. Interestingly, the initial contraction was immediately preceded by the Federal Reserve signalling its intent to tighten monetary policy so as to rein in speculation, which it did by raising the interest rate target three times in quick succession. When markets crashed, it quickly reversed course, which may have played a role in averting a longer recession. ^

[12] This is another way of saying either “they try to bring a deposit-reserve ratio that has become too high back to normal” or “they try to shrink their deposit-reserve ratio”. In either case, the money supply is going to shrink. ^

[13] Banks, as Matt Levine likes to say, are “a magical place that transforms risky illiquid long-term loans into safe immediately accessible deposits.” He goes on to point out that “like most magic, this requires a certain suspension of disbelief”. This is pretty socially useful; we want people to trust their bank accounts, but we also want loans for things like houses and factories and college to exist. Most of the time the magic works and everything is fine. But if people stop believing in the magic, it turns out that the guy behind the curtain is a bunch of loans that you can’t call due right away. If you try to, the bank fails. ^

[14] Remember, this is generally a good thing as it makes bank services much more affordable. If banks held onto all their reserves, banking services would be very expensive and many more disadvantaged people would be unbanked. ^

[15] Before insurance, only the first people to get to the bank would get their money back. This meant that you had a strong incentive to pull your money out at the very first sign of trouble. Otherwise stable and well-run banks could be undone by a rumour, as everyone panicked and flocked to the withdrawal counter. Deposit insurance changes the game; now no one has to rush to be first, which means no one needs to withdraw at all. ^

[16] Runaway inflation is bad! But a decrease in the money supply, or a decrease in the growth rate of the money supply is bad as well. A very irresponsible program of monetary growth could trigger double digit inflation. Failure to respond promptly to a decrease in the growth rate of money will cause a recession. Unfortunately, central banks aren’t blamed for recessions (by the government or the general populace) but are blamed for inflation, so they tend to act to minimize their chance of being blamed, instead of acting to maximize social good. ^

[17] Now, in real life (as opposed to this simplified model), people probably don’t immediately spend or invest absolutely every extra dollar they get. They may expect to spend some extra in the near future and want to hold it in cash, or they may want to build up more of a cushion.

This would be an example of an inelastic relationship, where a change in one variable (money supply) leads to a less than proportional change in another (spending/investment).

Still, the more money that is dumped into the economy, the closer we get to the idealized model. If you win $100 in a lottery, you may just leave it in your bank account. But if you win $1,000,000 you’re going to be spending some of it and investing a lot of the rest. ^

[18] Remember, it is possible for the central bank to increase interest rates (create less money) without changing the monetary growth rate. If banks are creating a lot of money and the economy is already at capacity, the central bank can sometimes safely cut back on the amount of money it’s creating while still allowing adequate money to be created by banks. This is why central banks often raise interest rates during booms. It can be necessary to keep inflation from rising. ^

[19] I am not the first to wonder if co-ops might be more “recession-proof” than conventional firms. Since co-ops generally operate via profit-sharing, rather than set wages, they may exhibit less downwards nominal wage rigidity (the economic term for people’s aversion to pay cuts), which means they might weather recessions with wage cuts, rather than outright job losses. I haven’t been able to find any studies on this subject, but I’d be very interested to see if they exist. ^

[20] There is a strain of leftist thought that views Paul Volcker reining in inflation as much worse for workers than any policy of Reagan’s. I’m trying to find a better explanation of this position somewhere and plan to write about it once I do. ^

Model, Philosophy

Against Novelty Culture

So, there’s this thing that happens in certain intellectual communities, like (to give a totally random example) social psychology. This thing is that novel takes are rewarded. New insights are rewarded. Figuring out things that no one has before is rewarded. The high-status people in such a community are the ones who come up with and disseminate many new insights.

On the face of it, this is good! New insights are how we get penicillin and flight and Pad Thai burritos. But there’s one itty bitty little problem with building a culture around it.

Good (and correct!) new ideas are a finite resource.

This isn’t news. Back in 2005, John Ioannidis laid out the case for “most published research findings” being false. It turns out that when you have only a small chance of coming up with a correct idea, even statistical tests designed to screen out false positives can break down.

A quick example. There are approximately 25,000 genes in the human genome. Imagine you are searching for genes that increase the risk of schizophrenia (chosen for this example because it is a complex condition believed to be linked to many genes). If there are 100 genes involved in schizophrenia, the odds of any given gene chosen at random being involved are 1 in 250. You, the investigating scientist, decide that you want about an 80% chance of finding some genes that are linked (this is called study power and 80% is a common value). You run a bunch of tests, analyze a bunch of DNA, and think you have a candidate. This gene has been “proven” to be associated with schizophrenia at the p=0.05 significance level.

(A p-value is the probability of observing an event at least as extreme as the observed one, if the null hypothesis is true. This means that if the gene isn’t associated with schizophrenia, there is only a 1 in 20 chance – 5% – that we’d see a result as extreme or more extreme than the one we observed.)

At the start, we had a 1 in 250 chance of finding a gene. Now that we have a gene, we think there’s a 19 in 20 chance that it’s actually partially responsible for schizophrenia (technically, if we looked at multiple candidates, we should do something slightly different here, but many scientists still don’t, making this still a valid example). Which probability do we trust?

There’s actually an equation to figure it out. It’s called Bayes Rule and statisticians and scientists use it to update probabilities in response to new information. It goes like this:
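Written out in symbols, Bayes Rule is:

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```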

(You can sing this to the tune of Hallelujah; take P of A when given B / times P of A a priori / divide the whole thing by B’s expectation / new evidence you may soon find / but you will not be in a bind / for you can add it to your calculation.)

In plain language, it means that the probability of something being true after an observation (P(A|B)) is equal to the probability of it being true absent any observations (P(A), 1 in 250 here), times the probability of the observation happening if it is true (P(B|A), 0.8 here), divided by the baseline probability of the observation (P(B), 1 in 20 here).

With these numbers from our example, we can see that the probability of a gene actually being associated with schizophrenia when it has a confidence level of 0.05 is… 6.4%.

I took this long detour to illustrate a very important point: one of the strongest determinants of how likely something is to actually be true is the base chance it has of being true. If we expected 1000 genes to be associated with schizophrenia, then the base chance would be 1 in 25, and the probability our gene actually plays a role would jump up to 64%.
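The arithmetic in the last two paragraphs can be checked directly. This sketch just plugs the post’s numbers into Bayes Rule (using the post’s simplification that P(B) is the 1-in-20 significance threshold):

```python
def posterior(prior: float, power: float, p_base: float) -> float:
    """P(gene truly involved | significant result), via Bayes Rule.

    prior:  base rate of a candidate gene being involved, P(A)
    power:  chance a truly involved gene tests significant, P(B|A)
    p_base: baseline probability of a significant result, P(B)
    """
    return prior * power / p_base

# 100 true genes among 25,000 candidates:
print(posterior(1 / 250, 0.80, 0.05))  # ≈ 0.064, i.e. 6.4%

# If 1,000 genes were involved instead:
print(posterior(1 / 25, 0.80, 0.05))   # ≈ 0.64, i.e. 64%
```

Note how a 40-fold improvement in the posterior comes entirely from the better base rate; nothing about the statistical test itself changed.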

To have ten times the chance of getting a study right, you can be 10 times more selective (which probably requires much more than ten times the effort)… or you can investigate something ten times as likely to actually occur. Base rates can be more powerful than statistics, more powerful than arguments, and more powerful than common sense.

This suggests that any community that bases status around producing novel insights will mostly become a community based around producing novel-seeming (but false!) insights once it exhausts all of the available true (and easily attainable) insights it could discover. There isn’t a harsh dividing line, just a gradual trend towards plausible nonsense as the underlying vein of truth is mined out, but the studies and blog posts continue.

Except the reality is probably even worse, because any competition for status in such a community (tenure, page views) will become an iterative process that rewards those best able to come up with plausible sounding wrappers on unfortunately false information.

When this happens, we have people publishing studies with terrible analyses but highly shareable titles (anyone remember the himmicanes paper?), with the people at the top calling anyone who questions their shoddy research “methodological terrorists”.

I know I have at least one friend who is rolling their eyes right now, because I always make fun of the reproducibility crisis in psychology.

But I’m just using that because it’s a convenient example. What I’m really worried about is the Effective Altruism community.

(Effective Altruism is a movement that attempts to maximize the good that charitable donations can do by encouraging donation to the charities that have the highest positive impact per dollar spent. One list of highly effective charities can be found on GiveWell; GiveWell has demonstrated a noted trend away from novelty, such that I believe this post does not apply to them.)

We are a group of people with countless forums and blogs, as well as several organizations devoted to analyzing the evidence around charity effectiveness. We have conventional organizations, like GiveWell, coexisting with less conventional alternatives, like Wild-Animal Suffering Research.

All of these organizations need to justify their existence somehow. All of these blogs need to get shares and upvotes from someone.

If you believe (like I do) that the number of good charity recommendations might be quite small, then it follows that a large intellectual ecosystem will quickly exhaust these possibilities and begin finding plausible sounding alternatives.

I find it hard to believe that this isn’t already happening. We have people claiming that giving your friends cash or buying pizza for community events is the most effective charity. We have discussions of whether there is suffering in the fundamental particles of physics.

Effective Altruism is as much a philosophy movement as an empirical one. It isn’t always the case that we’ll be using P-values and statistics in our assessment. Sometimes, arguments are purely moral (like arguments about how much weight we should give to insect suffering). But both types of arguments can eventually drift into plausible sounding nonsense if we exhaust all of the real content.

There is no reason to expect that we should be able to tell when this happens. Certainly, experimental psychology wasn’t able to until several years after much-hyped studies more-or-less stopped replicating, despite a population that many people would have previously described as full of serious-minded empiricists. Many psychology researchers still won’t admit that much of the past work needs to be revisited and potentially binned.

This is a problem of incentives, but I don’t know how to make the incentives any better. As a blogger (albeit one who largely summarizes and connects ideas first broached by others), I can tell you that many of the people who blog do it because they can’t not write. There are always going to be people competing to get their ideas heard, and the people who most consistently provide satisfying insights will most often end up with more views.

Therefore, I suggest caution. We do not know how many true insights we should expect, so we cannot tell how likely to be true anything that feels insightful actually is. Against this, the best defense is highly developed scepticism. Always remember to ask for the implications of new insights and to determine what information would falsify them. Always assume new insights have a low chance of being true. Notice when there seems to be a pressure to produce novel insights long after the low-hanging fruit is gone, and be wary of anyone in that ecosystem.

We might not be able to change novelty culture, but we can do our best to guard against it.

[Special thanks to Cody Wild for coming up with most of the lyrics to Bayesian Hallelujah.]

Advice, Model

Context Windows

When you’re noticing that you’re talking past someone, what does it look like? Do you feel like they’re ignoring all the implications of the topic at hand (“yes, I know the invasion of Iraq is causing a lot of pain, but I think the important question is, ‘did they have WMDs?'”)? Or do you feel like they’re avoiding talking about the object-level point in favour of other considerations (“factory farmed animals might suffer, but before we can consider whether that’s justified or not, shouldn’t we decide whether we have any obligation to maximize the number of living creatures?”)?

I’m beginning to suspect that many tense disagreements and confused, fruitless conversations are caused by differences in how people conceive of and process the truth. More, I think I have a model that explains why some people can productively disagree with anyone and everyone, while others get frustrated very easily with even their closest friends.

The basics of this model come from a piece that Jacob Falkovich wrote for Quillette. He uses two categories, “contextualizers” and “decouplers”, to analyze an incredibly unproductive debate (about race and IQ) between Vox’s Ezra Klein and Dr. Sam Harris.

Klein is the contextualizer, a worldview that comes naturally to a political journalist. Contextualizers see ideas as embedded in a context. Questions of “who does this affect?”, “how is this rooted in society?”, and “what are the (group) identities of people pushing this idea?” are the bread and butter of contextualizers. One of the first things Klein says in his debate with Harris is:

Here is my view: I think you have a deep empathy for Charles Murray’s side of this conversation, because you see yourself in it [because you also feel attacked by “politically correct” criticism]. I don’t think you have as deep an empathy for the other side of this conversation. For the people being told once again that they are genetically and environmentally and at any rate immutably less intelligent and that our social policy should reflect that. I think part of the absence of that empathy is it doesn’t threaten you. I don’t think you see a threat to you in that, in the way you see a threat to you in what’s happened to Murray. In some cases, I’m not even quite sure you heard what Murray was saying on social policy either in The Bell Curve and a lot of his later work, or on the podcast. I think that led to a blind spot, and this is worth discussing.

Klein is highlighting what he thinks is the context that probably informs Harris’s views. He’s suggesting that Harris believes Charles Murray’s points about race and IQ because they have a common enemy. He’s aware of the human tendency to like ideas that come from people we feel close to (myside bias) – or that put a stick in the eye of people we don’t like.

There are other characteristics of contextualizers. They often think thought experiments are pointless, given that they try and strip away all the complex ways that society affects our morality and our circumstances. When they make mistakes, it is often because they fall victim to the “ought-is” fallacy; they assume that truths with bad outcomes are not truths at all.

Harris, on the other hand, is a decoupler. Decoupling involves separating ideas from context, from personal experience, from consequences, from anything but questions of truth or falsehood and using this skill to consider them in the abstract. Decoupling is necessary for science because it’s impossible to accurately check a theory when you hope it to be true. Harris’s response to Klein’s opening salvo is:

I think your argument is, even where it pretends to be factual, or wherever you think it is factual, it is highly biased by political considerations. These are political considerations that I share. The fact that you think I don’t have empathy for people who suffer just the starkest inequalities of wealth and politics and luck is just, it’s telling and it’s untrue. I think it’s even untrue of Murray. The fact that you’re conflating the social policies he endorses — like the fact that he’s against affirmative action and he’s for universal basic income, I know you don’t happen agree with those policies, you think that would be disastrous — there’s a good-faith argument to be had on both sides of that conversation. That conversation is quite distinct from the science and even that conversation about social policy can be had without any allegation that a person is racist, or that a person lacks empathy for people who are at the bottom of society. That’s one distinction I want to make.

Harris is pointing out that questions of whether his beliefs will have good or bad consequences, or who they’ll hurt, have nothing to do with the question of whether they are true. He might care deeply about the answers to those questions, but he believes that it’s a dangerous mistake to let that guide how you evaluate an idea. Scientists who fail to do that tend to get caught up in the replication crisis.

When decouplers err, it is often because of the is-ought fallacy. They fail to consider how empirical truths can have real world consequences and fail to consider how labels that might be true in the aggregate can hurt individuals.

When you’re arguing with someone who doesn’t contextualize as much as you do, it can feel like arguing about useless hypotheticals. I once had someone start a point about police shootings and gun violence with “well, ignoring all of society…”. This prompted immediate groans.

When arguing with someone who doesn’t decouple as much as you do, it can feel useless and mushy. A co-worker once said to me “we shouldn’t even try and know the truth there – because it might lead people to act badly”. I bit my tongue, but internally I wondered how, absent the truth, we can ground disagreements in anything other than naked power.

Throughout the debate between Harris and Klein, both of them get frustrated at the other for failing to think like they do – which is why it provided such a clear example for Falkovich. If you read the transcripts, you’ll see a clear pattern: Klein ignores questions of truth or falsehood and Harris ignores questions of right and wrong. Neither one is willing to give an inch here, so there’s no real engagement between them.

This doesn’t have to be the case whenever people who prefer context or prefer to deal with the direct substance of an issue interact.

My theory is that everyone has a window that stretches from the minimum amount of context they like in conversations to the minimum amount of substance. Theoretically, this window could stretch from 100% context and no substance to 100% substance and no context.

But practically no one has tastes that broad. Most people accept a narrower range of arguments. Here’s what three broadly compatible friends might look like:

We should expect to see some correlation between the minimum and maximum amount of context people want to get. Windows may vary in size, but in general, feeling put-off by lots of decoupling should correlate with enjoying context.


Here we see people with variously sized strike zones, but with their dislike of context correlated with their appreciation for substance.

Klein and Harris disagreed so unproductively not just because they give first billing to different things, but because their world views are different enough that there is absolutely no overlap between how they think and talk about things.

One plausible graph of how Klein and Harris like to think about problems (quotes come from the transcript of their podcast). From this, it makes sense that they couldn’t have a productive conversation. There’s no overlap in how they model the world.

I’ve found thinking about windows of context and substance, rather than just the dichotomous categories, very useful for analyzing how my friends and I tend to agree and disagree.

Some people I know can hold very controversial views without ever being disagreeable. They are good at picking up on which sorts of arguments will work with their interlocutors and sticking to those. These people are no doubt aided by rather wide context windows. They can productively think and argue with varying amounts of context and substance.

Other people feel incredibly difficult to argue with. These are the people who are very picky about what arguments they’ll entertain. If I sort someone into this internal category, it’s because I’ve found that one day they’ll dismiss what I say as too nitty-gritty, while the next day they criticize me for not being focused enough on the issue at hand.

What I’ve started to realize is that people I find particularly finicky to argue with may just have a fairly narrow strike zone. For them, it’s simultaneously easy for arguments to feel devoid of substance or devoid of context.

I think one way that you can make arguments with friends more productive is to explicitly lay out the window in which you like to be convinced. Sentences like: “I understand what you just said might convince many people, but I find arguments about the effects of beliefs intensely unsatisfying” or “I understand that you’re focused on what studies say, but I think it’s important to talk about the process of knowledge creation and I’m very unlikely to believe something without first analyzing what power hierarchies created it” are the guideposts by which you can show people your context window.

Economics, Falsifiable

You Might Want To Blame Central Banks For Poor Wage Growth

The Economist wonders why wage growth isn’t increasing, even as unemployment falls. A naïve reading of supply and demand suggests that it should, so this has become a relatively common talking point in the news, with people of all persuasions scratching their heads. The Economist does it better than most. They at least talk about slowing productivity growth and rising oil prices, instead of blaming everything on workers (for failing to negotiate) or employers (for not suddenly raising wages).

But after reading monetary policy blogs, the current lack of wage growth feels much less confusing to me. Based on this, I’d like to offer one explanation for why wages haven’t been growing. While I may not be an economist, I’ll be doing my best to pass along verbatim the views of serious economic thinkers.

Image courtesy of the St. Louis Federal Reserve Bank. Units are 1982-1984 CPI-adjusted dollars. Isn’t it rad how the US government doesn’t copyright anything it produces?

When people talk about stagnant wage growth, this is what they mean. Average weekly wages have increased from $335 a week in 1979 to $350/week in 2018 (all values are 1982 CPI-adjusted US dollars). This is a 4.5% increase, representing $780/year more (1982 dollars) in wages over the whole period. This is not a big change.

More recent wage growth also isn’t impressive. At the depth of the recession, weekly wages were $331 [1]. Since then, they’ve increased by $19/week, or 5.7%. However, wages have only increased by $5/week (1.4%) since the previous high in 2009.
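These percentages are easy to verify; here’s a quick sketch plugging in the figures quoted above (the $345/week for the previous high is implied by the $5/week figure, not quoted directly):

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# 1979 -> 2018, in 1982 CPI-adjusted dollars per week:
print(round(pct_increase(335, 350), 1))  # 4.5 (% over ~39 years)
print((350 - 335) * 52)                  # 780 extra dollars per year

# Recession trough -> 2018:
print(round(pct_increase(331, 350), 1))  # 5.7

# Previous high (350 - 5 = 345) -> 2018:
print(round(pct_increase(345, 350), 1))  # 1.4
```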

This doesn’t really match people’s long run expectations. Between 1948 and 1973, hourly compensation increased by 91.3%.

I don’t have an explanation for what happened to once-high wage growth between 1980 and 2008 (see The Captured Economy for what some economists think might explain it). But when it comes to the current stagnation, one factor I don’t hear enough people talking about is bad policy moves by central bankers.

To understand why the central bank affects wage growth, you have to understand something called “sticky wages”.

Wages are considered “sticky” because it is basically impossible to cut them. If companies face a choice between firing people and cutting wages, they’ll almost always choose to fire people. This is because long practice has taught them that the opposite is untenable.

If you cut everyone’s wages, you’ll face an office full of much less motivated people. Those whose skills are still in demand will quickly jump ship to companies that compensate them more in line with market rates. If you just cut the wages of some of your employees (to protect your best performers), you’ll quickly find an environment of toxic resentment sets in.

This is not even to mention that minimum wage laws make it illegal to cut the wages of many workers.

Normally the economy gets around sticky wages with inflation. This steadily erodes wages (including the minimum wage). During boom times, businesses increase wages above inflation to keep their employees happy (or lose them to other businesses that can pay more and need the labour). During busts, inflation can obviate the need to fire people by decreasing the cost of payroll relative to other inputs.
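To see how inflation quietly accomplishes what an explicit pay cut can’t, here’s a minimal sketch. The 2% rate is the Fed’s usual target; the salary and time span are illustrative:

```python
def real_wage(nominal: float, inflation: float, years: int) -> float:
    """Purchasing power of a frozen nominal wage after some years of inflation."""
    return nominal / (1 + inflation) ** years

# A $50,000 salary left unchanged during a three-year downturn:
print(round(real_wage(50_000, 0.02, 3)))  # 47116 -> a ~6% real pay cut, no firings

# With zero inflation, real payroll costs don't fall at all:
print(round(real_wage(50_000, 0.00, 3)))  # 50000
```

The employer never has to announce a cut; inflation does the adjustment for them, which is why persistently-below-target inflation removes this escape valve.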

But what we saw during the last recession was persistently low inflation rates. Throughout the whole thing, the Federal Reserve kept saying, in effect, “wow, really hard to up inflation; we just can’t manage to do it”.

Look at how inflation hovers just above zero for the whole great recession and associated recovery. It would have been better had it been hovering around 2%.

It’s obviously false that the Fed couldn’t trigger inflation if it wanted to. As a thought experiment, imagine that they had printed enough money to give everyone in the country $1,000,000 and then mailed it out. That would obviously cause inflation. So it is (theoretically) just a matter of scaling that back to the point where we’d only see inflation, not hyper-inflation. Why then did the Fed fail to do something that should be so easy?

According to Scott Sumner, you can’t just look at the traditional instrument the central bank has for managing inflation (the interest rate) to determine if its policies are inflationary or not. If something happens to the monetary supply (e.g. say all banks get spooked and up their reserves dramatically [2]), this changes how effective those tools will be.

After the recession, the Fed held the interest rates low and printed money. But it actually didn’t print enough money given the tightened bank reserves to spur inflation. What looked like easy money (inflationary behaviour) was actually tight money (deflationary behaviour), because there was another event constricting the money supply. If the Fed wanted inflation, it would have had to do much more than is required in normal times. The Federal Reserve never realized this, so it was always confused by why inflation failed to materialize.

This set off the perfect storm that led to the long recovery after the recession. Inflation didn’t drive down real wages, so it didn’t make economic sense to hire people (or even keep as many people on staff), so aggregate demand was low, so business was bad, so it didn’t make sense to hire people (or keep them on staff)…

If real wages had properly fallen, then fewer people would have been laid off, business wouldn’t have gotten as bad, and the economy could have started to recover much more quickly (with inflation then cooling down and wage growth occurring). Scott Sumner goes so far as to say that the money shock caused by increased cash reserves may have been the cause of the great recession, not the banks failing or the housing bubble.

What does this history have to do with poor wage growth?

Well it turns out that companies have responded to the tight labour market with something other than higher wages: bonuses.

Bonuses are one-time payments that people only expect when times are good. There’s no problem cutting them in recessions.

Switching to bonuses was a calculated move for businesses, because they have lost all faith that the Federal Reserve will do what is necessary (or will know how to do what is necessary) to create the inflation needed to prevent deep recessions. When you know that wages are sticky and you know that inflation won’t save you from them, you have no choice but to pre-emptively limit wages, even when there isn’t a recession. Even when a recession feels fairly far away.

More inflation may feel like the exact opposite of what’s needed to increase wages. But we’re talking about targeted inflation here. If we could trust humans to do the rational thing and bargain for less pay now in exchange for more pay in the future whenever times are tight, then we wouldn’t have this problem and wages probably would have recovered better. But humans are humans, not automatons, so we need to make the best of what we have.

One of the purposes of institutions is to build a framework within which we can make good decisions. From this point of view, the Federal Reserve (and other central banks; the Bank of Japan is arguably far worse) have failed. Institutions failing when confronted with new circumstances isn’t as pithy as “it’s all the fault of those greedy capitalists” or “people need to grow backbones and negotiate for higher wages”, but I think it’s ultimately a more correct explanation for our current period of slow wage growth. This suggests that we’ll only see wage growth recover when the Fed commits to better monetary policy [3], or enough time passes that everyone forgets the great recession.

In either case, I’m not holding my breath.

Footnotes

[1] I’m ignoring the drop in Q2 2014, where wages fell to $330/week, because this was caused by the end of extended unemployment insurance in America. The end of that program made finding work somewhat more important for a variety of people, which led to an uptick in the supply of labour and a corresponding decrease in the market clearing wage. ^

[2] Under a fractional reserve banking system, banks can lend out most of their deposits, with only a fraction kept in reserve to cover any withdrawals customers may want to make. This effectively increases the money supply, because you can have dollars (or yen, or pesos) that are both left in a bank account and invested in the economy. When banks hold onto more of their reserves because of uncertainty, they are essentially shrinking the total money supply. ^
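The mechanism in footnote [2] can be sketched with a toy money-multiplier loop. The reserve ratios here are hypothetical, chosen only to illustrate how banks hoarding reserves shrinks the effective money supply:

```python
# Sketch of footnote [2]: fractional reserves expand the money supply,
# and spooked banks hoarding reserves shrinks it. Ratios are hypothetical.

def money_created(deposit: float, reserve_ratio: float, rounds: int = 1000) -> float:
    """Total deposits after banks repeatedly lend out the non-reserved share,
    with each loan redeposited somewhere else in the banking system."""
    total, d = 0.0, deposit
    for _ in range(rounds):
        total += d
        d *= (1 - reserve_ratio)  # lent out, then redeposited elsewhere
    return total

# With 10% reserves, $1,000 of base money supports ~$10,000 of deposits:
print(round(money_created(1_000, 0.10)))  # 10000

# If spooked banks hold 25% instead, the same base supports only ~$4,000:
print(round(money_created(1_000, 0.25)))  # 4000
```

The drop from ~$10,000 to ~$4,000 with no change in base money is exactly the kind of monetary contraction the post argues the Fed failed to offset.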

[3] Scott Sumner suggests that we should target nominal GDP instead of inflation. When economic growth slows, we’d automatically get higher inflation, as the central bank pumps out money to meet the growth target. When the market begins to give way to roaring growth and speculative bubbles, the high rate of real growth would cause the central bank to step back, tapping the brakes before the economy overheats. I wonder if limiting inflation on the upswing would also have the advantage of increasing real wages as the economy booms? ^
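The automatic stabilizer in footnote [3] is simple arithmetic: under an NGDP target, inflation is roughly the target minus real growth. A minimal sketch, assuming a hypothetical 5% target:

```python
# Sketch of footnote [3]'s NGDP-targeting arithmetic.
# The 5% target is an illustrative assumption; figures are in percent.

NGDP_TARGET = 5.0  # nominal GDP growth target

def implied_inflation(real_growth_pct: float) -> float:
    """Approximate inflation implied by hitting the NGDP target,
    since nominal growth = real growth + inflation."""
    return NGDP_TARGET - real_growth_pct

# Recession: real growth of -2% implies 7% inflation, eroding sticky
# real wages exactly when that helps employment.
print(implied_inflation(-2.0))  # 7.0

# Boom: real growth of 4% implies only 1% inflation, tapping the brakes.
print(implied_inflation(4.0))   # 1.0
```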

Economics, Politics

You’re Doing Taxes Wrong: Consumptive vs. Wealth Inequality

When you worry about rising inequality, what are you thinking about?

I now know of two competing models for inequality, each of which has vastly different implications for political economy.

In the first, called consumptive inequality, inequality is embodied in differential consumption. Under this model, there is a huge gap between Oracle CEO Larry Ellison (net worth: $60 billion), with his private islands, his yacht, etc. and myself, with my cheap rented apartment, ten-year-old bike, and modest savings. In fact, under this model, there’s even a huge gap between Larry Ellison with all of his luxury goods and Berkshire Hathaway CEO Warren Buffett (net worth: $90.6 billion), with his relatively cheap house and restrained tastes.

Pictured: Warren Buffett’s house vs. Larry Ellison’s yacht. The yacht is many, many times larger than the house. Image credits: TEDizen and reivax.

Under the second model, inequality in net worth or salary is all that matters. This is the classic model that gives us the Gini coefficient and “the 1%”. Under this model, Warren Buffett is the very best off, with Larry Ellison close behind. I’m not even in contention.

I’ve been thinking a lot about inequality because of the recent increase in the minimum wage in Ontario. The reasons behind the wage hike – and similar economic justice proposals (like capping CEO pay at some double-digit multiple of worker pay) – seem to show a concern for consumptive inequality.

That is to say, the prevailing narrative around inequality is that it is bad because:

  1. Rich people are able to consume in a way that is frankly bananas and often destructive either to the environment or norms of good governance
  2. Workers cannot afford all basic necessities, or must choose between basic necessities and thinking long term (e.g. by saving for their children’s education or their own retirement)

Despite this focus on consumptive inequality in public rhetoric, our tax system seems to be focused primarily on wealth inequality.

Now, it is true that wealth inequality can often lead to consumptive inequality. Larry Ellison is able to consume to such an obscene degree only because he is so obscenely wealthy. But it is also true that wealth inequality doesn’t necessarily lead to consumptive inequality (there are upper middle-class people who have larger houses than Warren Buffett) and that it might be useful to structure our tax policy and other instruments of political economy such that there was a serious incentive for wealth inequality not to lead to consumptive inequality.

What I mean is: it’s unlikely that we’re going to reach a widely held consensus that wealth is immoral (or at what level it becomes immoral). But I think we already have a widely held consensus that given the existence of wealth, it is better to wield it like Mr. Buffett than like Mr. Ellison.

To a certain extent, we already acknowledge this. In Canada, there are substantial tax advantages to investing up to 18% of your yearly earnings (below a certain point) and giving up to 75% of your income to charity. That said, we continue to bafflingly tax many productive uses of wealth (like investing), while refusing to adequately tax many frivolous or actively destructive uses of wealth (large cars, private jets, private yachts, influencing the political process, etc.).

Many people, myself included, find the idea of large amounts of wealth fundamentally immoral. Still, I’d rather tax the conspicuous and pointless use of wealth than wealth itself, because there are many people motivated to do great things (like curate all of the world’s information and put it at our fingertips) because of desire for wealth.

I’m enough of a post-modernist to worry that any attempt to create a metric of “social value” will further disenfranchise people who have already been subject to systemic discrimination and fail to reflect the tastes of anyone younger than 35 (I just can’t believe that a bunch of politicians would get together and agree that anyone creates social value or deserves compensation for e.g. cosplay, even though I know many people who find it immensely valuable and empowering).

That’s the motivation. Now for the practice. What would a tax plan optimized to punish spurious consumption while maintaining economic growth even look like? Luckily Scott Sumner has provided an outline, the cleverness of which I’d like to explain.

No income tax

When you take money from people as taxes, then give it back to them regardless of how hard they work, you discourage work. This effect turns out to be rather large: the higher income taxes are, the more you discourage people from working. People working is a necessary prerequisite for economic growth, and I view economic growth as largely positive (it is very good at engendering happiness and stability, at guaranteeing those of us currently working the possibility of retiring one day, and at generating revenue for a social safety net), so I think we should try to tax in a way that doesn’t discourage it.

No corporate tax

Another important component of economic growth is investment. We can imagine a hypothetical economy where absolutely everything that is produced is consumed, such that much is made, but nothing ever really changes. The products available this year will be the products available next year, at the same price and made in the same factory, with any worn-down equipment replaced, but no additional equipment purchased.

Obviously, this is a toy example. But if you’ve bought a product this year that didn’t exist last year, or noticed the cost of something you regularly buy fall, you’ve reaped the rewards of investment. We need people to deliberately set aside some of the production they’re entitled to via possession of money so that it can instead be used to improve the process of production.

Corporate taxes discourage this by making investment less attractive. In fact, they actively encourage consumptive inequality, by making consumption artificially cheaper than investment. This is the exact opposite of what we should be aiming for!

Interestingly, there have been a variety of reports of positive results from the recent cut in corporate tax rates in the US, from repatriation of money for US investment to bonuses for workers.

Now, I know that corporate taxes feel very satisfying. Corporations make a lot of money (although probably less than you think!) and it feels right and proper to divert some of that for public usage. But there are better ways of diverting that money (some of which I’ll talk about below) that manage to fill the public coffers without incentivizing behaviour even worse than profit seeking (like bloated executive pay; taxing corporate income makes paying the CEO a lot artificially cheap). Corporate taxes also hurt normal people in a variety of ways – like making saving for retirement harder.

No inheritance tax

This is another example of artificially making consumption more attractive. Look at it this way: you (a hypothetical you who is very wealthy) can buy a yacht now, use it for a while, loan it to your kids, then have them inherit it when it’s depreciated significantly, reducing the tax they have to pay on it. Or you can invest so that you can give your children a lot of money. Most rich people aren’t going to want to leave nothing behind for their children. Therefore, we shouldn’t penalize people who are going to use the money for non-frivolous things in the interim.

A VAT (with rebates or exemptions)

A VAT, or value added tax, is a tax on consumption; you pay it whenever you buy something from a store or online. A value-added tax differs from a simple sales tax in that it allows tax paid to suppliers to be deducted from taxes owed. This is necessary so that complex, multi-step products (like computers) don’t artificially cost more than simpler products (like wood).
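The input-tax deduction can be made concrete with a toy production chain. All prices and the 10% rate here are invented for illustration:

```python
# Sketch of why a VAT doesn't penalize multi-step production chains.
# Prices and the 10% rate are hypothetical.

VAT_RATE = 0.10

def vat_owed(sale_price: float, input_cost: float) -> float:
    """Tax remitted at one step: VAT on sales minus VAT already paid on inputs."""
    return VAT_RATE * sale_price - VAT_RATE * input_cost

# A three-step chain: a miner sells ore for 100, a chip maker sells chips
# for 300, a computer maker sells the finished machine for 1,000.
total = vat_owed(100, 0) + vat_owed(300, 100) + vat_owed(1000, 300)

# The chain as a whole pays VAT on the final price once, not at every step:
assert abs(total - VAT_RATE * 1000) < 1e-9
# A naive sales tax at every step would instead tax 100 + 300 + 1000 = 1,400
# of sales, making the computer artificially dearer than single-step goods.
```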

Scott Sumner suggests that a VAT can be easily made free for low-income folks by automatically refunding the VAT rate times the national poverty income to everyone each year. This is nice and simple and has low administrative overhead (another key concern for a taxation system; every dollar spent paying people to oversee the process of collecting taxes is a dollar that can’t be spent on social programs).
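The rebate arithmetic is worth seeing in numbers. A minimal sketch, with a hypothetical 10% rate and poverty line, showing that the effective rate is zero at the poverty line and rises smoothly with spending:

```python
# Sketch of the rebate idea: everyone gets back VAT_RATE * POVERTY_LINE
# each year. Rate and poverty line are hypothetical numbers.

VAT_RATE = 0.10
POVERTY_LINE = 15_000
REBATE = VAT_RATE * POVERTY_LINE  # 1,500 refunded to everyone annually

def effective_rate(consumption: float) -> float:
    """Net tax paid as a share of consumption, after the universal rebate."""
    net_tax = VAT_RATE * consumption - REBATE
    return net_tax / consumption

print(effective_rate(15_000))   # 0.0  -- spending at the poverty line is untaxed
print(effective_rate(150_000))  # 0.09 -- big spenders approach the full 10%
```

Below the poverty line the net tax is negative, i.e. the rebate acts as a small cash transfer, which is part of why the scheme is progressive despite the flat rate.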

An alternative, currently favoured in Canada, is to avoid taxing essentials (like unprepared food). This means that people who spend a large portion of their money on food are taxed at a lower overall rate than people who spend more money on non-essential products.

A steeply progressive payroll tax

If income inequality is something you want to avoid, I’d argue that a progressive payroll tax is more effective than almost any other measure. It makes companies directly pay the government if they wish to have high-wage workers and makes it more politically palatable to raise taxes on upper brackets, even to the point where the tax is a multiple of the salary paid.

While this may seem identical to taxing income, the psychological effect is rather different, which is important when dealing with real people, not perfectly rational economic automata. Payroll taxes also make tax avoidance via incorporation impossible (as all corporate income, including dividends after subtracting investment, would be subject to the payroll tax) and make it easy to really punish companies for out-of-control executive compensation. Under a payroll tax system, you can quite easily impose a 1000% tax on executive compensation over $1,000,000. It’s pretty hard to justify a CEO salary of $10,000,000 when it’s costing investors a full hundred million dollars!
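The CEO figure checks out arithmetically. A minimal sketch using the post’s own numbers (the 1000% rate on compensation above $1,000,000 is the post’s hypothetical, not any real tax schedule):

```python
# Sketch of the post's progressive payroll tax example:
# a hypothetical 1000% marginal rate on compensation above $1,000,000.

BRACKET = 1_000_000
MARGINAL_RATE = 10.0  # 1000%, expressed as a multiplier

def cost_to_firm(salary: float) -> float:
    """Salary plus the payroll tax the company must remit on it."""
    tax = MARGINAL_RATE * max(0.0, salary - BRACKET)
    return salary + tax

# A $10M CEO salary costs the firm $10M plus 1000% of the $9M excess:
print(cost_to_firm(10_000_000))  # 100000000.0

# Ordinary salaries below the bracket cost the firm nothing extra:
print(cost_to_firm(500_000))     # 500000.0
```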

Scott Sumner also suggests wage subsidies as an option to avoid the distortionary effect of a minimum wage [1], a concept I’ve previously explored in depth and found to be probably workable.

A progressive property tax

Property taxes tend to be flat, which makes them less effective at discouraging conspicuous consumption (e.g. 4,500 square foot suburban McMansions). If property taxes sharply ramped up with house value or size, families that chose more appropriately sized homes (or could only afford appropriately sized homes) would be taxed at lower rates than their profligate neighbours. Given that developments with smaller houses are either higher density (which makes urban services cheaper and cars less necessary) or have more greenspace (which is good from an environmental perspective, especially in flood-prone areas), it’s especially useful to convince people to live in smaller houses.
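A bracketed schedule, like income tax brackets applied to assessed value, is one way to implement the ramp-up. The brackets and rates below are invented purely for illustration:

```python
# Sketch of a progressive property tax vs the usual flat rate.
# All thresholds and rates are hypothetical.

FLAT_RATE = 0.01  # a typical flat scheme: 1% of assessed value

# (threshold, marginal rate): value above each threshold taxed at that rate
BRACKETS = [(0, 0.005), (500_000, 0.015), (1_000_000, 0.04)]

def progressive_tax(value: float) -> float:
    """Tax owed under the bracketed schedule above."""
    tax = 0.0
    uppers = BRACKETS[1:] + [(float("inf"), 0.0)]
    for (lo, rate), (hi, _) in zip(BRACKETS, uppers):
        if value > lo:
            tax += rate * (min(value, hi) - lo)
    return tax

# A modest home pays less than under the flat rate...
print(round(progressive_tax(400_000)))    # 2000, vs 4000 flat
# ...while a McMansion pays far more.
print(round(progressive_tax(2_000_000)))  # 50000, vs 20000 flat
```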

This would be best combined with laxer zoning. For example, minimum house sizes have long been a tool used in “nice” suburbs, to deliberately price out anyone who doesn’t have a high income. Zoning houses for single family use was also seized upon as a way to keep Asian immigrants out of white neighbourhoods (as a combination of culture and finances made them more likely to have more than just a single nuclear family in a dwelling). Lax zoning would allow for flexibility in housing size and punitive taxes on large houses would drive demand for more environmentally sustainable houses and higher density living.

A carbon tax

Carbon is what economists call a negative externality. It’s a thing we produce that negatively affects other people without a mechanism for us to naturally pay the cost of this inflicted disutility. When we tax a negative externality, we stop over-consumption [2] of things that produce that externality. In the specific case of taxing carbon, we can use this tax to very quickly bring emissions in line with the emissions necessary to avoid catastrophic warming.
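The mechanism can be sketched with a toy demand curve: once the buyer faces the external cost, consumption falls to the socially efficient level. Every number here is invented for illustration:

```python
# Sketch of a Pigovian tax correcting over-consumption.
# Prices, the external cost, and the demand curve are all hypothetical.

PRICE = 2.0           # private cost per litre of fuel
EXTERNAL_COST = 0.5   # damage per litre borne by everyone else

def litres_demanded(cost_per_litre: float) -> float:
    """Toy linear demand curve: higher prices mean less fuel burned."""
    return max(0.0, 100 - 20 * cost_per_litre)

untaxed = litres_demanded(PRICE)                # 60.0 litres: over-consumption
taxed = litres_demanded(PRICE + EXTERNAL_COST)  # 50.0 litres: efficient level
print(untaxed, taxed)
```

The 10 litres of difference is consumption whose benefit to the buyer was smaller than its cost to everyone else; the tax prices it out without banning anything.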

I’d like to generalize this to Pigovian taxes beyond carbon. Alcohol (and other intoxicants), sugary drinks, and possibly tobacco should be taxed in line with their tendency to produce costs that (in countries with public risk pooling of health costs) are not borne by the individual over-consuming. I do think it’s important to avoid taking this too far – it’s reasonable to expect people to cover their negative externality, but not reasonable to punitively tax things just because a negative externality might exist or because we think it is wrong or “unhealthy” to do it. Not everything that is considered unhealthy leads to actual diseases, let alone increased healthcare costs.

A luxury goods tax

This comes from a separate post by Scott Sumner, but I think it’s a good enough idea to mention here. It should be possible to come up with a relatively small list of items that are mostly positional – that is to say that the vast majority of their cost is for the sake of being expensive (and therefore showing how wealthy and important the possessor is), not for providing increased quality. To illustrate: there is a significant gap in functionality between a $3,000 beater car and a $30,000 new car, less of a gap between a $30,000 car and a $300,000 car, and even less of a gap between the $300,000 car and a $3,000,000 car; the $300,000 car is largely positional, the $3,000,000 car almost wholly so. To these we could add items that are almost purely for luxury, like 100+ foot yachts.

It’s necessary to keep this list small and focus on truly grotesque expenditures, lest we turn into a society of petty moralizers. There’s certainly a perspective (normally held by people rather older than the participants) in which spending money on cosplay or anime merchandise is frivolous, but if it is, it’s the sort of harmless frivolity equivalent to spending an extra dollar on coffee. I am in general in favour of letting people spend money on things I consider frivolous, because I know many of the things I spend money on (and enjoy) are in turn viewed as frivolous by others [3]. However, I think there comes a point when it’s hard to accuse anyone of petty moralizing and I think that point is probably around enough money to prevent dozens of deaths from malaria (i.e. $100,000+) [4].

Besides, there’s the fact that making positional goods more expensive via taxation just makes them more exclusive. If anything, a strong levy on luxury goods may make them more desirable to some.


As I’ve read more economics, my positions on many economics issues have shifted in a way that many people parse as “more conservative”. I reject this. There are a great many “liberal” positions that sound good on paper, but when you actually do the math, hurt the poor and benefit the rich. Free trade makes things cheaper for all of us and has created new jobs and industries. A lot of regulation allows monopolies and large companies to crush any upstart rivals, or shifts jobs from blue collar workers making things to white collar workers ensuring compliance.

It is true that I care about the economy in a way that I never cared about it before. I care that we have sustainable growth that enriches us all. I care about the stock market making gains, because I’ve realized just how much of the stock market is people’s pensions. I care about start-ups forming to meet brand new needs, even when the previous generation views them as frivolous. I care about human flourishing and I now believe that requires us to have a functioning economic system.

A lot of how we do tax policy is bad. It’s based on making us feel good, not on encouraging good behaviour and avoiding weird economic distortions. It encourages the worst excesses of wealth and it’s too easy to avoid.

What I’ve outlined here is a series of small taxes, small enough to make each not worth the effort to avoid, that together can easily collect enough revenue to ensure a redistributive state. They have the advantage of cutting particularly hard against conspicuous consumption and protecting the planet from unchecked global warming. I sincerely believe that if more people gave them honest consideration, they would advocate for them too and together we could build a fairer, more effective taxation system.

Footnotes:

[1] A minimum wage can make it impossible to have Pareto optimal distributions – distributions where you cannot make anyone better off without making someone else worse off. Here’s a trivial example: imagine a company with two overworked employees, each of whom makes $15/hour. The employees are working more than they particularly want to, because there’s too much work for the two of them to complete. Unfortunately, the company can only afford to pay an additional $7/hour and the minimum wage is $14/hour. If the company could hire someone without much work experience for $7/hour, everyone would be better off.

The existing employees would be less overworked and happier. The new employee would be making money. The company could probably do slightly more business.

Wage subsidies would allow for the Pareto optimal distribution to exist while also paying the third worker a living wage. ^
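The footnote’s arithmetic, sketched out; the firm’s $7/hour budget and the $14/hour minimum wage come from the example above, while the top-up subsidy design is a hypothetical:

```python
# Sketch of footnote [1]: a wage subsidy restores the hire that the
# minimum wage blocks. The subsidy design is a hypothetical top-up scheme.

MINIMUM_WAGE = 14.0
FIRM_BUDGET = 7.0  # the most the firm can afford for a third worker, per hour

# Without a subsidy, the hire is illegal: the firm can't offer $14/hour.
can_hire = FIRM_BUDGET >= MINIMUM_WAGE
print(can_hire)  # False

# A subsidy topping wages up to the minimum makes everyone better off:
subsidy = MINIMUM_WAGE - FIRM_BUDGET  # $7/hour paid by the government
take_home = FIRM_BUDGET + subsidy
print(take_home)  # 14.0 -- the worker earns a living wage while the firm
                  # pays only the $7/hour it can afford
```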

[2] Over-consumption here means: “using more of it than you would if you had to properly compensate people for their disutility”, not the more commonly used definition that merely means “consuming more than is sustainable”.

An illustration of the difference: imagine a world where global warming is mitigated by very expensive carbon capture systems, paid for via flat taxes. In that world, you could be over-consuming gasoline in the economics sense (if you were paying a share of the carbon capture costs commensurate with your use, you’d use less) without consuming an amount of gasoline liable to lead to environmental catastrophe, even if everyone consumed a similar amount. ^

[3] For example, I spent six times as much as the median Canadian on books last year, despite the fact that there’s a perfectly good library less than five minutes from my house. I’m not particularly proud of this, but it made me happy. ^

[4] I am aware of the common rejoinder to this sort of thinking, which is basically summed up as “sure, a sports car doesn’t directly feed anyone, but it does feed the workers who made it”. It is certainly true that heavily taxing luxury items will probably put some people out of work in the industries that make them. But as Scott Sumner points out, it is impossible to meaningfully fix consumptive inequality without hurting jobs that produce things for rich people. If you aren’t hurting these industries, you have not meaningfully changed consumptive inequality!

Note also that if we’re properly redistributing money from taxes that affect rich people, we’re not going to destroy jobs, just shift them to sectors that don’t primarily serve rich people. ^