The Battle of the Tsushima Straits is the most underrated moment of historical importance in the 20th century.
We’ve all heard lots of different explanations for the start of the First World War. The standard ones are as follows: Europe was a mess of alliances, imperial powers treated war like a game, and one unlucky arch-duke got offed by anarchists.
Less commonly mentioned is Russia’s lack of international prestige, a situation that made it desperate for military victories at the same time it made the Central Powers contemptuous of Russia’s strength.
Russia was the first country to mobilize in 1914 (with its “period preparatory to war”) after Austria issued an ultimatum to Serbia, and it was arguably this mobilization that set the stage for a continent-spanning war.
Why was Russia so desperate and the Central Powers so unworried?
Well, over 24 hours on May 27–28th, 1905, Russia went from the 3rd most powerful naval nation in the world to one that could have barely hoped to defeat the Austro-Hungarian Empire at sea (which doesn’t sound bad, until you remember that Austria-Hungary had no blue-water harbours and never really had any overseas colonies). This wrecked Russian prestige.
What destroyed the Russian fleet so thoroughly?
Admiral Tōgō and the Imperial Japanese fleet.
In the Battle of the Tsushima Straits, Admiral Tōgō sank or captured eleven battleships and twenty-seven other ships – practically every Russian naval vessel – at the cost of three torpedo boats (the smallest and cheapest ships used in early 20th century naval combat).
This lopsided victory was the first time a European power was conclusively beaten by an Asian one in an even battle since the Mongol general Subutai razed Hungary and smashed the armies of Poland in the 1200s.
Victory galvanized Japan. Barely fifty years before the battle, Japan had been forced open at gunpoint by Commodore Perry’s Black Ships. Shortly after this, western powers forced Japan, like China before it, to sign unequal treaties. Victory at the Battle of Tsushima showed that this era was clearly over. Japan was now a great power.
This is why I could claim that the Battle of the Tsushima Straits is the most underrated moment of historical importance in the 20th century. Not only did Russia’s defeat sow some of the seeds of the First World War; Japan’s victory also set the stage for Japan’s participation in the Second World War.
Admiral Tōgō’s message to Tokyo on the day of the battle – “In response to the warning that enemy ships have been sighted, the Combined Fleet will immediately commence action and attempt to attack and destroy them. Weather today fine but high waves.” – and especially that last line, became as important to the Japanese Navy as Nelson’s remarks before Trafalgar (“England expects that every man will do his duty”) were to the British.
With such a lopsided victory under their belt, the Imperial Japanese Navy began to believe that they were invincible. They quickly became promoters of militarism and conquest.
As America began to act to check Japanese dominance in the Pacific and prevent Japan from entirely colonizing China, the Japanese Navy decided that America had to be defeated. This led to Japan taking Germany’s side in the Second World War, to Pearl Harbour, and eventually to the American occupation of Japan.
Had the Battle of the Tsushima Straits instead been a bloody stalemate, Japan might have risen less quickly and more cautiously. Russia might not have started the First World War when it did, nor succumbed to a revolution when exhausted by the same war. The Soviet Union might never have risen. Both World Wars might have happened differently, or not at all.
This is not even to mention that British naval observers at the battle used what they learned in the construction of Dreadnought, the battleship that started a new naval arms race.
There’s too much that spilled from all of these events to predict if the world would be better or worse if Tōgō hadn’t won in 1905, but it certainly would have been different.
Today is a good day to reflect on how this single battle, the only decisive time battleships ever met in anger, helped to shape so much of the modern world. If this single moment, unknown to so many, shaped so much of what came later, what other key moments are we ignorant of? What other desperate struggles and last second decisions shaped this baffling world of ours?
History doesn’t just belong to the victors. It belongs to those who are remembered. Today, I’d like to remind you that even if events fall from history and aren’t remembered, they can still shape it.
When I write about economics on this blog, it is quite often from the perspective of monetary economics. I’ve certainly made no secret about how important monetary economics is to my thinking, but I also have never clearly laid out the arguments that convinced me of monetarism, let alone explained its central theories. This isn’t by design. I’ve found it frustrating that many of my explanations of monetarism are relegated to disjointed footnotes. There’s almost an introduction to monetarism already on this blog, if you’re willing to piece together thirty footnotes on ten different posts.
It is obviously the case that no one wants to do this. Therefore, I’d like to try something else: a succinct explanation of monetary economics, written as clearly as possible and without any simplifying omissions or obfuscations, but free of (unexplained) jargon.
It is my hope that having recently struggled to shove this material into my own head, I’m well positioned to explain it. I especially hope to explain it to people broadly similar to me: people who are vaguely left-leaning and interested in economics as it pertains to public policy, especially people who believe that public policy should have as its principled aim ensuring a comfortable and dignified standard of living for as many as possible (especially those who have traditionally been underserved or abandoned by the government).
To begin, I should define monetarism. Monetarism is the branch of (macro-)economic thought that holds that the supply of money is a key determinant of recessions, depressions, and growth (in whole, the “business cycle”, the pattern of boom and bust that characterizes all market economies that use money).
Why does money matter?
In general, during both periods of growth and recessions, the supply of money increases. However, there have been several periods of time in America where the supply of money has decreased. Between the years of 1867 and 1963, there were eight such periods. They are: 1873-1879, 1892-1894, 1907-1908, 1920-1921, 1929-1933, 1937-1938, 1948-1949, and 1959-1960.
When I first read those dates, I got chills. Those are the dates of every single serious contraction in the covered years.
Furthermore, while minor recessions aren’t characterized by a decrease in the supply of money, they are characterized by a decrease in the rate of the growth of the money supply. That is to say, the money supply is still increasing, but by less than it normally does.
Let’s pause for a second and talk about the growth of the money supply. Why does it normally grow?
Under the international gold standard, which existed in modern times under one form or another until President Nixon de facto ended it in 1971, money either existed as precious metal coins (specie), or paper banknotes backed by specie. If you had a dollar in your wallet, you could convert it to a set amount of gold.
As long as gold mining was economically viable (it was in the period covering 1867-1963, which we’re talking about), there was, in general, steady growth in the money supply. Each dollar’s worth of gold pulled out of the ground made it possible to expand the monetary supply by a similar amount, although I should note that not all gold that was mined was used this way (some was used, for example, to make jewelry).
Since the end of the gold standard, governments have made a commitment to keeping the money supply steadily increasing. We commonly refer to this as “printing money”, but that’s a bit of an anachronism. Central banks create money by buying assets (like government debt) using money that did not previously exist. This process is digital.
(We call currencies that aren’t backed by precious metals or other commodities “fiat” currencies, because their value exists, at least in part, because of government fiat.)
In both fiat and commodity currency regimes, there is a clear correlation between changes in the growth rate of the money supply and the growth rate of the economy. A decrease in money supply growth leads to a recession. An outright decrease in money supply (i.e. negative growth) leads to a depression. Even within the categories (depression and recession), there’s a correlation. The worse the decline in growth rate, the worse the downturn.
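To make the recession/depression distinction concrete, here’s a quick sketch in Python. The figures and thresholds are my own illustration, not Friedman and Schwartz’s:

```python
def growth_rates(series):
    """Year-over-year growth rate between consecutive observations."""
    return [(later - earlier) / earlier
            for earlier, later in zip(series, series[1:])]

def classify(rate, typical_growth=0.03):
    """Crude mapping from money growth to the downturn categories above."""
    if rate < 0:
        return "outright decline: depression territory"
    if rate < typical_growth:
        return "slowdown: recession territory"
    return "normal growth"

# Illustrative, made-up money supply index (not real data)
for rate in growth_rates([100.0, 103.0, 104.0, 99.0]):
    print(round(rate, 3), classify(rate))
```

The point of the sketch is only the ordering: the worse the decline in the growth rate, the worse the category of downturn.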
Whenever someone provides an interesting correlation, it is important to ask about causation. It does not necessarily need to be the case that a decrease in money supply is what is causing recessions. It could instead be that recessions cause the decrease in the rate of money growth, or that money supply is a lagging indicator of recessions (as unemployment is), rather than a leading one.
There are four reasons to suspect that money is in fact the causal factor in business cycles.
First, there is the simple fact that history suggests a causal relationship. We do not see any history of central banks (which, remember, help control the money supply) reacting to economic recession with plans to cut the supply of money. On the other hand, we have seen recessions that started when central banks deliberately decreased the growth of the money supply, as Federal Reserve Chairman Paul Volcker did in 1980.
Second, it is possible to do correlational analyses to determine if it is more probable that something is a leading or lagging indicator. Anna Schwartz and Milton Friedman did just such an analysis on data from US recessions and depressions between 1867 and 1963 and found correlation only with money as a leading indicator.
Third, money is much better positioned to explain recessions and depressions than the alternative (Keynesian) theory which holds that recessions occur due to a fall in investment. The correlation between the amount of investment and the amount of economic growth in America (again, between 1867 and 1963) disappears when you control for changes in the money supply. The correlation between money and growth remains, even when controlling for investment.
Fourth, we do not need to be a priori skeptical of money as a key determinant of the business cycle. Money is clearly linked to the economy; it literally permeates it. The business cycle of growth followed by recession is observed only in economies that use money. While it would make sense to be inherently skeptical of a theory that holds that recessions occur when not enough sewing needles are produced, we need to be much less reflexively skeptical of money. Claiming money causes the business cycle isn’t like claiming Nicholas Cage movies cause accidental drowning.
These arguments are necessarily summaries; this blog post isn’t the best place to put all of the graphs and regression analyses that Schwartz and Friedman did when first formulating their theory of monetary economics. I’ve read through the analysis several times and I believe it to be sound. If you wish to pore over regressions yourself, I recommend the paper Money and Business Cycles (1963).
If you can accept that the supply of money plays a key role in the business cycle, you’ll probably find yourself in possession of several questions, not the least of which will be “how?”. That’s a good question! But before I can explain “how”, I first need to define money, explain how banking works, and delve into the role and abilities of the central bank. It will be worth it, I promise.
What is money?
At first blush, this is a silly question. Money is one of those things we know when we see it. It’s the cash in our wallets and the accounts at our banks. Except, it’s not quite that.
Money isn’t a binary category. Things can have varying amounts of “moneyness”, which is to say, can be varyingly good at accomplishing the three functions of money. These three functions are: a store of value (something that can be exchanged for goods in the future), a unit of account (something that you can use to keep track of how many goods you could buy), and a medium of exchange (something that you can give to someone in exchange for goods).
While bank deposits and cash are obviously money, there are also a variety of financial products that we tend to consider money even though they have less moneyness than cash. For example, robo-investment accounts (of the sort that my generation uses) often give the illusion of containing cash by being denominated in dollars and allowing withdrawals. What makes them have less moneyness than cash is only apparent when you look under the hood and realize they contain a mixture of stocks and loans.
In a monetary context, when we say “money”, we aren’t referring to investment accounts or any other instrument that pretends to be cash. Instead, we’re referring to the “money supply”, which is made up of instruments with very high moneyness and is determined by three factors:
The monetary base. This is the money that the central bank issues. We see it as cash, as well as the reserves that regular banks choose to hold.
The amount of reserves banks keep against deposits. Later this will show up as the deposit-reserve ratio, which is calculated by dividing total deposits by the reserves kept on hand by banks.
How much of its currency the public chooses to deposit at banks. This will surface later as the deposit-currency ratio. This is calculated by dividing the value of all deposit accounts at banks by the total amount of currency in circulation.
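These three determinants can be combined into a single formula for the money supply – the money multiplier from Friedman and Schwartz’s framework. A minimal sketch, using the ratios as just defined:

```python
def money_supply(monetary_base, deposit_reserve, deposit_currency):
    """Money supply implied by the three determinants above.

    monetary_base:    H, currency held by the public plus bank reserves
    deposit_reserve:  D/R, total deposits divided by bank reserves
    deposit_currency: D/C, total deposits divided by currency in circulation
    """
    d, c = deposit_reserve, deposit_currency
    # M = H * d(1 + c) / (d + c), the classic money multiplier
    return monetary_base * d * (1 + c) / (d + c)
```

Note that raising either ratio raises the money supply that the same monetary base supports, which is exactly the behaviour the following sections describe.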
What are reserves?
When you give your money to a bank, it doesn’t hold all of it in a vault somewhere. Vaults are expensive, as are guards, tellers, and account software. If banks held onto all of your cash for you, you’d have to pay them quite a lot of money for the service. Many of us would decide it’s not worth the bother and keep our cash under the proverbial mattress.
Banks realized this a long time ago. They responded like any good business – by finding a way to cut costs for the consumer.
Banks were able to cut costs by realizing that it is very rare for everyone to want all of their money back at once. If banks didn’t need to keep all of the deposited cash (or, in the olden days, gold and silver specie) on hand, they could lend some of it out and use the interest it earned to subsidize the cost of running the bank.
This led to the birth of the fractional reserve system, so named because bank reserves are a fraction of the money deposited in banks.
Once you have a fractional reserve system, a funny thing happens with the money supply: it is no longer made up solely of money created by the central bank. When commercial banks lend out money that people have deposited, they essentially create money. This is how the money supply ends up depending on the deposit-reserve ratio; this ratio describes how much money banks are creating.
When banks decide to lend out more of their reserves, the deposit-reserve ratio increases and the money supply increases. When banks instead decide to lend out less and sit on their cash, the deposit-reserve ratio decreases and the money supply decreases.
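Here’s a toy simulation of that money creation, under the simplifying assumption that every lent-out dollar is eventually redeposited somewhere in the banking system (in reality people hold some of it as cash):

```python
def deposit_expansion(initial_cash, reserve_ratio, rounds=100):
    """Total deposits created when banks repeatedly lend out the
    un-reserved fraction of each deposit and the loans are redeposited."""
    total_deposits = 0.0
    deposit = float(initial_cash)
    for _ in range(rounds):
        total_deposits += deposit
        deposit *= (1.0 - reserve_ratio)  # lent out, then redeposited
    return total_deposits
```

With a 10% reserve ratio, $100 of cash supports nearly $1,000 of deposits; with a 20% ratio, only about $500. A higher reserve ratio (i.e. a lower deposit-reserve ratio) means less money creation.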
But it isn’t just the banks that get a vote in the money supply under a fractional reserve system. Each of us with a bank account also gets a vote. If we trust banks or if we’re enticed by a high interest rate, we hold less cash and put more money in our bank accounts (which causes the deposit-currency ratio – and therefore the money supply – to increase). If we’re instead worried about the stability of banks or if bank accounts aren’t paying very appealing interest rates, we’ll tend to hold onto our cash (decreasing the deposit-currency ratio and the total supply of money).
Holding the deposit-reserve ratio constant, the money supply increases when the deposit-currency ratio increases and decreases when the deposit-currency ratio decreases. This is because every dollar in the bank becomes, via the magic of fractional reserve banking, more than a single dollar in the money supply. Your deposit remains available to you, but most of it is also lent out to someone else.
While we cannot in practice hold any ratio constant, there do exist real constraints on the deposit-reserve ratio. In the US, there are laws that require banks above a certain size to keep liquid reserves equal to at least 10% of their deposits. Many other countries lack reserve requirements per se, but do require banks to limit how leveraged they become, which acts as a de facto limit on their deposit-reserve ratio.
It isn’t just the government that provides restraints. Banks may have internal policies that require them to have lower (safer) deposit-reserve ratios than the government demands.
Governments and bank risk management departments set limits on the deposit-reserve ratio in an attempt to limit bank failures, which become more likely the higher the deposit-reserve ratio gets. Banks don’t really sit on all of their reserves, or even stuff them in vaults. Instead, they normally use them to buy assets that they and the government agree are safe. Often this takes the form of government bonds, but sometimes other assets are considered suitable. Many of the mortgage backed securities that exploded during the financial crisis were considered suitably safe, which was a major failure of the ratings agencies.
If assets banks have bought to act as their reserves lose value, they can find their deposit-reserve ratio higher than they want it to be, which often results in a sudden decline in loan activity (and therefore a decline in the growth rate of the money supply) as they try to return their financials to normal. Bank failures can occur if deposit-reserve ratios get so far from normal that banks cannot afford to meet normal withdrawal requests.
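As a hypothetical worked example of that adjustment: suppose a bank’s reserve assets lose value, leaving its deposit-reserve ratio above where it (or its regulator) wants it. The deposits it must shed – mostly by curtailing new lending – follow from simple arithmetic:

```python
def required_deposit_cut(deposits, reserves, target_ratio):
    """Deposits a bank must shed (by curtailing lending) to bring its
    deposit-reserve ratio back down to the target after a loss."""
    if deposits / reserves <= target_ratio:
        return 0.0  # ratio already at or below target; no cut needed
    return deposits - target_ratio * reserves
```

For instance, a bank with $1,000 of deposits whose reserves fall from $100 to $80 (against a target ratio of 10) has to shed $200 of deposits – and every dollar of lending it withdraws shrinks the money supply.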
If people and banks have so much control over the money supply, what do central banks do?
What central banks do depends on their mandate: what the government has told them to do. The US Federal Reserve Bank has a dual mandate: to maintain a stable price level (here defined as inflation of approximately 2%) and to ensure full employment (defined as an unemployment rate of around 4.5%). The Fed is actually a bit of an aberration here. Many central banks (like Canada’s) have a single mandate: “to keep inflation low, predictable, and stable”.
Currently, central banks achieve their mandate by manipulating interest rates. They do this with a “target rate” and “open market operations”. The target rate is the thing you hear about on TV and in the news. It’s where the central bank would like interest rates to be (here, interest rates really means “the rate at which banks lend each other money”; consumers can generally expect to make less interest on their savings and pay more when they take out loans).
Note that I’ve said the target rate is where the central bank would “like” interest rates to be. It can’t just call up every bank and declare the new interest rate by fiat. Instead, it engages in those “open market operations” that I mentioned. There are two types of open market operations.
When the interest rate is above target, the central bank buys assets from banks using newly created money (increasing the supply of money and encouraging interest rates to fall). When the interest rate is below target, the central bank sells assets to banks (giving banks something else to do with their money and thereby making them demand more interest from each other when lending).
Open market operations are normally fairly successful at keeping the interest rate reasonably close to the target rate.
Unfortunately, the target rate is only moderately effective at achieving monetary policy goals.
Remember, the correlation we identified in the first section is for the total supply of money, not for the interest rate. There’s some correlation between the two (lower interest rates can mean a faster monetary growth rate), but it isn’t exact.
When you hear people on TV say that “low interest rates mean easy money” (“easy money” means variously “high growth in the money supply” or “growth in the money supply likely to cause above-target inflation”) or “high interest rates mean tight money” (a shrinking money supply; below target inflation), you are hearing people who don’t entirely understand what they’re talking about.
The key piece of information reporters often lack is how much demand banks have for money. If banks don’t really want much more money (perhaps because the economy is tanking and there’s nothing to do with money that will justify loan repayments) then a low interest rate can still result in the money supply barely growing. It may be that the central bank target rate is quite low by historical standards (say 1%) but still not low enough to expand the money supply via loans to banks.
Put another way, while a 1% interest rate is always easier than a 2% interest rate, there’s often no way to tell a priori whether it represents easy money, which is to say, growth in the money stock. A 1% target rate can be contractionary (shrink the money stock) if banks won’t take out loans when charged it.
Conversely, a 10% interest rate could conceivably represent easy money if banks are still taking out lots of loans at that rate. Take a case where there’s some asset currently returning 20% every year. Under those circumstances, 10% interest payments are a steal and the money supply would continue to increase. It’s certainly tighter money than a 2% interest rate, but it’s not always tight money.
If you want to see if the target interest rate is inflationary or deflationary, you should look at the market’s expectations for inflation. If the market is predicting higher than target inflation, money is easy. If it’s predicting below target inflation, money is tight.
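One common way to read the market’s inflation expectation – my own addition here, not something from Friedman and Schwartz – is the “breakeven” spread between ordinary government bonds and inflation-protected ones:

```python
def monetary_stance(nominal_yield, inflation_protected_yield, target=0.02):
    """Judge whether money is easy or tight from the bond market's
    implied inflation expectation (the 'breakeven' spread)."""
    expected_inflation = nominal_yield - inflation_protected_yield
    if expected_inflation > target:
        return "easy"
    if expected_inflation < target:
        return "tight"
    return "neutral"
```

So a 4.5% nominal yield against a 2% inflation-protected yield implies the market expects 2.5% inflation – above a 2% target, hence easy money – regardless of how low or high the target rate itself looks.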
Central banks often collect statistics so that they can judge the effectiveness of their policy actions. If inflation is too low, they’ll lower their target rate. Too high, and they’ll raise it. Over time, if the economy is stable, central banks will correct any short run problems introduced by interest rate targeting and eventually zero in on their inflation target. Unfortunately, this leaves the door open to painful short-term failures.
How do central banks fail in the short run?
First, I want to make it clear that short-term failures are bad. While long-term price stability is definitely a good thing, short-term fluctuations in the money supply can lead to recessions (remember our solid correlation between shrinking money supply and recessions). Even relatively minor short-term failures can have consequences for hundreds of thousands or millions of people whenever recessions lead to job losses.
Central banks most commonly fail in the short run because of some sort of unexpected shock, and the shocks that lead to long recessions most commonly originate in the financial sector. The 2001 dot-com crash, for example, was a stock market shock that left the banking system largely intact; despite the huge losses, it didn’t technically lead to a recession in the United States.
Shocks to the financial sector are unusually likely to cause recessions because of the key role that the financial sector plays in determining the monetary supply (via the deposit-reserve ratio we discussed above), as well as the key role that confidence in the financial sector plays (via the deposit-currency ratio).
When financial institutions run into trouble, they have to scramble for liquidity – for cash that they can have on hand in case people wish to withdraw their money – which means they make fewer loans. Suddenly, the money multiplier that banks supply shrinks and the amount of money in the economy decreases.
Things can get even worse when the public loses faith in the banking system. If you suspect that a bank might fail, you will want to get your money out while you still can. Unfortunately, if everyone comes to believe this, then the bank will fail. By design, it doesn’t have enough cash on hand to pay everyone back. When this happens, it is called a “run” on the banks or a “bank run”, and thankfully they’re becoming more and more rare. Many developed countries have ended them entirely with a program of deposit insurance. Those are the stickers you see on the door of your bank that promise your deposits will be returned to you, even if the bank fails.
It’s good that we’ve stopped bank runs, because they’re incredibly deflationary (they are very good at shrinking the money supply). This is due to the deposit-currency ratio being a key determinant of the total money supply. When people stop using banks, the deposit-currency ratio falls and the money supply decreases.
Since bank failures can occur quite suddenly and can spread throughout the financial system quickly, a financial crisis can cause a deflation that is too rapid for the central bank to react to. This is especially true because modern central banks have a general tendency to fear inflation much more than many monetarists believe they should. This is really unfortunate! A slow response to a decrease in the growth of the money supply (whether caused by a financial crisis or something else) can easily turn into a recession or depression, with all the attendant misery.
Okay, but can you explain how this happens?
Many individuals and companies like to keep a certain amount of money on hand, if at all possible. When they have less money than this, they economize until they feel comfortable with the amount of money they have. When they have more money, they tend to invest it or spend it.
When the money supply increases, whether via the central bank buying bonds, the government reducing reserve requirements, or people deciding to hold more of their money at banks, there are suddenly larger supplies of money at banks than they would like to hold on to.
Banks then spend this money (or invest it, which is essentially giving it to someone else to spend). The people banks give the money to immediately face the same problem; they have more money than they plan on holding. What follows is a game of hot potato, as everyone in the economy tries to keep their account balances where they want them (by spending money).
If there is free capacity in the economy (e.g. factories are idle, people are unemployed, etc.), then this free capacity eventually absorbs the money (that is to say: people who had less money on hand than they desired are quite happy to grab and hold onto the extra money). If there is very little free capacity in the economy, however (i.e. unemployment is low, production high), then this money really cannot be spent to produce anything extra. Instead, we have more money chasing the same amount of goods and services. The end result of that is prices increasing – what we call inflation – or, just as correctly, money becoming worth less.
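This “more money chasing the same goods” story is the quantity theory of money. In its accounting form, money times velocity equals prices times real output (MV = PY); holding velocity and output fixed, prices rise one-for-one with money:

```python
def price_level(money, velocity, real_output):
    """Quantity equation MV = PY, solved for the price level P."""
    return money * velocity / real_output
```

A 10% increase in the money supply, with velocity and output unchanged, produces a price level 10% higher.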
Once prices rise, people realize they need to hold onto slightly more money and a new equilibrium is reached.
After all, the money that people are holding onto is really acting as a unit of account. It denotes how many days (or weeks, or months) of consumption they want to have easy access to. Inflation changes how much money you need to hold onto to keep the same number of days (weeks, months) of consumption.
Now, let’s run this whole thing in reverse. Instead of increasing the supply of money, the money supply is decreasing (or failing to grow at the expected rate). Maybe there were new reserve requirements, or a financial crash, or the central bank misjudged the amount of money it needed to create. Regardless of how it happens, someone who was expecting to get some money isn’t going to get it.
This person (bank, corporation) will find themselves having less cash on hand than they hoped for and will cut back on their spending. This spending was going to someone else who was hoping for it. And suddenly the whole economy is trying to collectively spend less money, which it can’t do right away.
Instead, money becomes relatively more valuable as everyone scrambles for it. This looks like prices going down.
The price of labour (wages) might, in theory, be expected to go down, but in practice it doesn’t. It’s very emotionally taxing to try and convince many employees to accept pay cuts (in addition to being bad for morale), so firms tend to prefer pay freezes, cutting back on contract labour, switching some workers to part-time, and layoffs to pay cuts.
Decreased growth in the money supply affects more than just workers. Factories close or sit idle. Economic capacity diminishes. Ultimately, the whole economy can spend less, if some of the economy is gone.
All of these taken together are the hallmarks of recession. We see job losses, idle capacity, and closures. And we can directly point at failures of central bank policy as the culprit.
Can changes in the growth rate of money affect anything else?
There are three interesting relationships between inflation and employment.
First, it seems that higher than expected inflation leads to increased employment. Friedman and Schwartz speculated that this occurs because corporations are better positioned to see inflation than workers. When they see evidence of inflation, they can quickly hire workers at previously normal salaries. These salaries represent something of a discount when there’s unexpected inflation, so it’s quite a steal for the companies.
Unfortunately, this effect doesn’t persist. As soon as everyone understands that inflation has increased, they bake this into their expectations of salaries and raises. Labour stops being artificially cheap, and companies may end up letting go of some of the newly hired workers.
Second, it seems that increasing money supply is correlated with increasing real wages, that is, wages that are already adjusted for inflation. While it makes sense that inflation will lead to an increase in nominal wages (that is, inflation leads to higher salaries, even if those salaries cannot buy anything extra), it’s a bit odder that it leads to higher real wages. I haven’t yet seen an explanation for why this is true, but it’s an interesting tidbit and one I hope to understand better in the future.
Finally, inflation can play an important role in avoiding job losses. Not all economic downturns are caused by central banks. Sometimes, the shock is external (like an earthquake, commodity crash, or a trade embargo). In these cases, certain sectors of the economy may be facing losses and may respond with firing (as we saw above, wage cuts are rarely considered a tenable option). However, inflation can act as an implicit wage cut and stop job losses long enough for the economy to adjust.
If salaries are kept constant while inflation continues apace (or even increases), they become relatively less expensive, all without the emotional toll that wage cuts take. This can protect jobs and engineer a “soft landing”, where a shock doesn’t lead to any large-scale job losses.
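The arithmetic of the implicit wage cut is simple (the numbers here are my own illustration):

```python
def real_wage(nominal_wage, cumulative_inflation):
    """Purchasing power of a frozen salary after prices have risen --
    the 'implicit wage cut' described above."""
    return nominal_wage / (1.0 + cumulative_inflation)
```

A salary frozen at $50,000 through 4% inflation buys what roughly $48,077 bought before – about a 3.8% real pay cut, without anyone’s paycheque getting smaller.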
Obviously, this has to be temporary, so as not to erode the purchasing power of workers too much, but most shocks are temporary, so this is not a difficult constraint.
Okay, what does this say about policy?
There are three main policy takeaways from this post.
First, interest rates are a bad policy indicator. It’s hard for people to break their association between easy money and low interest rates, which means monetary policy is likely to end up too tight. The best analogy I’ve heard for interest rates is a steering wheel that sometimes turns the bus left when you steer left and sometimes turns it left when you steer right. If you wouldn’t get in a bus driven like that, you shouldn’t be thrilled about being in an economy that’s driven in the exact same way.
Second, a stable monetary policy is very useful. Note that stable monetary policy implies neither stable interest rates, nor stable inflation. Rather, a stable monetary policy means that everyone can have confidence that the central bank will act in predictable and productive ways. Stable monetary policy smooths out the peaks and valleys of the business cycle. It stops highs from becoming too speculative and keeps lows from leading to terrible grinding unemployment. It also lets unions and workers bargain for long-term wage increases and allows companies to grant them without being scared they’ll become unsustainable due to below-target inflation.
Third, expectations are a powerful tool. If banks believe that the central bank will print lots of money (and buy lots of assets) during a crisis, they won’t have to stop making loans or increase their reserves. Sometimes, the mere expectation of a forceful government intervention prevents any need for the intervention (like with deposit insurance; it rarely pays out because its existence has drastically reduced the need for it). Had the Federal Reserve reacted more aggressively to the financial crisis, it might have been possible to avoid the massive bailout to financial companies.
I know that “the money supply” will never be a progressive priority. But I think it’s a thing that progressives should care about. Billionaires may not like bad monetary policy, but they aren’t the ones who feel the brunt of its failure. Those are the workers who are laid off, or the pensioners who lose their savings.
I hope I’ve made the case that in order to care about them, we need to care about how money works.
Further Reading and Sources
I drew heavily on Money in Historical Perspective, by Anna J. Schwartz when writing this blog post. The papers Money and Business Cycles (1963, with Milton Friedman), Why Money Matters (1969), The Importance of Stable Money: Theory and Evidence (1983, with Michael D. Bordo), and Real and Pseudo-Financial Crises (1986) were particularly informative.
Scott Sumner’s blog The Money Illusion is an excellent resource for current monetarist thought, while J. P. Koning’s blog Moneyness provides many excellent historical anecdotes about money.
Like all of my posts about economics, this one contains way too many footnotes. These footnotes are mainly clarifying anecdotes, definitions, and comments. I’ve relegated them here because they aren’t necessary for understanding this post, but I think they still can be useful.
 Separately, central banks create currency for day to day use based on the public’s demand for currency. The more you go to the ATM, the more bills the central bank creates for you to withdraw. Banks return currency to the central bank every so often (either to buy assets the central bank holds, or to replace it with its digital equivalents). If fewer people want cash and ATMs are overprovisioned, banks will deposit more cash with the central bank than they, as a whole, withdraw.
Therefore, while the central bank controls the growth of the money supply, the public collectively determines the growth in the cash supply. While in general the cash supply continues to grow, this may change as more and more commerce becomes digital. Sweden has already reached peak cash and is now seeing its total cash supply decline (without a corresponding decrease in money supply). ^
 That is to say, money decreases at or near the peak of a business cycle because of some delayed effect from the previous business cycle, rather than as an independent variable that will affect the current business cycle. ^
 Furthermore, it seems that depressions can be transmitted among countries with a common currency source (e.g. the gold standard, the current international dollar based payment regime), but are less likely to be transmitted outside their home regime. China, for example, did not see a contraction during the first part of the Great Depression (it used silver as its monetary base, rather than gold) and only saw a contraction once the US began buying up silver, effectively shrinking the Chinese monetary supply. ^
 Although crucially, they don’t allow instant withdrawals, because they require some time to sell assets. ^
 We aren’t losing anything by making this distinction. The growth of products like credit cards has not affected the monetary transmission mechanism, see Has the Growth of Money Substitutes Hindered Monetary Policy? by Anna J. Schwartz and Philip Cagan, 1975. ^
 Financial terms referring to banks are often oddly inverted. Customer deposits with banks are termed liabilities (as the bank is liable to return them), while loans the bank has made are assets (as someone else will hopefully pay the bank back for them). If you want to see which of your friends have been reading about economics, say “I think a lot of the loans that bank made have become liabilities”. The ones who visibly twitch or look confused are the ones studying economics. ^
 In addition to regulation, government policy can affect the deposit-reserve ratio. In the aftermath of the 2007-2008 financial crisis, the Federal Reserve began, for the first time, to pay interest on reserves (both required reserves and excess reserves). This move led to a huge increase in excess reserves (to more than 16x required reserves by 2011; this happened because banks became very risk averse during the crisis and getting interest on their excess reserves became a risk-free way to make money) and a precipitous drop in the deposit-reserve ratio, which, as we discussed above, means a precipitous drop in the supply of money (which tends to lead to recessions and depressions). Scott Sumner calls this one of the greatest ever failures of monetary policy. ^
 In addition to cutting back on loans, this often results in banks selling assets, to try and increase the amount of cash they have on hand. If multiple banks run into trouble at once and they sell similar assets at the same time the value of the assets can drop precipitously, forcing other banks to sell and raising the possibility of multiple bank failures. This is called contagion, a word that came up a lot in the aftermath of the 2007-2008 financial crisis. ^
 “Full employment” is a term economists use to mean “the unemployment rate during neutral macroeconomic conditions”, which is simply the unemployment rate outside of a recession or a speculative bubble. It’s my opinion that full employment is heavily dependent on the political and cultural features of a country. Canada and America, for example, have rather different full employment rates (Canada’s allows more unemployment). I’d argue this is because Canada has more of a social safety net, which would imply that some people working in the US at “full employment” really would prefer not to work, but feel they have no other choice. This seems to fit well with empirical data. For example, when the extended unemployment benefits program ended in 2015, we simultaneously saw a drop in the unemployment rate and a decrease in wages. This is consistent with unemployed people suddenly scrambling for jobs at rather worse terms than they’d previously hoped for. ^
 Narrow exceptions apply and normally represent some sort of promotion or implicit sale. For example, short-term car loans on last year’s models will often be discounted below the target rate. It is generally a good idea to take a short-term loan at a below-target interest rate rather than pay a lump sum. This is not financial advice. ^
 Technically, for an event to qualify as a recession, there must be two quarters of successive contraction in national GDP. This never occurred during (or after) the Dot-com crash. Interestingly, the initial contraction was immediately preceded by the Federal Reserve signalling its intent to tighten monetary policy so as to rein in speculation, which it did by raising the interest rate target three times in quick succession. When markets crashed, it quickly reversed course, which may have played a role in averting a longer recession. ^
 This is another way of saying either “they try to bring a deposit-reserve ratio that has become too high back to normal” or “they try to shrink their deposit-reserve ratio”. In either case, the money supply is going to shrink. ^
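The link between the deposit-reserve ratio and the money supply can be sketched as a back-of-the-envelope calculation (this is the textbook money-multiplier simplification, with made-up numbers and ignoring currency held outside banks): broad money is roughly the monetary base times the deposit-reserve ratio, so a shrinking ratio shrinks the money supply even when the base is untouched.

```python
# Back-of-the-envelope sketch (illustrative numbers, not real data).
# In the simplified money-multiplier model, ignoring currency drain,
# broad money is roughly: monetary base * deposit-reserve ratio.
base = 100          # monetary base (bank reserves), in billions
normal_ratio = 10   # deposits are 10x reserves in normal times
crisis_ratio = 4    # cautious banks hold more reserves per dollar of deposits

normal_money = base * normal_ratio  # 1000: the usual money supply
crisis_money = base * crisis_ratio  # 400: same base, far less money
print(normal_money, crisis_money)
```

Nothing about the central bank's balance sheet changed between the two lines; the collapse in broad money comes entirely from banks choosing to hold more reserves per deposit.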
 Banks, as Matt Levine likes to say, are “a magical place that transforms risky illiquid long-term loans into safe immediately accessible deposits.” He goes on to point out that “like most magic, this requires a certain suspension of disbelief”. This is pretty socially useful; we want people to trust their bank accounts, but we also want loans for things like houses and factories and college to exist. Most of the time the magic works and everything is fine. But if people stop believing in the magic, it turns out that the guy behind the curtain is a bunch of loans that you can’t call due right away. If you try to, the bank fails. ^
 Remember, this is generally a good thing as it makes bank services much more affordable. If banks held onto all their reserves, banking services would be very expensive and many more disadvantaged people would be unbanked. ^
 Before insurance, only the first people to get to the bank would get their money back. This meant that you had a strong incentive to pull your money out at the very first sign of trouble. Otherwise stable and well-run banks could be undone by a rumour, as everyone panicked and flocked to the withdrawal counter. Deposit insurance changes the game; now no one has to rush to be first, which means no one needs to withdraw at all. ^
 Runaway inflation is bad! But a decrease in the money supply, or a decrease in the growth rate of the money supply is bad as well. A very irresponsible program of monetary growth could trigger double digit inflation. Failure to respond promptly to a decrease in the growth rate of money will cause a recession. Unfortunately, central banks aren’t blamed for recessions (by the government or the general populace) but are blamed for inflation, so they tend to act to minimize their chance of being blamed, instead of acting to maximize social good. ^
 Now in real life (as opposed to this simplified model), people probably don’t immediately spend or invest absolutely every extra dollar they get. They may expect to spend some extra in the near future and want to hold it in cash, or they may want to build up more of a cushion.
This would be an example of an inelastic relationship, where a change in one variable (money supply) leads to a less than proportional change in another (spending/investment).
Still, the more money that is dumped into the economy, the closer we get to the idealized model. If you win $100 in a lottery, you may just leave it in your bank account. But if you win $1,000,000 you’re going to be spending some of it and investing a lot of the rest. ^
 Remember, it is possible for the central bank to increase interest rates (create less money) without changing the monetary growth rate. If banks are creating a lot of money and the economy is already at capacity, the central bank can sometimes safely cut back on the amount of money it’s creating while still allowing adequate money to be created by banks. This is why central banks often raise interest rates during booms. It can be necessary to keep inflation from rising. ^
I am not the first to wonder if co-ops might be more “recession-proof” than conventional firms. Since co-ops generally operate via profit-sharing, rather than set wages, they may exhibit less downwards nominal wage rigidity (the economic term for people’s aversion to pay cuts), which means they might weather recessions with wage cuts, rather than outright job losses. I haven’t been able to find any studies on this subject, but I’d be very interested to see if they exist. ^
 There is a strain of leftist thought that views Paul Volcker reining in inflation as much worse for workers than any policy of Reagan’s. I’m trying to find a better explanation of this position somewhere and plan to write about it once I do. ^
Many, including me, have relied on Max Weber’s definition of a state as “the rule of men over men based on the means of legitimate, that is allegedly legitimate violence”. I thought that violence was synonymous with power and that the best we could hope for was a legitimate exercise of violence, one that was proportionate and used only as a last resort.
I have a blog post about state monopolies on violence because of Hannah Arendt. Her book Eichmann in Jerusalem: A Report on the Banality of Evil was my re-introduction to moral philosophy. It, more than any other book, has informed this blog. To Arendt, thinking and judging are paramount. It is not so much, to her, that the unexamined life is not worth living. It is instead that the unexamined life exists in a state of mortal peril, separated only by circumstances from becoming one of the “good Germans” who did nothing as their neighbours were murdered.
This blog is my attempt to think and to judge. To take moral positions, so that I am in the habit of it.
It’s a vulnerable spot, to stake out a position. You must always live with the risk of being later proved wrong. Or, perhaps worse, having been proved wrong before you even set pen to paper (or pixels to screen).
In her essay On Violence, Hannah Arendt demolished the premises upon which I based my own essay on how states should use their monopoly on violence. It’s rare that I get to see my own work so completely rendered useless. I found the process both useful and humbling.
On Violence is divided into three sections. In the first, Arendt covers how violence has been used and thought about in the decade preceding her essay (it was published in 1969). In the second, she lays out new definitions and models for strength, violence, power, and authority and challenges the definitions used by the great thinkers of the past. In the final section, she re-examines the recent events of her time in light of her definitions and discusses the promise and danger of power and violence.
So, enter the end of the 1960s. The past decade has seen student sit-ins and protests at practically every university. It has seen the end of official segregation and the ongoing struggles of the civil rights movement. In Europe, a military coup toppled the French Fourth Republic and liberalization in Czechoslovakia led to an invasion by Soviet tanks. In Vietnam, America took up France’s failing war and found itself unable to defeat a small cadre of revolutionaries.
Against this backdrop, Arendt remarks on the most dangerous fact of all: that through our artifice, we have attained the means (i.e. nuclear weapons) to destroy ourselves. There is, Arendt remarks, an age-old conflict between means and ends, in that means always threaten to overshadow the ends they seek to bring about.
Given that there is always an element of chance when it comes to attaining our ends, nuclear weapons mark the development of a new era, where means dominate ends because all means are so terrifying and all ends so uncertain. When you asked a youth in the 1960s where they hoped to be in the future, they would always preface an answer with “well, assuming I am still alive…”.
None of this was made more comforting by the many commonplace myths Arendt identified. Among the think tanks and the military industrial complex, she saw a tendency to transmute hypotheses into reality, to believe that possibilities identified using only reason (and no evidence) could become universal truths; the people in charge of the nuclear weapons did not believe their ends to be at all uncertain, despite all evidence to the contrary. Among the left, she noticed a glorification of violence that had no place in the texts of Marx (let alone in a movement supposedly built on freedom and compassion). The left, Arendt worried, was imbuing violence with all sorts of properties that it had never had, like ‘creativity’, or ‘the ability to heal’.
It is important to note that Arendt had no time for talk of violent revolutions. To her (as she claims, it was with Marx), “dreams never come true”; violence against an oppressor was just violence, not a transformative force capable of launching a new era. In this, she had the weight of recent bitter history on her side, as the communist revolutions were revealed to have brought about nothing but tyranny.
It is only after laying out this tortured landscape, full of pitfalls and dangers, that Arendt turned to the philosophy of violence, the main purpose of this essay.
The first part of this examination is an observation: philosophers and politicians, from the left to the right, have, for a long time, identified violence as a mere outgrowth or component of power. Arendt trots out a dizzying array of quotes, all as plausible as the Max Weber quote I opened with but coming from the likes of C. Wright Mills, Sartre, Sorel, Jouvenel, Voltaire, von Clausewitz, Mao Zedong, John Stuart Mill, and Hobbes.
It is against all of these definitions, which equate power with violence (and especially coercive violence that propagates the will of whomever wields it) that Arendt stands. She instead seeks a positive power in the philosophy (seldom actually achieved) of the revolutions of the 1700s (and the earlier ideal of polis life, deeply flawed as it was in practice), which viewed government of “man over man” as no fit way to live. In this framework, she identifies power, as distinct from violence, with “the rules of the game”, the set of socially acceptable actions. If you step outside of these rules, power manifests as social consequences: entreaties to change, glares, angry words, and in the extreme case, shunning.
This definition is not non-coercive. To social creatures like us, social punishments are real punishments. They may not be violence, but they can still act to change our will; or even to shape what we can will.
What prevents the “rules of the game” from being a tyranny (albeit a tyranny with majority support) of another name is some sort of democracy, some ability for people broadly to gain power and push; the chance to have a hand in writing the rules we all must play by. To use the language of the great revolutions of the 1700s, this is “the consent of the governed”.
If you doubt the existence of power as Arendt defines it, I challenge you to go to some public place and violate its norms. Any sufficient violation of norms should see the public exercise their power on you and will probably force you to stop. It is intensely hard for us humans to go against the will of a group, especially if that group makes its displeasure known. And it rarely even needs to come to anything as overt as glares; power is invisible, until you sense its boundaries. It’s a rare person who can act, knowing that they will immediately face intense social censure for their actions. It’s recognizing this, when so few others have, that marks Arendt’s brilliance.
(Interestingly, if you were to complete this challenge, the norms that you violate would most likely be norms that you otherwise agree with. The rules of the game are supposed to exist to make us feel happy and satisfied, able to interact with each other without fear. Personhood is an interface that carries expectations in order to receive recognition.)
Power will always be less absolute than violence. You obey a criminal with a gun far more readily than you obey the law, because the criminal (or rather, the gun) has an immediacy that power does not possess. Therefore, a law without popular support can be enforced, but only at the barrel of the gun. The violence of the enforcement will overwhelm the power of the majority.
Note the use of majority here, because that word is important in Arendt’s conception; to her, power will always require a majority. From this and from the immediacy of violence, it follows that the only way a minority can enforce their will on a majority is via violence.
Once you conceive of power as “the simple rules of the game”, it is clear how much weaker the tyrant is than the body politic. Tyranny falls apart as soon as its few enforcers refuse to wield the weapons necessary for its survival, because there is no backup, nothing else, that can maintain it. Power can survive the complete annihilation of the government, because the government is its mere outgrowth, not its heart.
That said, if we are concerned with the ability of tyrants to rule through violence, we should be fearful of the continual improvements we are making to the implements of violence. It is not, as you might think, simply that the implements have become more destructive. There is as much space between the knight and the peasant with a pitchfork as there is between the man with a rifle and the stealth bomber, which is to say that the tyrant has always outclassed the revolutionary.
The true danger is rather how modern implements of violence allow the tyrant to shrink their inner circle and yet still maintain their monopoly on violence. Automation has made violence more efficient, not yet to the pathological case where one man with a button and an army of robots can hold a whole nation in fear, but there is a sense we are fast approaching that terrifying state.
If tyranny shows how violence can unmake power, it is rebellions that show how power can overshadow violence. Rebellions are successful when the state has lost its grip on power, not when the rebels win on the battlefield. Armed rebellions are often made needless by the very fact of their existence, because rebels can only arm themselves when the gatekeepers of weapons decide they no longer wish to support the state. When the army refuses the demands of the strongman, the regime is already over. Armed rebellions succeed more because they erode the power of the state to the point where no one will back it than as a result of any decisive war of manoeuvre.
There is, of course, room for state violence outside of the extremes. Like in the case of tyranny, Arendt considers state violence to be the opposite of state power. It emerges only when power has failed (e.g. when power alone is not enough to keep a criminal “playing by the rules of the game”) or when power is breaking down (e.g. the police being called on to disperse protestors marching on the government). Because of this, Arendt believes that (democratic) states should not be defined by violence, which is only theirs in exigency.
The interaction between power and violence is a topic Arendt returns to over and over in this section. She also believes that violence flips power on its head (“the extreme form of power is All against One, the extreme form of violence is One against All”) – and steadily erodes it. I’m not entirely sure what the mechanism is supposed to be here though; it could be that when everyone sees violence as the quickest way to their ends, the structures of power – the incentive to play by the rules of the game in order to change them – disappear. Or it could be that violence leads to violence in return, as everyone tries to protect themselves without being able to resort to power. Regardless, the outcome is the same.
Terror is the result of violence that destroys all power and then fails to abdicate. The Soviet government provides one of the clearest examples of terror. After it shattered society, it seeded it with informants. This meant that no one could seek out others to organize power, because there was always the fear that you might be conspiring with an informant. Russia, I think, is still grappling with this total destruction of all power. It is unclear to me if it is at all capable of returning to rule based on power, rather than (in some part, at least) violence.
Nonviolent resistance movements, like Gandhi’s, work only when the government is scared of the corrosive effects of violence. Sit-ins and salt marches would have been met with massacres if used against the Soviets or Nazis, but against a British government that feared the results of becoming reliant on violence, they were successful.
(The British were right to fear violence. After all, it was soldiers tasked with “pacifying” the colonies that launched the coup d’état that ended the French Fourth Republic. Arendt strongly believed that relying on violence abroad would erode power at home, probably as a result of this experience, not to mention the violence used to quell anti-war demonstrators in America.)
These ideas provide the conceptual framework for Arendt to re-examine what was then recent history and justify why the theorist still has a right to talk about these things.
Arendt pauses to explain that she feels the need to justify her right to speak on these subjects, because of what she claims is an ongoing tendency to explain human behaviour in terms of animal behaviour. Scientists, says Arendt, are increasingly expanding the scope of which behaviours should be considered “natural”, which is to say, the same as other animals would exhibit. Tied into this is a nascent and seldom spoken belief, that reason requires us to sever some of these vestiges of our animal nature.
Arendt disagrees strenuously with both the premise and the prescription. First, she believes that it is wrong to say that we are proved to be more and more like animals. Instead, it is more correct to say that animals are proved to be more and more like us. It is still we who have the singular faculty of reason, but it is certainly amusing and interesting to see all of the ways in which we are not as alone upon our pedestal as we once assumed.
(I think she makes this distinction because if we are like animals, then the study of human nature belongs to the biologist. But if animals are like us, then human nature is still the domain of the philosopher. It’s a subtle difference, but to her, a very important one.)
When it comes to removing human capacity – like for rage – Arendt sees nothing but dehumanization. Rage, she explains, can be rational. We rage when we suspect something could be done but it is not. Rage is turned not against the volcano, but against the heavens for failing to prevent it, or the government for failing to protect us.
(I have been known to view critiques of science like this, from non-scientists, with suspicion. I think Arendt gets a pass because it is clear that her disagreements with science aren’t based on a fear of science disproving one of her specific political positions. Arendt is good at this in general; in an appendix, she cautions against a scientific meritocracy without using any of the tired and silly arguments people normally resort to.)
Rage and violence can also be a rational reaction to hypocrisy (if reason is a trap, why step into it?), although Arendt is quick to point out that this can backfire in two ways (when seeking out hypocrisy becomes an end in itself, as during The Terror; when violence is used to provoke violence and therefore “reveal” a hypocrisy that never existed).
To be honest, I’m not sure many people are arguing that scientists should remove fundamental characteristics of people anymore. But it strikes me as the sort of thing people plausibly could have argued about in the past. And it seemed worth noting that Arendt sees a (limited) role for violence or anger in politics (although it is also worth noting that she views violence per se as outside of the political sphere, because it has nothing to do with power). And finally, I should mention that like practically everyone, she views violence in self-defence as justified.
But Arendt does find many justifications of violence to be foolish. She cautions against “natural” metaphors for power, those that associate it with outward growth and fecundity. Once you accept these, she believes, you also accept that violence has the power of renewal. Violence clears away the bounds on power and breathes new life into it by allowing it to expand again (imagine the analogy to a forest fire, which clears away dead wood and lets a new forest grow). Given all of the follies and pains of empire, it is clear that even if this were true (and she is not convinced that it is), it is not recommended. Power, to Arendt, is perfectly content without expansion (and indeed, violent expansion, to her, always erodes power and replaces it with violence).
Nowhere does she find violence more dangerous than with respect to racism. On racist ideologies, she says:
Racism, as distinguished from race, is not a fact of life, but an ideology, and the deeds it leads to are not reflex actions, but deliberate acts based on pseudo-scientific theories. Violence in interracial struggle is always murderous, but it is not “irrational”; it is the logical and rational consequence of racism, by which I do not mean some rather vague prejudices on either side, but an explicit ideological system.
(To make it perfectly clear, she means “rational” here to read only as internal consistency, not external consistency.)
Luckily, power can overcome prejudices. The non-violent actions of the Civil Rights Movement are one of her best examples of the fruits of power, which broke apart segregation and ended (for a time) most restrictions at the ballot box.
That said, even here does Arendt see some role for limited political violence (I am using this to mean what it normally does, but should acknowledge Arendt would view this particular word combination as an oxymoron). She acknowledges that sometimes, it is only through the violence of the radical that the moderate is given a hearing. Unfortunately, beyond cautions that violence is useful only for short-term objectives and that it is indiscriminate in its ends (that is to say, it is a poor tool for systemic change, because it is as likely to gain token concessions as real change), Arendt offers no real framework with which to evaluate when violence might be justified.
Such a framework would be especially useful when evaluating violence against bureaucracy, a major theme of the last section. Arendt identifies bureaucracy as the force with which the student movements are fighting and claims that it is tempting to resort to violence when dealing with it because bureaucracy can leave you with no one to argue with and no avenue through which to gather and use power.
It is because of this that Arendt stands against the “progressive” goal of centralization and instead prefers federalism. This is interesting to me, because Arendt is normally identified as a leftist and her writing quotes Marx heavily. It is a testament to the contempt in which she holds bureaucracy (no doubt heavily influenced by her work analyzing the bureaucracy of the Nazis) that she views striking against it as more important than the progressive priorities that can be attained via centralization and bureaucracy.
Or perhaps it is just that Arendt’s leftist views are actually quite heterodox; there’s certainly a way to read her that suggests hostility to the welfare state and a preference (perhaps for reasons grounded in a desire to promote virtue and human connection?) for communal charity on a more local scale as a replacement.
Arendt acknowledges that bureaucracy has made the “impossible possible” (e.g. the landings on the moon), but she believes that this has come at the cost of making daily tasks (like governing) impossible.
To this conundrum, she offers no answer. This, I think, is very characteristic of Arendt. It’s very easy to see what she opposes, but hard to find a model of government for which she advocates. I often find her criticism incredibly insightful, so this curious stopping short, her refusal to recommend any specific action, is often frustrating.
As it is, all I’m left with are fears. The trends she laid out – the dangers of our means overshadowing our ends and the ossification that comes with bureaucracy – have not gone away. If anything, they’ve intensified. And while this book gave me a new model of power and violence, I’m not quite sure what to do with it.
But then, Arendt would probably say there’s no point in trying to do something with it alone. Power can only come in groups. And we, her students, are probably supposed to talk with others, to share our concerns, and to think about what we can do together, to keep the world running a little longer.
Degrowth is the political platform that holds that our current economic growth is unsustainable and advocates for a radical reduction in our resource consumption. Critically, it rejects the idea that this reduction can occur while GDP continues to grow. Degrowth, per its backers, requires an actual contraction of the economy.
The Canadian New Democratic Party came perilously close to being taken over by advocates of degrowth during its last leadership race, which goes to show just how much leftist support the movement has gained since its debut in 2008.
I believe that degrowth is one of the least sensible policies being advocated for by elements of the modern left. This post collects my three main arguments against degrowth in a package that is easy to link to in other online discussions.
All of this is evidence of an economy slowly shifting away from stuff. For an economy to grow as people turn away from stuff, they have to consume something else, which for consumers often means services and experiences. Instead of degrowth, I think we should accelerate this process.
It is very possible to have GDP growth while rapidly decarbonizing an economy. This simply looks like people shifting their consumption from things (e.g. cars, big houses) towards experiences (locally sourced dinners, mountain biking their local trails). We can accelerate this switch by “internalizing the externality” that carbon presents, which is a fancy way of saying “imposing a tax on carbon”. Global warming is bad and when we actually make people pay that cost as part of the price tag for what they consume, they switch their consumption habits. Higher gas prices, for example, tend to push consumers away from SUVs.
A responsible decarbonisation push emphasises and supports growth in local service industries to make up for the loss of jobs in manufacturing and resource extraction. There’s a lot going for these jobs too; many of them give much more autonomy than manufacturing jobs (a strong determinant of job satisfaction) and they are, by their nature, rooted in local communities and hard to outsource.
(There are, of course, also many new jobs in clean energy that a decarbonizing and de-intensifying economy will create).
If, instead of pushing the economy towards a shift in how money is spent, you are pushing for an overall reduction in GDP, you are advocating for a decrease in industrial production without replacing it with anything. This is code for “decreasing standards of living”, or more succinctly, “a recession”. That is, after all, what we call a period of falling GDP.
This, I think, is the biggest problem with advocating degrowth. Voters are liable to punish governments even for recessions that aren’t their fault. If a government deliberately causes a recession, the backlash will be fierce. It seems likely there is no way to continue the process of degrowth by democratic means once it is started.
This leaves two bad options: give over the reins of power to a government that will be reflexively committed to opposing environmentalists, or seize power by force. I hope that it is clear that both of these outcomes to a degrowth agenda would be disastrous.
Advocates of degrowth call my suggestions unrealistic, or outside of historical patterns. But this is clearly not the case; I’ve cited extensive historical data that shows an ongoing trend towards decarbonisation and de-intensification, both in North America and around the world. What is more unrealistic: to believe that the government can intensify an existing trend, or to believe that a government could be elected on a platform of triggering a recession? If anyone is guilty of pie-in-the-sky thinking here, it is not me.
Degrowth steals activist energy from sensible, effective policy positions (like a tax on carbon) that are politically attainable and likely to lead to a prosperous economy. Degrowth, as a policy, is especially easy for conservatives to dismiss and unwittingly aids them in their attempts to create a false dichotomy between environmental protection and a thriving economy.
It’s for these three reasons (the possibility of building thriving low carbon economies, the democratic problem, and the false dichotomy degrowth sets up) that I believe reasonable people have a strong responsibility to argue against degrowth, whenever it is advocated.
(For a positive alternative to degrowth, I personally recommend ecomodernism, but there are several good alternatives.)
The modern field of linguistics dates from 1786, when Sir William Jones, a British judge sent to India to learn Sanskrit and serve on the colonial Supreme Court, realized just how similar Sanskrit was to Persian, Latin, Greek, Celtic, Gothic, and English (yes, he really spoke all of those). He concluded that the similarities in grammar were too close to be the result of chance. The only reasonable explanation, he claimed, was the descent of these languages from some ancient progenitor.
This ancestor language is now awkwardly known as Proto-Indo-European (PIE). It and the people who spoke it are the subject of David Anthony’s book The Horse, the Wheel, and Language. I picked up the book hoping to learn a bit about really ancient history. I ended up learning some of that, but this is more a book about linguistics and archeology than about history.
Proto-Indo-European speakers produced no written works, so almost all of their specific history is lost. The oldest products of their daughter languages – like the Rig Veda – date from well after the last speakers of the original language passed away.
Instead of the history that is largely barred to us, this book is really Professor David Anthony attempting to figure out who these speakers were and what their lives looked like, without the benefit of any written words. He does this via two channels: their language, and the physical remains of their culture.
Unfortunately, there is at least one glaring problem with each approach. Their language is thoroughly dead and there was (at the time of writing) no scholarly consensus on where they originated.
Professor Anthony is undaunted by these problems. It turns out that we can reconstruct their language and from that reconstruction, determine where they most likely lived. If both approaches are done properly, it should be possible to see archeological details reflected in their language and details of their language reflected in their remains.
The first problem to solve then is the reconstruction of PIE. How does one do this?
Well, it turns out that all languages change in similar ways. The way we pronounce consonants often shifts, with hard sounds sometimes changing into soft sounds, but very rarely the reverse. How we say words also changes. Assimilation occurs because we tend to omit difficult-to-pronounce or inconvenient middle syllables (this has led to the invention of contractions in English) and addition happens because we add syllables in the middle of difficult tongue movements (compare the “proper” and colloquial ways of pronouncing the word “nuclear”, or the difference between the French athlète and the English athlete).
It would be very odd for an additional syllable to be added in an area where tongue movements aren’t particularly hard, or a syllable to be removed from a word that is typically enunciated. Above all, these changes are regular because they rely on predictable laziness.
Changes tend to happen to many words at once. When people began to hear the Proto-French tsentum (root of cent, the French word for 100) as different from the Latin kentum, they had to make a decision about how exactly it would be pronounced. They chose a soft-c, a sound Latin lacks, but that is easier to say. This change got carried over to every ts-, c-, or k- that had previously made the same sound as kentum/tsentum, except those before a back vowel (like “o”), presumably because a soft sound there is actually harder to say.
There’s one final type of change that Anthony mentions: analogy. This is where a grammatical rule used in a single place (e.g. pluralization with -s or -es) is expanded to encompass many more words or cases (most English nouns were originally pluralized with other suffixes, or with stem changes like “geese”; it was only later that people decided -s and -es would be the general markers of plural nouns).
If you have a large sample of languages descended from a historical language (and with Proto-Indo-European, there really is no lack), you can follow a bunch of words backwards through likely changes and see if they all end up in the same place.
If you do this for the modern words for “hundred” from many PIE daughter languages, you’re left with *km’tom (an asterisk is used before sounds where there is no direct evidence). All words for hundred in modern descendants (as well as dead ancient descendants that we know how to speak) of Proto-Indo-European can be derived from *km’tom using only well-attested and empirically observed rules of language change.
(I occasionally got chills reading reconstructed words. It’s amazing how some words that our distant ancestors spoke thousands upon thousands of years ago are fairly well preserved in our modern speech.)
This is pretty cool, because it allows us to start seeing which words were common enough in Proto-Indo-European to be passed down to all daughters and which words were borrowed in.
With a reconstructed vocabulary of about 1,500 words, we can figure out some things that were important to Proto-Indo-Europeans. They seem to have words for relatives on the male side, but not the female side. This suggests that after marriage, the wife moved in with the groom. Less domestically, they seemed to have a word for cattle rustling, suggesting that they weren’t unfamiliar with increasing their wealth at the expense of their neighbours’.
That’s not all we can get from their words. Linguists also believe that Proto-Indo-Europeans had chiefs, who in turn had patrons. They worshipped a male sky deity and sacrificed horses and cattle to him. They formed warrior bands. They avoided speaking the name of the bear. They drove, or knew of, wagons. And they had two words that we could translate as sacred, “that which is forbidden” and “that which is imbued with holiness”.
(There are many more minor cultural touchstones scattered throughout the book. I don’t want to spoil them all.)
We also know the animals and plants they had words for. Reconstructed PIE has words for temperate trees, horses and cows, bees and honey.
These give us clues to where they lived, in the same way that knowing the words “shinney”, “hockey”, “Zamboni” and “creek” are spoken somewhere might help you make a guess as to where that somewhere is.
And while these words help us rule out the Mediterranean and the deserts, they don’t give us much in the way of a specific location without a when, and pinning down that when requires two different methods.
First, we can figure out the approximate death of Proto-Indo-European, the approximate century or millennium when it was entirely splintered into its daughters, by using what linguists have discovered about the rate of language change.
While most vocabulary changes rather quickly, making this a poor tool for dating very old languages, there are a group of words, the core vocabulary, that change much more slowly. The core vocabulary of any language is only a couple hundred words, but they’re some of the most important ones. Normally, core vocabulary includes the words for: body parts, small numbers, close relatives, a few basic needs, a couple of natural features or domesticated animals, some pronouns, and some conjunctions.
English, a prolific borrower, has borrowed 50% of its total vocabulary from the romance languages. Its core vocabulary, however, is largely free of this borrowing, with only 4% of core vocabulary words borrowed from romance languages.
Core vocabulary changes by about 14-19% every thousand years depending on the language. It’s also known that once two dialects differ by more than 10% of their core vocabulary, they are more properly thought of as separate languages.
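These two numbers are enough for a back-of-the-envelope calculation. Under the classic glottochronology model (my framing, not something Anthony spells out), each branch independently retains its core vocabulary at the quoted rate, so we can estimate how long two isolated dialects take to cross the 10% threshold and become separate languages:

```python
import math

def divergence_time(shared_fraction, change_per_millennium):
    """Estimate millennia since two dialects split, given the fraction of
    core vocabulary they still share. Assumes both branches change
    independently at the same constant rate, so the shared fraction decays
    as retention ** (2 * t). A simplified sketch, not Anthony's method."""
    retention = 1.0 - change_per_millennium
    # shared = retention ** (2 * t)  =>  t = ln(shared) / (2 * ln(retention))
    return math.log(shared_fraction) / (2 * math.log(retention))

# The post's figures: core vocabulary changes 14-19% per millennium,
# and dialects differing by more than 10% count as separate languages.
for rate in (0.14, 0.19):
    years = divergence_time(0.90, rate) * 1000
    print(f"at {rate:.0%} change per millennium: ~{years:.0f} years")
```

At the quoted rates, two fully isolated dialects would count as separate languages after only about 250-350 years, which helps explain how one ancestral language could splinter into so many daughters.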
Here’s where written language comes in handy. By comparing written inscriptions with known creation dates in different daughter languages, we can make a guess as to when the languages diverged.
The oldest inscriptions in a PIE-derived language are in the Anatolian languages (which were spoken in what is now Turkey). However, Anthony chooses not to use these, because they entirely lack many grammatical innovations that are otherwise common in daughter languages. This leads him to believe that they split away much earlier than other daughters. The presence of later shared innovations means that at the time of the Anatolian split, Proto-Indo-European was probably still a living language and still evolving.
Better candidates are archaic Greek and Old-Indic, both of which have inscriptions dated to around 1,450 BCE. By comparing the differences in wording and grammar between these two and using known rates of change, Anthony dates the end of Proto-Indo-European at around 2,500 BCE. This means that after 2,500 BCE, it doesn’t make sense to speak of a single unified Proto-Indo-European language.
Second is the birth date, the other half of the critical window. To find it, Anthony looks for words that have a known date of invention, specifically “wool” and “wagon”. Getting broadly useful amounts of wool from sheep wasn’t possible until a mutation made sheep coats much larger. We know roughly when this mutation occurred, because sheep suddenly became a larger portion of herds around 3,500 BCE, displacing goats (which produce more milk). The only reasonable explanation for this event is the advent of wool producing sheep, which were very valuable as a source of clothes.
Similarly, wagons have left physical evidence (both directly and in preserved images) and that evidence has been carbon dated to 3,500 BCE.
Since all Proto-Indo-European languages outside of the Anatolian branch have related words for both “wagon” and “wool” that show no evidence of borrowing from other languages, it seems reasonable to conclude that some form of the language existed when wagons and wool first began to reshape the pre-historic world. That means the language had to exist by 3,500 BCE.
There is, I should note, one competing theory that Anthony outlines, in which PIE and Indo-Hittite languages split around 7,500 BCE. This theory, however, requires several unlikely things to happen: it requires the word for wagon to evolve from the same verb meaning “to turn” in both branches (five similar verbs existed), it requires the PIE speaking people to disperse over all of Europe and become the dominant culture then (this would have been very hard pre-horse domestication, when material cultures were small and language territories tended to be much smaller than modern countries), and all of this would have to happen while material cultures were becoming very different but languages (supposedly) weren’t evolving.
Anthony doesn’t give this theory much credence.
With a rough time-range, we can begin looking for our Proto-Indo-Europeans in space. Anthony does this by looking for evidence of very old loan words. He finds a set coming from Uralic, which also has a bevy of very old loanwords from PIE.
Uralic (appropriately) probably first emerged somewhere near the Ural Mountains. This corresponds well with our other evidence because the area around the Urals (where borrowing could have taken place) is temperate and home to the flora and fauna words we know exist in PIE.
The PIE word for honey, *médhu (note its similarity with the English word for a fermented honey drink, “mead”), is particularly useful here. We know that bees weren’t common in Siberia during the time when we suspect PIE was being spoken (and where they were common, the people weren’t herders), but that bees were common on the other side of the Urals.
Laying it all out, we see that PIE speakers were herders (there’s an expansive set of words relating to the tasks herders must accomplish), who lived near the Urals but not in Siberia. The best archeological match for these criteria is a set of herder people who lived in what is now modern-day Ukraine and it is these people that Anthony identifies as the Proto-Indo-Europeans.
If this feels at all dry, I want to assure you that it wasn’t when I read it. I felt that the first section of the book was the strongest. Anthony provides an excellent overview of linguistics, archeology, and some of the crazy stuff he’s had to invent to help him in his studies.
For example, he believes that horses were ridden much earlier than was commonly thought, perhaps around or before 3,500 BCE. To prove this, he and his wife embarked on a study of how bits wear teeth in horses’ mouths, which culminated in empirical studies with a variety of bit types (including rope) done on live horses that had never been previously given bits, assessed using electron microscopy. The whole thing is a bit bonkers, but it has resulted in a validated test that allows archeologists to determine if a given horse was ever ridden, as well as vindication for Anthony’s chronology of domestication.
Unfortunately, a lot of the rest of the book was genuinely dry. There was a dizzying array of cultures inhabiting the Eurasian steppes in the period Anthony covers, each with their own house type, pottery type, antecedents, and descendants. Anthony goes through these in excruciating detail. It’s the sort of thing that other archeologists love him for – a lot of these cultures are very poorly described outside of Russian language publications – but it’s hard for a lay-person to follow. I may have pulled it off if I built a giant flow chart, but as it was, I mostly felt overwhelmed.
(Anthony has to go through them all to explain how PIE-derived languages ended up everywhere we know they did. People of Europe don’t speak PIE-derived languages just because of Latin. Many people the Romans conquered spoke languages that were distantly related to the invaders’ tongue. Those languages need to be accounted for in any theory about Proto-Indo-Europeans.)
This is disappointing, because the history started off so engagingly. Anthony outlines how the earliest ancestors of the Proto-Indo-Europeans had persistent cultural frontiers with hunter-gatherers on the Urals on one side and the farmers in the Bug-Dniester valley on the other.
The herding and farming economies required a moral shift from previous hunter-gatherer practices, one that would see agriculturalists harden their hearts to their own children starving, if the only thing that could assuage their hunger was their last few breeding pairs or their seed grain. This is the first time I saw someone lay out the moral transformation necessary to accept agriculture, and having it laid out so starkly made it much easier to understand why not every pre-historic group was willing to adopt it.
(I had always thought the biggest moral change was accepting accumulation of wealth, but this one is, I think, more important.)
This is not to say that the herders and farmers were exactly alike; their different ways of life meant they were culturally distinct. In addition to their dwellings and material culture, they differed in funeral customs and probably in religion. Everything we know about early-PIE speakers suggests that they worshipped a sky god of some sort. The farmers who lived next door decorated their houses with female figurines, figures that never show up in any excavation of herder camps or grave sites.
I was also shocked at the amount of long distance trade and the wealth acquisition that was going on 6,000 years ago. There are kurgans (circular rock topped graves) with grave goods from Mesopotamia dating from that long ago, as well as one kurgan where someone was buried with almost 4 kilograms of gold ornamentation.
The herders and farmers didn’t live next door in harmony forever. Changes to their stable arrangement happened as a result of one of the Earth’s periodic climate fluctuations (which caused a collapse among many of the farmers and may have led to more raiding from the early-PIE speaking herders) and later the adoption of horse-riding (which made raiding easier) and wagons (which allowed herders to bring water with them and opened the inner steppes up to grazing).
Larger herds and changing boundaries led to clashes among the herders (we’ve found kurgans where the bodies bear marks of violent deaths) and to raids on agriculturalists (we’ve found burned villages peppered with arrows), although interestingly, never the farmers directly adjacent to the steppes. It may be that the herders didn’t want to disrupt their trading relationships with their neighbours and so were careful to raid dozens of kilometers away from their own borders (a task made easier with horses).
The farmers were no pushovers; some of their towns held up to 10,000 people by the third millennium BCE. These towns were bigger than the cities of Mesopotamia, but lacked the civic organizational features of the true cities of the Fertile Crescent.
And it was at about this point in the narrative where the number of cultures proliferated beyond my ability to follow and I began writing down interesting facts rather than keeping track of the grand narrative.
Here are a few that I liked the most:
About 20% of corpses in warrior graves (those with weapons and other symbols of membership in warrior society) whose gender is known are female. This matches the percentage in much later steppe graves. As Kameron Hurley said, women have always fought.
Contrary to popular stereotypes, the cultures of the Eurasian steppes weren’t reliant on cities for manufactured goods. They had their own potters and metalsmiths and they established many mining camps. In fact, by the 2000s BCE, it seems that Mesopotamian cities were dependent on metal mined on the steppes.
In the early Bronze Age, tin was worth its weight in silver. When tin wasn’t available, bronze was made with arsenic.
Horses were probably domesticated because they winter better than the other animals that were available in Eurasia at the time. Cows will starve to death if grass is hidden by snow, while sheep and goats use their nose to move snow off of grass (which means that they’re helpless once it’s covered in ice). Sheep, cows, and goats are all unable to drink water that is covered in ice. Horses break ice and move snow with their hooves, making winter no real inconvenience to them. Mixing horses with cows can allow cows to eat the grass that horses uncover.
Disaffected farmers may have been attracted to the herding economy because wealth was much easier to build up. Farmland is hard to acquire more of without angering your neighbours, but herds given good pasture will naturally grow exponentially. A lot of the spread of the herding economy into Europe probably used some sort of franchise system, where locals joined the PIE culture and were given some animals, in exchange for providing protection and labour to their patron.
I’ve struggled through a lot of books that are clearly meant for people more knowledgeable in the subject than I am. It might just be a function of how interested I am in archeology (that is to say: only tolerably interested) that this is the first of them that I wish had an abridged edition. If you aren’t deeply interested in archaeology or pre-history, there’s a lot of this book that you’ll probably end up skimming.
The rest of it makes up for that. But I think there would be market for Anthony to write another leaner volume, meant for a more general audience.
If he ever does, I’ll probably give it a read.
David Anthony is very sensitive to the political ends to which some scholars of Proto-Indo-European have turned their work. He acknowledges that white supremacists appropriated the self-designation of “Aryan” used by some later speakers of PIE-derived languages and used it to refer to some sort of ancient master race. Professor Anthony does not buy into this one bit. He points out that Aryan was always a cultural term, not a racial one (showing the historical ignorance of the racists) and he is careful to avoid assigning any special moral or mythical virtue to the Proto-Indo-Europeans whose culture he studies.
White supremacists will find nothing to like about this book, unless they engage in a deliberate misreading. ^
 This is why the French côte is still similar to the Latin costa. ^
 Anthony identifies improvements in carbon dating, especially improvements in how we calibrate for diets high in fish (which contain older carbon, leading to incorrect ages) as a major factor in his ability to untangle the story of the Proto-Indo-Europeans. ^
 Uralic is the language family that in modern times includes Finnish and some languages spoken in Russia. ^
 While looking up the word *médhu, I found out that it is also likely the root of the Old Chinese word for honey, via an extinct Proto-Indo-European language, Tocharian. The speakers of Tocharian migrated from the Proto-Indo-European homeland to Xinjiang, in what is now China, which is likely where the borrowing took place. ^
Richard Nixon would likely have gone down in history as one of America’s greatest presidents, if not for Watergate.
To my mind, his greatest successes were détente with China and the end of the convertibility of dollars into gold, but he also deserves kudos for ending the war in Vietnam, continuing the process of desegregation, establishing the EPA, and signing the anti-ballistic missile treaty.
Nixon was willing to try unconventional solutions and shake things up. He wasn’t satisfied with leaving things as they were. This is, in some sense, a violation of political norms.
When talking about political norms, it’s important to separate them into their two constituent parts.
First, there are the norms of policy. These are the standard terms of the debate. In some countries, they may look like a (semi-)durable centrist consensus. In others they may require accepting single-party rule as a given.
I believe that the first set of political norms are somewhat less important than the second. The terms of the debate can be wrong, or stuck in a local maximum, such that no simple tinkering can improve the situation. Having someone willing to change the terms of the debate and try out bold new ideas can be good.
On the other hand, it is rarely good to overturn existing norms of political behaviour. Many of them came about only through decades of careful struggle, as heroic activists have sought to place reasonable constraints on the behaviour of the powerful, lest they rule as tyrants or pillage as oligarchs.
The Nixon problem, as I’ve taken to describing it, is that it’s very, very hard to find a politician who can shake up the political debate without at the same time shaking up our much more important political norms.
Nixon didn’t have to cheat his way to re-election. He won the popular vote by the highest absolute margin ever, some 18 million votes. He carried 49 out of 50 states, losing only Massachusetts.
Now it is true that Nixon used dirty tricks to face McGovern instead of Muskie and perhaps his re-election fight would have been harder against Muskie.
Still, given Muskie’s campaign was so easily derailed by the letter Nixon’s “ratfuckers” forged, it’s unclear how well he would have done in the general election.
And if Muskie was the biggest threat to Nixon, there was no need to bug Watergate after his candidacy had been destroyed. Yet Nixon and his team still ordered this done.
I don’t think it’s possible to get the Nixon who was able to negotiate with China without the Nixon who violated political norms for no reason at all. They were part and parcel of an overriding belief that he knew better than everyone else and that all that mattered was power for himself. Regardless, it is clear from Watergate that his ability to think outside of the current consensus was not something he could just turn off. Nixon is not alone in this.
This, I think, is the biggest mistake people like Peter Thiel made when backing Trump. They saw a lot of problems in Washington and correctly concluded that no one who was steeped in the ways of Washington would correct them. They decided that the only way forward was to find someone brash, who wouldn’t care about how things were normally done.
But they didn’t stop and think how far that attitude would extend.
Whenever someone tells you that a bold outsider is just what a system needs, remember that a Nixon who never did Watergate couldn’t have gone to China. If you back a new Nixon, you had better be prepared for a reprise.
I was reading a post-modernist critique of capitalist realism – the resignation to capitalism as the only practical way to organize a society, arising out of the failure of the Soviet Union – and I was struck by something interesting about post-modernism.
Insofar as post-modernism stands for anything, it is a critique of ideology. Post-modernism holds that there is no privileged lens with which to view the world; that even empiricism is suspect, because it too has a tendency to reproduce and reify the power structures in which in exists.
A startling thing then, is the sterility of the post-modernist political landscape. It is difficult to imagine a post-modernist who did not vote for Bernie Sanders or Jill Stein. Post-modernism is solely a creature of the left and specifically that part of the left that rejects the centrist compromise beloved of the incrementalist or market left.
There is a fundamental conflict between post-modernism’s self-proclaimed positioning as an ideology without an ideology – the only ideology conscious of its own construction – and its lack of political diversity.
Most other ideologies are tolerant of political divergence. Empiricists are found in practically every political party (with the exception, normally, being those controlled by populists) because empiricism comes with few built-in moral commitments and politics is as much about what should be as what is. Devout Catholics also find themselves split among political parties, as they balance the social justice and social order messages of their religion. You will even, I would bet, find more evangelicals in the Democratic party than you will find post-modernists in the Republican party (although perhaps this would just be an artifact of their relative population sizes).
Even neoliberals and economists, the favourite target of post-modernists, find their beliefs cash out to a variety of political positions, from anarcho-capitalism or left-libertarianism to main-street republicanism.
It is hard to square the narrowness of post-modernism’s political commitments with its anti-ideological intellectual commitments. Post-modernism positions itself in communion with the Real, that which “any [constructed, as through empiricism] ‘reality’ must suppress”. Yet the political commitments it makes require us to believe that the Real is in harmony with very few political positions.
If this were the actual position of post-modernism, then it would be vulnerable to a post-modernist critique. Why should a narrow group of relatively privileged academics in relatively privileged societies have a monopoly on the correct means of political organization? Certainly, if economics professors banded together to claim they had discovered the only means of political organization and the only allowable set of political beliefs, post-modernists would be ready with that spiel. Why then, should they be exempt?
If post-modernism instead does not believe it has found a deeper Real, then it must grapple with its narrow political attractions. Why should we view it as anything but a justification for a certain set of policy proposals, popular among its members but not necessarily elsewhere?
I believe there is value in understanding that knowledge is socially constructed, but I think post-modernism, by denying any underlying physical reality (in favour of a metaphysical Real), removes itself from any sort of feedback loop that could check its own impulses (contrast: empiricism). And so, things that are merely fashionable among its adherents become de facto part of its ideology. This is troubling, because the very virtue of post-modernism is supposed to be its ability to introspect and examine the construction of ideology.
This paucity of political diversity makes me inherently skeptical of any post-modernist identified Real. Absent significant political diversity within the ideological movement, it’s impossible to separate an intellectually constructed Real from a set of political beliefs popular among liberal college professors.
And “liberal college professors like it” just isn’t a real political argument.
The fundamental problem of governance is the misalignment between means and ends. In all practically achievable government systems, the process of acquiring and maintaining power requires different skills than the exercise of power. The core criterion of any good system of government, therefore, must be selecting people by a metric that bears some resemblance to governing, or perhaps more importantly, having a metric that actively filters out people who are not suited to govern.
When the difference between means and ends becomes extreme, achieving power serves only to demonstrate unsuitability for holding it. Such systems are inevitably doomed to collapse.
Many people (I am thinking most notably of neo-reactionaries) put too much stock in the incentives or institutions of government systems. Neo-reactionaries look at the institutions of monarchies and claim they lead to stability, because monarchs have a large personal incentive to improve their kingdom and their lifetime tenure should afford them a long time horizon.
In practice, however, monarchies are rather unstable. This is because monarchs are chosen by accident of birth and may have little affinity for the patient business of building a nation. In addition, to maintain power, monarchs must be responsive to the aristocracy. This encourages the well documented disdain for the peasantry that was common in monarchical governments.
Monarchy, like many other systems of government, was not doomed so much by its institutions, as by its process for choosing a leader. The character of leaders is the destiny of nations and many forms of government have no way of picking people with a character conducive to governing well.
By observing the pathologies of failed systems of government, it becomes possible to understand why democracy is a uniquely successful form of government, as well as the risks that emergent social technologies pose to democracy.
“Lenin’s core of original Bolsheviks… were many of them highly educated people…and they preserved these elements even as they murdered and lied and tortured and terrorised. They were social scientists who thought principle required them to behave like gangsters. But their successors… were not the most selfless people in Soviet society, or the most principled, or the most scrupulous. They were the most ambitious, the most domineering, the most manipulative, the most greedy, the most sycophantic.” – Francis Spufford, Red Plenty
The revolution that created the USSR was one founded on high-minded ideals. The revolutionaries were going to create a new society, one that was fair, equal, and perfect; a utopia on earth. Yet, the bloody business of carving out a new state often stood in stark contrast to these ideals – as is common in revolutions.
Seizing power in a revolution requires a grasp of military tactics and organization; the ability to build a parallel state apparatus in occupied areas; the ability to inspire people to fight for your side; and a grasp of propaganda. While there is overlap with the skills necessary for civilian rule here, the perspective of a rebel is particularly poorly suited to governing according to the rule of law.
It is hard to win a revolution without coming to believe on some fundamental level that might makes right. The 20th century is littered with examples of rebels who cannot put aside this perspective shift when they transition to civilian rule.
(This, incidentally, is why nonviolent resistance leads to more stable governments and why repressive governments are so scared of it. A successful non-violent revolution leaves much less room for the dictator’s eventual return.)
It was so with the Soviets. Might makes right – perhaps more so even than communism – was the founding ideal of the Soviet Union.
Stalin succeeded Lenin as the leader of the Soviet Union via political manoeuvering, backstabbing, and the destruction of his enemies, tactics that would become key in future transfers of power.
To grasp the reins of the Soviet Union, it became necessary to view people as tools; to bribe key constituencies, to control the secret police, and to placate the army.
And this set of tools is not well suited to governing a prosperous nation. Attempts to reform the USSR with shadow prices, perhaps the only thing that could have saved communism, failed because shadow prices represented a loss of central control. If prices were not set politically, it would be impossible to manipulate them to reward compatriots and guarantee stability.
It’s true that the combination of its economic system and its ambitions doomed the Soviet Union right from the start. It could not afford to be a global superpower while constrained by an economic philosophy that sharply limited its growth and guaranteed frequent shortages. But both of these were, in theory, mutable. It was only with such an ossifying process for choosing leaders that the Soviet Union was destined for failure.
In the USSR, legitimacy didn’t come from the people, but from the party apparatus. Bold changes, of the sort necessary to rescue the Soviet economy, were unthinkable because they cut against too many entrenched interests. The army budget could not be decreased because the leader needed to maintain control of the army. The economic system couldn’t be changed because of how tightly the elite were tied to it.
The USSR needed bold, pioneering leaders who were willing to take risks and shake up the system. But the system guaranteed that those leaders would never rule. And so, eventually, the USSR fell.
“The difference between a democracy and a dictatorship is that in a democracy you vote first and take orders later; in a dictatorship you don’t have to waste your time voting.” – Charles Bukowski
When military dictatorships fall, they all fall in the same way: with an increasingly isolated junta issuing orders that are ignored by increasingly large swathes of the populace. The act of rising to the top of a military inculcates a belief that victory can always be achieved by finding the right set of orders. This is the mindset that military dictators bring to governing and it always leads to disaster. Whatever virtues of organization or delegation generals learn, they are never enough to overcome this central flaw.
Governing a modern state requires flexibility. There are always many constituencies: business owners, workers, teachers, doctors. There are often many regions, each with different economic needs. To support resource extraction can harm manufacturing – and vice versa. Bureaucrats have their own pet projects, their own red lines, and their own ideas.
This environment is about as different from an army as it is possible to be. The military tells soldiers to follow orders. Civilians are rather worse at following them.
Expecting a whole society to follow orders, to put their own good aside for someone else’s plan is folly. Enough people will always buck orders to make a mockery of any grand design.
It is for this reason that military governments are so easy to satirize. Watching career soldiers try and herd cats can be darkly amusing, although the humour is quickly lost if one dwells too long on the atrocities military governments turn to when thwarted.
After all, the flip side of discipline is punishment. Failing to obey orders in the military is normally a crime, whereas failing to obey orders in the civil service is often par for the course. When these two mindsets collide, a junta is likely to impose harsh punishments on anyone disobeying. This doesn’t spring naturally from their position as dictators – most juntas start out with stunningly idealistic beliefs about national salvation – but does spring naturally from military regulations. And so again we see a case where it is the background of the leaders, not the structure of the dictatorship, that leads to the worst excesses.
You can replace the leaders as often as you like or tweak the laws, but as long as you keep appointing generals to rule, you will find they expect orders to be obeyed unquestioningly and respond harshly to any perceived disloyalty.
There is one last great vice of military dictatorships: a tendency to paper over domestic discontent with foreign wars. Military dictators know that revanchist wars can create popular support, so foreign adventuring is often their response when their legitimacy begins to crumble.
Off the top of my head, I can think of two wars started by military dictatorships seeking to improve their standing (the Falklands War and the Six-Day War). No doubt a proper survey would turn up many others.
Since the time of Plato, soldier-rulers have been held up as the ideal heads of state. It is perhaps time to abandon this notion.
“Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.” – Winston Churchill to the House of Commons
To gain power in a democracy, a politician needs to win an election. This normally requires some skill in oratory and debate, the ability to delegate to competent subordinates, the ability to come up with a plan and clearly articulate how it will improve people’s lives, possibly some past experience governing that paints a flattering picture, and above all a good reputation with enough people to win an election. This oft-maligned “popularity contest” is actually democracy’s secret weapon.
Democracy is principally useful as a form of government that is resistant to corruption. Corruption is the act of arrogating state power to take benefits for yourself or give them to your friends. Persistent and widespread corruption is one of the biggest impediments to growth worldwide, so any technology (and government systems are a type of cultural technology) that reduces corruption is a powerful force for human flourishing.
It is the requirement for a good reputation that helps democracy stand against corruption. In any society where corruption is scorned, democracy ensures that no one who is visibly corrupt can grasp power; if corruption is sufficient to ruin a reputation, no one who is corrupt can win a “popularity contest”.
(It is also worth noting that the demand for a sterling reputation rules out people who have tortured dissidents or ordered protestors shot. As long as autocrats are not revered, democracy can protect against many forms of repression.)
There are three main ways that democracy can fail to live up to its promise. First, it can fail because corruption isn’t appropriately sanctioned. If corruption becomes just the way things are done and scandals stop sticking, then democracy becomes much weaker as a check on corruption.
Second, democracy can be hijacked by individuals whose only skill is self-promotion. In a functioning democracy, the electorate demands that political resumes include real achievements. When this breaks down, democracy becomes a contest of who can disseminate their fake or exaggerated resume the furthest.
It is from this perspective that 24/7 news and social media present a threat to democracy. Donald Trump is an excellent example of this failure mode. He made use of viral lies and controversial statements to ensure that he was in front of as many voters as possible. His largely fake reputation for business acumen was enough to win over a few others.
There are many constituencies in all societies. Demonstrably, President Trump is not popular in America, but he appealed to enough people that he was able to build up a solid voting bloc in the primaries.
Beyond the primaries Trump demonstrated the third vulnerability of democracies: partisanship. Any democracy where partisanship becomes a key factor in elections is in grave danger. Normally, the reputational component of democracy selects for people with a resume of past successes (an excellent predictor of future successes) while elections with significant numbers of undecided voters provide an advantage to people who run tight campaigns – people who are good at nurturing talent and delegating (an excellent skill for governing).
Partisanship short-circuits this process and selects for whoever can whip up partisan crowds most successfully. This is a rather different sort of person! Rabid partisans spurn compromise and ignore everyone outside of their core constituency because those are the tactics that have rewarded them in the past.
Trump was able to win in part because such a large cross-section of the American electorate was willing to look beyond his flaws if it meant that someone from the other party didn’t win.
A large bloc of swing voters who look critically at politicians’ reputations and refuse to accept iconoclasts is an important safety valve in any democracy.
This model of democracy neatly explains why it isn’t universally successful. In societies with several strong tribal or religious identities, democracy results in cronyism dominated by the largest tribe/denomination, because it selects for whoever can promise the most to this large bloc. In countries that don’t have adequate cultural safeguards against corruption, corruption does not ruin reputations and democracy does nothing to squash it.
Democracy isn’t a panacea, but in the right cultural circumstances it is superior to any other realistic form of government.
Unfortunately, we can see that democracy is under attack on two fronts in Western nations. First, social media encourages shallow engagement and makes it easy for people to build constituencies around controversial statements. Second, partisanship is deepening in many societies.
I don’t know what specific remedies exist for these trends, but they strike me as two of the most important to reverse if we wish our democratic institutions to continue to provide good government.
If we cannot find a way to fix partisanship and self-promotion within our current system, then the most important political reform we can undertake is to find a system of government that can pick leaders with the right character for governing even under these very difficult circumstances.
[Epistemic status: much more theoretical than most of my writing. To avoid endless digressions, I don’t justify my centrist axioms very often. I’m happy to further discuss anything that strikes anyone as light on evidence in the comments.]
As far as I can tell, the current shambles arises from three departures from the core of the Westminster system.
First, we have parliament taking control of the business of parliament in order to hold a set of indicative votes. I don’t have the sort of deep knowledge of British history that is necessary to assess whether this is unprecedented or not, but it is certainly unusual.
The majority in the house that controls the business of the house is, kind of definitionally, the government in a Westminster system. Unlike the American presidential system of government, the Brits don’t really have a notion of “the government” that extends beyond whomever can command the confidence of parliament. To have parliament in some sense (although not the formal one) withdraw that confidence, without forcing a new government to be appointed by the Queen or fresh elections, is deeply unusual.
The whole point of the Westminster system is to always have a governing majority for key votes. If that breaks down, then either a new governing majority should arise, or new elections should be called. Otherwise, you can have American-style gridlock.
This odd situation has arisen partially from the Fixed-term Parliaments Act of 2011, which severely limited the circumstances under which a sitting government can fall. Previously, all important legislation doubled as motions of confidence; defeat of any bill as strongly championed by the government as Theresa May’s Brexit bill would have resulted in new elections. Now, a motion of no-confidence (which requires a majority to amend a bill to add it, or for the government to schedule a motion of no-confidence in itself) must pass, or 2/3 of the house must vote for an early election. This bar is considerably higher (as no government wants to go to the polls as a result of a no-confidence motion), so it is much easier for a government to limp along, even when it lacks a working majority in the House of Commons.
It’s currently not clear what does have a working majority in parliament, although I suppose today’s indicative votes (where MPs will vote on a variety of Brexit proposals) will give us an idea.
Unfortunately, even if there’s a clear outcome from the indicative votes (and there’s no guarantee of that), there’s no mechanism for enacting it. Either parliament will have to keep passing amendments every single day to take control of business from the government (which is supposed to be the entity setting business!), or the government has to buy into the outcome. If neither of those happens, the indicative votes will do nothing but encourage the intransigence of those who know they have the support of many other MPs. If the rebels went to the Queen and asked her to appoint a new government, this would obviously not be an issue, but MPs seem uninterested in taking that (arguably proper) step.
This all stems from the second problem, namely, that parliament is rubbish when constrained by external forces.
The way that parliament normally works is: people come up with a platform and try to get elected on it. If a majority comes from this process, then they implement the platform. They all signed off on it, after all. If there’s no clear majority, then people come up with a coalition agreement, which combines the platforms of multiple parties into some unholy mess that they can all agree to pass. In either case, the government agenda is clear.
The problem here is that there are people in each party on either side of the Brexit referendum. Some of them feel bound by the referendum results and some don’t, but even though its results were incorporated into party platforms, it still feels like a live issue to many MPs in a way that most issues in their platform just don’t.
It’s not even clear that there’s a majority of people in parliament in favour of Brexit. And when you have a government that feels bound by a promise to enact Brexit, but a parliament without a clear majority for any particular deal (or even a majority in favour of Brexit) you’re in for a bad time.
Basically “enact this referendum” and “keep 50% of the house happy” are two different goals and it is very easy to find them mutually incompatible. At this point, it becomes incredibly difficult to govern!
The third problem is Theresa May’s unwillingness to find another deal for the house. I get that there might not be any willingness in Europe to negotiate another deal and that she’s bound by a lot of domestic constraints, but there’s a longstanding tradition that MPs can’t vote on the same bill twice in one parliament. Australia is a rare Westminster system government that allows it, but only for bills that the senate rejects and with the caveat that a second rejection can be used to trigger an election.
This tradition exists so that the government can’t deadlock itself trying to get contentious legislation through. By ignoring it, Theresa May is showing contempt for parliament.
If, instead of standing by her bill after it had failed, she sought out some other bill that could get through parliament, she’d obviate the need for parliament to take matters into its own hands. Alternatively, if the Brexit vote had just been a confidence vote in the first place, she’d be able to ask the question of a brand-new parliament, which, if she headed it, presumably would have a popular mandate for her bill.
(And obviously if she didn’t head parliament, we wouldn’t have this particular impasse.)
By ignoring and changing so many parliamentary conventions, the UK has stripped itself of its protections from deadlock, dooming us all to this seemingly endless Brexit Purgatory. At the time of writing, the prediction market PredictIt had the odds of Brexit at less than 2% by Friday and only 50/50 by May 22. May’s own chances are even worse, with only 43% of PredictIt users confident she would still be PM by the start of July.
I hope that parliament comes to its senses and that this is the last thing I’ll feel compelled to write about Brexit. Unfortunately, I doubt that will be the case.
I haven’t written much about Brexit. It’s always been a bit of a case of “not my monkeys, not my circus”. And we’ve had plenty of circuses on this side of the Atlantic for me to write about.
That said, I do think Brexit is useful for illustrating the pitfalls of this sort of referendum, something I’ve taken to calling “The 50% Problem”.
To see where this problem arises from, let’s take a look at the text of several political referendums:
Should the United Kingdom remain a member of the European Union or leave the European Union? – 2016 UK Brexit Referendum
Do you agree that Québec should become sovereign after having made a formal offer to Canada for a new economic and political partnership within the scope of the bill respecting the future of Quebec and of the agreement signed on June 12, 1995? – 1995 Québec Independence Referendum
Should Scotland be an independent country? – 2014 Scottish Independence Referendum
Do you want Catalonia to become an independent state in the form of a republic? – 2017 Catalonia Independence Referendum, declared illegal by Spain.
What do all of these questions have in common?
Simple: the outcome is much vaguer than the status quo.
During the Brexit campaign, the Leave side promised people everything but the moon. During the run-up to Québec’s last independence referendum, there were promises from the sovereignist camp that Québec would be able to retain the Canadian dollar, join NAFTA without a problem, or perhaps even remain in Canada with more autonomy. In Scotland, leave campaigners promised that Scotland would be able to quickly join the EU (which in a pre-Brexit world, Spain seemed likely to veto). The proponents of the Catalonian referendum pretended that Spain would take it seriously.
The problem with all of these referendums and their vague questions is that everyone ends up with a slightly different idea of what success will entail. While failure leads to the status quo, success could mean anything from (to use Brexit as an example) £350m/week for the NHS to Britain becoming a hermit kingdom with little external trade.
Some of this comes from assorted demagogues promising more than they can deliver. The rest of it comes from general disagreement among members of any coalition about what exactly their best-case outcome is.
Crucially, this means that getting 50% of the population to agree to a referendum does not guarantee that 50% of the population agrees on what happens next. In fact, getting barely 50% of people to agree practically guarantees that no one will agree on what happens next.
Take Brexit, the only one of the referendums I listed above that actually led to anything. While 51.9% of the UK agreed to Brexit, there is not a majority for any single actual Brexit proposal. This means that it is literally impossible to find a Brexit proposal that polls well. Anything that gets proposed is guaranteed to be opposed by all the Remainers, plus whatever percentage of the Brexiteers don’t agree with that specific form of Brexit. With only 52% of the population backing Leave, the defection of even 4% of the Brexit coalition is enough to make a proposal opposed by the majority of the citizenry of the UK.
This leads to a classic case of circular preferences. Brexit is preferred to Remain, but Remain is preferred to any specific instance of Brexit.
For governing, this is an utter disaster. You can’t run a country when no one can agree on what needs to be done, but these circular preferences guarantee that anything that is tried is deeply unpopular. This is difficult for politicians, who don’t want to be voted out of office for picking wrong, but also don’t want to go back on the referendum.
There are two ways to avoid this failure mode of referendums.
The first is to finish all negotiations before using a referendum to ratify an agreement. This allows people to choose between two specific states of the world: the status quo and a negotiated agreement. It guarantees that whatever wins the referendum has majority support.
This is the strategy Canada took for the Charlottetown Accord (resulting in it failing at referendum without generating years of uncertainty) and the UK and Ireland took for the Good Friday Agreement (resulting in a successful referendum and an end to the Troubles).
The second means of avoiding the 50% problem is to use a higher threshold for success than 50% + 1. Requiring 60% or 66% of people to approve a referendum makes it much more likely that any specific proposal brought forward after the referendum will still command majority support.
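The coalition arithmetic behind both the 4% defection figure above and these supermajority thresholds can be sketched in a few lines of Python (the numbers are illustrative, not polling data):

```python
# Sketch of the 50% problem's coalition arithmetic (illustrative only).
# If a fraction `support` of voters backed the referendum, a specific
# follow-on proposal loses majority backing once more than a fraction d
# of the winning coalition defects: support * (1 - d) < 0.5

def defection_threshold(support: float) -> float:
    """Largest share of the winning coalition that can defect before a
    specific proposal falls below 50% overall support."""
    return 1 - 0.5 / support

for support in (0.519, 0.60, 0.66):
    d = defection_threshold(support)
    print(f"{support:.1%} referendum win: proposals survive up to "
          f"{d:.1%} defection")
```

With a bare 51.9% win, fewer than 4% of Leave voters peeling off already sinks any concrete proposal; a 66% threshold, by contrast, tolerates nearly a quarter of the coalition defecting before a proposal loses its majority.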
This is likely how any future referendum on Québec’s independence will be decided, acknowledging the reality that many sovereignists don’t want full independence, but might vote for it as a negotiating tactic. Requiring a supermajority would prevent Québec from falling into the same pit the UK is currently in.
As the first successful major referendum in a developed country in quite some time, Brexit has demonstrated clearly the danger of referendums decided so narrowly. Hopefully other countries sit up and take notice before condemning their own nation to the sort of paralysis that has gripped Britain for the past three years.