The Battle of the Tsushima Straits is the most underrated moment of historical importance in the 20th century.
We’ve all heard lots of different explanations for the start of the First World War. The standard ones are as follows: Europe was a mess of alliances, imperial powers treated war like a game, and one unlucky archduke got offed by nationalists.
Less commonly mentioned is Russia’s lack of international prestige, a situation that made it desperate for military victories at the same time it made the Central Powers contemptuous of Russia’s strength.
Russia was the first country to mobilize in 1914 (with its “period preparatory to war”) after Austria issued an ultimatum to Serbia, and it was arguably this mobilization that set the stage for a continent-spanning war.
Why was Russia so desperate and the Central Powers so unworried?
Well, over 24 hours on May 27 and 28, 1905, Russia went from the third most powerful naval nation in the world to one that could barely have hoped to defeat the Austro-Hungarian Empire at sea (that doesn’t sound bad, until you remember that Austria-Hungary had no blue-water harbours and never really had any overseas colonies). This wrecked Russian prestige.
What destroyed the Russian fleet so thoroughly?
Admiral Tōgō and the Imperial Japanese fleet.
In the Battle of the Tsushima Straits, Admiral Tōgō sank or captured eleven battleships and twenty-seven other ships – practically every Russian naval vessel – at the cost of three torpedo boats (the smallest and cheapest ships used in early 20th century naval combat).
This lopsided victory was the first time a European power was conclusively beaten by an Asian one in an even battle since the Mongol general Subutai razed Hungary and smashed the armies of Poland in the 1200s.
Victory galvanized Japan. Barely fifty years before the battle, Japan had been forced open at gunpoint by Commodore Perry’s Black Ships. Shortly after this, western powers forced Japan, like China before it, to sign unequal treaties. Victory at the Battle of Tsushima showed that this era was clearly over. Japan was now a great power.
This is why I could claim that the Battle of the Tsushima Straits is the most underrated moment of historical importance in the 20th century. Not only did Russia’s defeat sow some of the seeds of the First World War; Japan’s victory also set the stage for Japan’s participation in the Second World War.
Admiral Tōgō’s message to Tokyo on the day of the battle (“In response to the warning that enemy ships have been sighted, the Combined Fleet will immediately commence action and attempt to attack and destroy them. Weather today fine but high waves.”), and especially that last line, became as important to the Japanese Navy as Nelson’s remarks before Trafalgar (“England expects that every man will do his duty”) were to the British.
With such a lopsided victory under their belt, the Imperial Japanese Navy began to believe that they were invincible. They quickly became promoters of militarism and conquest.
As America began to act to check Japanese dominance in the Pacific and prevent Japan from entirely colonizing China, the Japanese Navy decided that America had to be defeated. This led to Japan taking Germany’s side in the Second World War, to Pearl Harbour, and eventually to the American occupation of Japan.
Had the Battle of the Tsushima Strait instead been a bloody stalemate, Japan might have risen less quickly and more cautiously. Russia might not have started the First World War when it did, nor succumbed to a revolution when exhausted by that same war. The Soviet Union might never have risen. Both World Wars might have happened differently, or not at all.
This is not even to mention that British naval observers at the battle used what they learned in the construction of the Dreadnought, the battleship that started a new naval arms race.
There’s too much that spilled from all of these events to predict if the world would be better or worse if Tōgō hadn’t won in 1905, but it certainly would have been different.
Today is a good day to reflect on how this single battle, the only time fleets of battleships ever met decisively in anger, helped to shape so much of the modern world. If this single moment, unknown to so many, shaped so much of what came later, what other key moments are we ignorant of? What other desperate struggles and last-second decisions shaped this baffling world of ours?
History doesn’t just belong to the victors. It belongs to those who are remembered. Today, I’d like to remind you that even if events fall from history and aren’t remembered, they can still shape it.
The modern field of linguistics dates from 1786, when Sir William Jones, a British judge sent to India to learn Sanskrit and serve on the colonial Supreme Court, realized just how similar Sanskrit was to Persian, Latin, Greek, Celtic, Gothic, and English (yes, he really spoke all of those). He concluded that the similarities in grammar were too close to be the result of chance. The only reasonable explanation, he claimed, was the descent of these languages from some ancient progenitor.
This ancestor language is now awkwardly known as Proto-Indo-European (PIE). It and the people who spoke it are the subject of David Anthony’s book The Horse, the Wheel, and Language. I picked up the book hoping to learn a bit about really ancient history. I ended up learning some of that, but this is more a book about linguistics and archeology than about history.
Proto-Indo-European speakers produced no written works, so almost all of their specific history is lost. The oldest products of their daughter languages – like the Rig Veda – date from well after the last speakers of the original language passed away.
Instead of the history that is largely barred to us, this book is really Professor David Anthony attempting to figure out who these speakers were and what their lives looked like, without the benefit of any written words. He does this via two channels: their language, and the physical remains of their culture.
Unfortunately, there is at least one glaring problem with each approach. Their language is thoroughly dead and there was (at the time of writing) no scholarly consensus on where they originated.
Professor Anthony is undaunted by these problems. It turns out that we can reconstruct their language and from that reconstruction, determine where they most likely lived. If both approaches are done properly, it should be possible to see archeological details reflected in their language and details of their language reflected in their remains.
The first problem to solve then is the reconstruction of PIE. How does one do this?
Well, it turns out that all languages change in similar ways. The way we pronounce consonants often shifts, with hard sounds sometimes changing into soft sounds, but very rarely the reverse. How we say words also changes. Syllables get dropped (elision) because we tend to omit difficult-to-pronounce or inconvenient middle syllables (this is what gave English its contractions), and sounds get added (epenthesis) in the middle of difficult tongue movements (compare the “proper” and colloquial ways of pronouncing the word “nuclear”, or the difference between the French athlète and the English athlete).
It would be very odd for an additional syllable to be added in an area where tongue movements aren’t particularly hard, or a syllable to be removed from a word that is typically enunciated. Above all, these changes are regular because they rely on predictable laziness.
Changes tend to happen to many words at once. When people began to hear the Proto-French tsentum (root of cent, the French word for 100) as different from the Latin kentum, they had to make a decision about how exactly it would be pronounced. They chose a soft c, a sound Latin lacks, but that is easier to say. This change got carried over to every ts-, c-, or k- that had previously made the same sound as kentum/tsentum, except those before a back vowel (like “o”), presumably because a soft sound there is actually harder to say.
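The regularity of this kind of change is easy to see in code. The sketch below is my own illustration (the word list and the exact rule are invented, not real etymologies): a single “soften initial k- before front vowels” rule, applied mechanically across a whole lexicon at once, with the back-vowel exception described above.

```python
# A toy illustration of a regular sound change: a hard k- softens to ts-
# before front vowels (e, i), but stays hard before back vowels (a, o, u).
# The word list is invented for illustration; these are not real etymologies.

FRONT_VOWELS = {"e", "i"}

def apply_soft_c_shift(word: str) -> str:
    """Shift an initial k- to ts- when it is followed by a front vowel."""
    if word.startswith("k") and len(word) > 1 and word[1] in FRONT_VOWELS:
        return "ts" + word[1:]
    return word

lexicon = ["kentum", "kirka", "kosta", "kurtus"]
shifted = [apply_soft_c_shift(w) for w in lexicon]
print(shifted)  # the front-vowel words soften; kosta and kurtus keep the hard sound
```

Because the rule is conditioned only on the surrounding sounds, it applies (or fails to apply) to every eligible word the same way, which is exactly the predictability that lets linguists run the process in reverse.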
There’s one final type of change that Anthony mentions: analogy. This is where a grammatical rule used in a single place (e.g. pluralization with -s or -es) is expanded to encompass many more words or cases (most English nouns were originally pluralized with other suffixes, or with stem changes like “geese”; it was only later that people decided -s and -es would be the general markers of plural nouns).
If you have a large sample of languages descended from a historical language (and with Proto-Indo-European, there really is no lack), you can follow a bunch of words backwards through likely changes and see if they all end up in the same place.
If you do this for the modern words for “hundred” from many PIE daughter languages, you’re left with *km’tom (an asterisk is used before sounds where there is no direct evidence). All words for hundred in modern descendants (as well as dead ancient descendants that we know how to speak) of Proto-Indo-European can be derived from *km’tom using only well-attested and empirically observed rules of language change.
(I occasionally got chills reading reconstructed words. It’s amazing how some words that our distant ancestors spoke thousands upon thousands of years ago are fairly well preserved in our modern speech.)
This is pretty cool, because it allows us to start seeing which words were common enough in Proto-Indo-European to be passed down to all daughters and which words were borrowed in.
With a reconstructed vocabulary of about 1,500 words, we can figure out some things that were important to Proto-Indo-Europeans. They seem to have words for relatives on the male side, but not the female side. This suggests that after marriage, the wife moved in with the groom. Less domestically, they seemed to have a word for cattle rustling, suggesting that they weren’t unfamiliar with increasing their wealth at the expense of their neighbours’.
That’s not all we can get from their words. Linguists also believe that Proto-Indo-Europeans had chiefs, who in turn had patrons. They worshipped a male sky deity and sacrificed horses and cattle to him. They formed warrior bands. They avoided speaking the name of the bear. They drove, or knew of, wagons. And they had two words that we could translate as sacred, “that which is forbidden” and “that which is imbued with holiness”.
(There are many more minor cultural touchstones scattered throughout the book. I don’t want to spoil them all.)
We also know the animals and plants they had words for. Reconstructed PIE has words for temperate trees, horses and cows, bees and honey.
These give us clues to where they lived, in the same way that knowing the words “shinney”, “hockey”, “Zamboni” and “creek” are spoken somewhere might help you make a guess as to where that somewhere is.
And while these words help us rule out the Mediterranean and the deserts, they don’t give us much in the way of a specific location without a when, and finding the when requires two different methods.
First, we can figure out the approximate death of Proto-Indo-European – the century or millennium when it had entirely splintered into its daughters – by using what linguists have discovered about the rate of language change.
While most vocabulary changes rather quickly, making this a poor tool for dating very old languages, there is a group of words, the core vocabulary, that changes much more slowly. The core vocabulary of any language is only a couple hundred words, but they’re some of the most important ones. Normally, core vocabulary includes the words for: body parts, small numbers, close relatives, a few basic needs, a couple of natural features or domesticated animals, some pronouns, and some conjunctions.
English, a prolific borrower, has borrowed 50% of its total vocabulary from the Romance languages. Its core vocabulary, however, is largely free of this borrowing, with only 4% of core vocabulary words borrowed from Romance languages.
Core vocabulary changes by about 14-19% every thousand years depending on the language. It’s also known that once two dialects differ by more than 10% of their core vocabulary, they are more properly thought of as separate languages.
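These two numbers (14-19% change per millennium, and a 10% divergence threshold for counting dialects as separate languages) combine into a back-of-the-envelope calculation. The sketch below is my own illustration, not a method from the book: it assumes each dialect independently retains core vocabulary at a constant rate, so the fraction two dialects still share after t millennia is retention ** (2 * t).

```python
import math

# Back-of-the-envelope glottochronology, assuming each of two dialects
# independently retains its core vocabulary at a constant rate per millennium.
def millennia_until_split(change_per_millennium: float,
                          divergence_threshold: float = 0.10) -> float:
    """Millennia until two dialects share less than (1 - threshold)
    of their core vocabulary."""
    retention = 1.0 - change_per_millennium
    shared_target = 1.0 - divergence_threshold
    # shared fraction after t millennia: retention ** (2 * t)
    return math.log(shared_target) / (2 * math.log(retention))

for rate in (0.14, 0.19):
    years = 1000 * millennia_until_split(rate)
    print(f"{rate:.0%} change/millennium -> separate languages after ~{years:.0f} years")
```

On these simplified assumptions, two isolated dialects drift past the 10% threshold within roughly 250-350 years, which gives a feel for how fast the clock ticks and why even the slow-changing core vocabulary can only reach a few millennia into the past.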
Here’s where written language comes in handy. By comparing written inscriptions with known creation dates in different daughter languages, we can make a guess as to when the languages diverged.
The oldest inscriptions in a PIE-derived language are in the Anatolian languages (which were spoken in what is now Turkey). However, Anthony chooses not to use these, because they entirely lack many grammatical innovations that are otherwise common in daughter languages. This leads him to believe that they split away much earlier than other daughters. The presence of later shared innovations means that at the time of the Anatolian split, Proto-Indo-European was probably still a living language and still evolving.
Better candidates are archaic Greek and Old-Indic, both of which have inscriptions dated to around 1,450 BCE. By comparing the differences in wording and grammar between these two and using known rates of change, Anthony dates the end of Proto-Indo-European at around 2,500 BCE. This means that after 2,500 BCE, it doesn’t make sense to speak of a single unified Proto-Indo-European language.
Second is the birth date, the other half of the critical window. To find it, Anthony looks for words that have a known date of invention, specifically “wool” and “wagon”. Getting broadly useful amounts of wool from sheep wasn’t possible until a mutation made sheep coats much larger. We know roughly when this mutation occurred, because sheep suddenly became a larger portion of herds around 3,500 BCE, displacing goats (which produce more milk). The only reasonable explanation for this event is the advent of wool-producing sheep, which were very valuable as a source of clothes.
Similarly, wagons have left physical evidence (both directly and in preserved images) and that evidence has been carbon dated to 3,500 BCE.
Since all Proto-Indo-European languages outside of the Anatolian branch have related words for both “wagon” and “wool” that show no evidence of borrowing from other languages, it seems reasonable to conclude that some form of the language existed when wagons and wool first began to reshape the pre-historic world. That means the language had to exist by 3,500 BCE.
There is, I should note, one competing theory that Anthony outlines, in which PIE and the Indo-Hittite languages split around 7,500 BCE. This theory requires several unlikely things to happen, however: the word for wagon must evolve from the same verb meaning “to turn” in both branches (five similar verbs existed); the PIE-speaking people must disperse over all of Europe and become the dominant culture then (this would have been very hard pre-horse domestication, when material cultures were small and language territories tended to be much smaller than modern countries); and all of this would have to happen while material cultures were becoming very different but languages (supposedly) weren’t evolving.
Anthony doesn’t give this theory much credence.
With a rough time-range, we can begin looking for our Proto-Indo-Europeans in space. Anthony does this by looking for evidence of very old loan words. He finds a set of them borrowed from Uralic, which in turn has a bevy of very old loanwords from PIE.
Uralic (appropriately) probably first emerged somewhere near the Ural Mountains. This corresponds well with our other evidence because the area around the Urals (where borrowing could have taken place) is temperate and home to the flora and fauna words we know exist in PIE.
The PIE word for honey, *médhu (note its similarity with the English word for a fermented honey drink, “mead”), is particularly useful here. We know that bees weren’t common in Siberia during the time when we suspect PIE was being spoken (and where they were common, the people weren’t herders), but that bees were common on the other side of the Urals.
Laying it all out, we see that PIE speakers were herders (there’s an expansive set of words relating to the tasks herders must accomplish), who lived near the Urals but not in Siberia. The best archeological match for these criteria is a set of herder people who lived in what is now modern-day Ukraine and it is these people that Anthony identifies as the Proto-Indo-Europeans.
If this feels at all dry, I want to assure you that it wasn’t when I read it. I felt that the first section of the book was the strongest. Anthony provides an excellent overview of linguistics, archeology, and some of the crazy stuff he’s had to invent to help him in his studies.
For example, he believes that horses were ridden much earlier than was commonly thought, perhaps around or before 3,500 BCE. To prove this, he and his wife embarked on a study of how bits wear the teeth in horses’ mouths, which culminated in empirical studies with a variety of bit types (including rope) done on live horses that had never previously been given bits, assessed using electron microscopy. The whole thing is a bit bonkers, but it has resulted in a validated test that allows archeologists to determine if a given horse was ever ridden, as well as vindication for Anthony’s chronology of domestication.
Unfortunately, a lot of the rest of the book was genuinely dry. There was a dizzying array of cultures inhabiting the Eurasian steppes in the period Anthony covers, each with their own house type, pottery type, antecedents, and descendants. Anthony goes through these in excruciating detail. It’s the sort of thing that other archeologists love him for – a lot of these cultures are very poorly described outside of Russian language publications – but it’s hard for a lay-person to follow. I might have pulled it off if I had built a giant flow chart, but as it was, I mostly felt overwhelmed.
(Anthony has to go through them all to explain how PIE-derived languages ended up everywhere we know them to have ended up. People of Europe don’t speak PIE-derived languages just because of Latin. Many of the peoples the Romans conquered spoke languages that were distantly related to the invaders’ tongue. Those languages need to be accounted for in any theory about the Proto-Indo-Europeans.)
This is disappointing, because the history started off so engagingly. Anthony outlines how the earliest ancestors of the Proto-Indo-Europeans had persistent cultural frontiers with hunter-gatherers on the Urals on one side and the farmers in the Bug-Dniester valley on the other.
The herding and farming economies required a moral shift from previous hunter-gatherer practices, one that would see agriculturalists harden their hearts to their own children starving, if the only thing that could assuage their hunger was their last few breeding pairs or their seed grain. This is the first time I’ve seen someone lay out the moral transformation necessary to accept agriculture, and having it laid out so starkly made it much easier to understand why not every pre-historic group was willing to adopt it.
(I had always thought the biggest moral change was accepting accumulation of wealth, but this one is, I think, more important.)
This is not to say that the herders and farmers were exactly alike; their different ways of life meant they were culturally distinct. In addition to their dwellings and material culture, they differed in funeral customs and probably in religion. Everything we know about early-PIE speakers suggests that they worshipped a sky god of some sort. The farmers who lived next door decorated their houses with female figurines, figures that never show up in any excavation of herder camps or grave sites.
I was also shocked at the amount of long distance trade and the wealth acquisition that was going on 6,000 years ago. There are kurgans (circular rock topped graves) with grave goods from Mesopotamia dating from that long ago, as well as one kurgan where someone was buried with almost 4 kilograms of gold ornamentation.
The herders and farmers didn’t live next door in harmony forever. Changes to their stable arrangement happened as a result of one of the Earth’s periodic historical climate fluctuations (which caused a collapse among many of the farmers and may have led to more raiding from the early-PIE speaking herders) and later the adoption of horse-riding (which made raiding easier) and wagons (which allowed herders to bring water with them and opened the inner steppes up to grazing).
Larger herds and changing boundaries led to clashes among the herders (we’ve found kurgans where the bodies bear marks of violent deaths) and to raids on agriculturalists (we’ve found burned villages peppered with arrows), although interestingly, never the farmers directly adjacent to the steppes. It may be that the herders didn’t want to disrupt their trading relationships with their neighbours and so were careful to raid dozens of kilometers away from their own borders (a task made easier with horses).
The farmers were no pushovers; some of their towns held up to 10,000 people by the third millennium BCE. These towns were bigger than the cities of Mesopotamia, but lacked the civic organizational features of the true cities of the Fertile Crescent.
And it was at about this point in the narrative where the number of cultures proliferated beyond my ability to follow and I began writing down interesting facts rather than keeping track of the grand narrative.
Here are a few that I liked the most:
About 20% of corpses in warrior graves (those with weapons and other symbols of membership in warrior society) whose gender is known are female. This matches the percentage in much later steppe graves. As Kameron Hurley said, women have always fought.
Contrary to popular stereotypes, the cultures of the Eurasian steppes weren’t reliant on cities for manufactured goods. They had their own potters and metalsmiths and they established many mining camps. In fact, by the 2000s BCE, it seems that Mesopotamian cities were dependent on metal mined on the steppes.
In the early Bronze Age, tin was worth its weight in silver. When tin wasn’t available, bronze was made with arsenic.
Horses were probably domesticated because they winter better than the other animals that were available in Eurasia at the time. Cows will starve to death if grass is hidden by snow, while sheep and goats use their noses to move snow off of grass (which means that they’re helpless once it’s covered in ice). Sheep, cows, and goats are all unable to drink water that is covered in ice. Horses break ice and move snow with their hooves, making winter no real inconvenience to them. Mixing horses with cows can allow cows to eat the grass that horses uncover.
Disaffected farmers may have been attracted to the herding economy because wealth was much easier to build up. Farmland is hard to acquire more of without angering your neighbours, but herds given good pasture will naturally grow exponentially. A lot of the spread of the herding economy into Europe probably used some sort of franchise system, where locals joined the PIE culture and were given some animals, in exchange for providing protection and labour to their patron.
I’ve struggled through a lot of books that are clearly meant for people more knowledgeable in the subject than I am. It might just be a function of how interested I am in archeology (that is to say: only tolerably interested) that this is the first of them that I wish had an abridged edition. If you aren’t deeply interested in archaeology or pre-history, there’s a lot of this book that you’ll probably end up skimming.
The rest of it makes up for that. But I think there would be a market for Anthony to write another, leaner volume, meant for a more general audience.
If he ever does, I’ll probably give it a read.
David Anthony is very sensitive to the political ends to which some scholarship on Proto-Indo-European has been turned. He acknowledges that white supremacists appropriated the self-designation “Aryan”, used by some later speakers of PIE-derived languages, and used it to refer to some sort of ancient master race. Professor Anthony does not buy into this one bit. He points out that Aryan was always a cultural term, not a racial one (showing the historical ignorance of the racists), and he is careful to avoid assigning any special moral or mythical virtue to the Proto-Indo-Europeans whose culture he studies.
White supremacists will find nothing to like about this book, unless they engage in a deliberate misreading.
This is why the French côte is still similar to the Latin costa.
Anthony identifies improvements in carbon dating, especially improvements in how we calibrate for diets high in fish (which contain older carbon, leading to incorrect ages), as a major factor in his ability to untangle the story of the Proto-Indo-Europeans.
Uralic is the language family that in modern times includes Finnish and some languages spoken in Russia.
While looking up the word *médhu, I found out that it is also likely the root of the Old Chinese word for honey, via an extinct PIE-derived language, Tocharian. The speakers of Tocharian migrated from the Proto-Indo-European homeland to Xinjiang, in what is now China, which is likely where the borrowing took place.
A friend of mine recently linked to a story about stamp scrip currencies in a discussion about Initiative Q. Stamp scrip currencies are an interesting monetary technology. They’re bank notes that require weekly or monthly stamps in order to be valid. These stamps cost money (normally a few percent of the face value of the note), which imposes a cost on holding the currency. This is supposed to encourage spending and spur economic activity.
This isn’t just theory. It actually happened. In the Austrian town of Wörgl, a scrip currency was used to great effect for several months during the Great Depression, leading to a sudden increase in employment, money for necessary public works, and a general reversal of fortunes that had, until that point, been quite dismal. Several other towns copied the experiment and saw similar gains, until the central bank stepped in and put a stop to the whole thing.
In the version of the story I’ve read, this is held up as an example of local adaptability and creativity crushed by centralization. The moral, I think, is that we should trust local institutions instead of central banks and be on the lookout for similar local currency strategies we could adopt.
If this is all true, it seems like stamp scrip currency (or some modern version of it, perhaps applying the stamps digitally) might be a good idea. Is this the case?
My first, cheeky reaction, is “we already have this now; it’s called inflation.” My second reaction is actually the same as my first one, but has an accompanying blog post. Thus.
Currency arrangements feel natural and unchanging, which can mislead modern readers when they’re thinking about the currencies used in the 1930s. We’re very used to floating fiat currencies that (in general) have a stable price level apart from 1-3% inflation every year.
This wasn’t always the case! Historically, there was very little inflation. Currency was backed by gold at a stable ratio (there were 23.2 grains of gold in a US dollar from 1834 until 1934). For a long time, growth in global gold stocks roughly tracked total growth in economic activity, so there was no long-run inflation or deflation (short-run deflation did cause several recessions, until new gold finds bridged the gap in supply).
During the Great Depression, there was worldwide gold hoarding. Countries saw their currency stocks decline or fail to keep up with the growth rate required for full economic activity (having a gold-backed currency meant that the central bank had to decrease currency stocks whenever its gold stocks fell). Existing money increased in value, which meant people hoarded that too. The result was economic ruin.
In this context, a scrip currency accomplished two things. First, it immediately provided more money. The scrip currency was backed by the national currency of Austria, but it was probably using a fractional reserve system – each backing schilling might have been used to issue several stamp scrip schillings. This meant that the town of Wörgl quickly had a lot more money circulating. Perhaps one of the best features of the scrip currency within the context of the Great Depression was that it was localized, which meant that its helpful effects didn’t diffuse.
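The fractional-reserve arithmetic here is simple enough to sketch. The numbers below are invented for illustration; nothing in the story says what ratio, if any, Wörgl actually used.

```python
# Toy fractional-reserve arithmetic: scrip issued against a schilling reserve.
# The reserve ratio and amounts below are invented for illustration.
def scrip_issuable(reserve_schillings: float, reserve_ratio: float) -> float:
    """Maximum scrip that can circulate while keeping the stated fraction
    of its face value backed by schillings held in reserve."""
    return reserve_schillings / reserve_ratio

print(scrip_issuable(10_000, 1.00))  # fully backed: 10,000 scrip schillings
print(scrip_issuable(10_000, 0.25))  # 25% reserve: 40,000 scrip schillings
```

A fully backed scrip only swaps one kind of money for another; it’s the fractional backing that would have let a modest town treasury put several times as much money into circulation.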
(Of course, a central bank could have accomplished the same thing by printing vastly more money over a vastly larger area, but there was very little appetite for this among central banks during the Great Depression, much to everyone’s detriment. The localization of the scrip is only an advantage within the context of central banks failing to ensure adequate monetary growth; in a more normal environment, it would be a liability that prevented trade.)
Second to this, the stamp scrip currency provided an incentive to spend money.
Here’s one model of job loss in recessions: people (for whatever reason; deflation is just one cause) want to spend less money (economists call this “a decrease in aggregate demand”). Businesses see the falling demand and need to take action to cut wages or else become unprofitable. Now people generally exhibit “downward nominal wage rigidity” – they don’t like pay cuts.
Furthermore, individuals don’t realize that demand is down as quickly as businesses do. They hold out for jobs at the same wage rate. This leads to unemployment.
Stamp scrip currencies increase aggregate demand by giving people an incentive to spend their money now.
Importantly, there’s nothing magic about the particular method you choose to do this. Central banks targeting 2% inflation year on year (and succeeding for once) should be just as effective as scrip currencies charging 2% of the face value every year. As long as you’re charged some sort of fee for holding onto money, you’re going to want to spend it.
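A quick numerical sketch of that equivalence (my own illustration, with made-up holdings): holding 100 units under 2% annual inflation erodes purchasing power by almost exactly the same factor as paying a 2% annual stamp fee on the face value.

```python
# Carrying cost of holding 100 units of money:
# under 2% inflation, versus a stamp fee of 2% of face value per year.
def real_value_under_inflation(amount: float, rate: float, years: int) -> float:
    """Purchasing power of a fixed nominal amount after inflation."""
    return amount / (1 + rate) ** years

def value_after_stamp_fees(amount: float, rate: float, years: int) -> float:
    """Net value of scrip after paying the periodic stamp fee."""
    return amount * (1 - rate) ** years

for t in (1, 5, 10):
    infl = real_value_under_inflation(100, 0.02, t)
    scrip = value_after_stamp_fees(100, 0.02, t)
    print(f"after {t:2d} years: inflation -> {infl:.2f}, stamp scrip -> {scrip:.2f}")
```

The two differ only at second order (1/1.02 ≈ 0.9804 per year versus 0.98), which is the sense in which modest inflation already functions as a stamp fee on idle cash.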
Central bank backed currencies are ultimately preferable when the central bank is getting things right, because they facilitate longer-range commerce and trade, are administratively simpler (you never need to go buy stamps), and centralization allows for more sophisticated economic monitoring and price level targeting.
Still, in situations where the central bank fails, stamp scrip currencies can be a useful temporary stopgap.
That said, I think a general caution is needed when thinking about situations like this. There are few times in economic history as different from the present day as the Great Depression. The very fact that there was unemployment north of 20% and many empty factories makes it miles away from the economic situation right now. I would suspect that radical interventions that were useful during the Great Depression might be useless or actively harmful right now, simply due to this difference in circumstances.
My opinion is that their marketing structure is kind of cringey (my Facebook feed currently reminds me of all of the “Paul Allen is giving away his money” chain emails from the 90s, and I have only myself to blame) and their monetary policy has two aims that could end up in conflict. On the other hand, it’s fun to watch the numbers go up and idly speculate about what you could do if it was worth anything. I would cautiously recommend Q ahead of lottery tickets but not ahead of saving for retirement.
See “The Midas Paradox” by Scott Sumner for a more in-depth breakdown. You can also get an introduction to monetary theories of the business cycle on his blog, or listen to him talk about the Great Depression on Vimeo.
The size of the effect talked about in the article suggests that one of three things had to be true: 1) the scrip currency was fractionally backed, 2) Wörgl had a huge bank account balance a few years into the recession, or 3) the amount of economic activity in the article is overstated.
 As long as inflation is happening like it should be, there won’t be protracted unemployment, because a slight decline in economic activity is quickly counteracted by a slightly decreased value of money (from the inflation). Note the word “nominal” up there. People are subject to something called a “money illusion”. They think in terms of prices and salaries expressed in dollar values, not in purchasing power values.
There was only a very brief recession after the dot com crash because it did nothing to affect the money supply. Inflation happened as expected and everything quickly corrected to almost full employment. On the other hand, the Great Depression lasted as long as it did because most countries were reluctant to leave the gold standard and so saw very little inflation.
Here’s an interesting exercise. Look at this graph of US yearly inflation. Notice how inflation is noticeably higher in the years immediately preceding the Great Recession than it is in the years afterwards. Monetarist economists believe that the recession wouldn’t have lasted as long if there hadn’t been such a long period of relatively low inflation.
 You might wonder if there’s some benefit to both. The answer, unfortunately, is no. Doubling them up should be roughly equivalent to just having higher inflation. There seems to be a natural rate of inflation that does a good job balancing people’s expectations for pay raises (and adequately reduces real wages in a recession) with the convenience of having stable money. Pushing inflation beyond this point can lead to a temporary increase in employment, by making labour relatively cheaper compared to other inputs.
The increase in employment ends when people adjust their expectations for raises to the new inflation rate and begin demanding increased salaries. Labour is no longer artificially cheap in real terms, so companies lay off some of the extra workers. You end up back where you started, but with inflation higher than it needs to be.
I write today about a speech that was once considered the greatest political speech in American history. Even today, after Reagan, Obama, Eisenhower, and King, it is counted among the very best. And yet this speech has passed from the history we have learned. Its speaker failed in his ambitions and the cause he championed is so archaic that most people wouldn’t even understand it.
I speak of Congressman William Jennings Bryan’s “Cross of Gold” speech.
William Jennings Bryan was a congressman from Nebraska, a lawyer, a three-time Democratic candidate for president (1896, 1900, 1908), the 41st Secretary of State, and oddly enough, the lawyer for the prosecution at the Scopes Monkey Trial. He was also a “silver Democrat”, one of the insurgents who rose to challenge Democratic President Grover Cleveland and the Democratic party establishment over their support for gold over a bimetallic (gold plus silver) currency system.
The dispute over bimetallic currency is now more than a hundred years old and has been made entirely moot by the floating US dollar and the post-Bretton Woods international monetary order. Still, it’s worth understanding the debate about bimetallism, because the concerns Bryan’s speech raised are still concerns today. Once you understand why Bryan argued for what he did, this speech transforms from dusty history into still-relevant insights into live issues that our political process still struggles to address.
When Alexander Hamilton was setting up a currency system for the United States, he decided that there would be a bimetallic standard. Both gold and silver currency would be issued by the mint, with the US Dollar specified in terms of both metals. Any citizen could bring gold or silver to the mint and have it struck into coins (for a small fee, which covered operating costs).
Despite congressional attempts to tweak the ratio between the metals, problems often emerged. Whenever gold was worth more by weight than it was as currency, it would be bought using silver and melted down for profit. Whenever the silver dollar was undervalued, the same thing happened to it. By 1847, the silver in coins was worth so much more as metal than as money that silver coinage had virtually disappeared from circulation and many people found themselves unable to complete low-value transactions.
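The melting-for-profit mechanism described above is Gresham’s law (“bad money drives out good”), and the arbitrage condition is simple enough to sketch. This is my own illustrative toy, not code from any source, and the ratios are examples rather than precise historical values:

```python
# Toy sketch of the Gresham's-law arbitrage described above.
# A "ratio" here is ounces of silver considered equal to one ounce of gold.

def metal_that_disappears(mint_ratio: float, market_ratio: float) -> str:
    """Return which metal is worth more melted down than as coinage."""
    if market_ratio > mint_ratio:
        # An ounce of gold buys more silver in the market than the mint
        # credits it for, so gold coins get melted down or exported.
        return "gold"
    elif market_ratio < mint_ratio:
        # The reverse: silver is worth more as metal, so silver coins vanish.
        return "silver"
    return "neither"

# e.g. a 15:1 mint ratio against a 15.5:1 market ratio drives gold
# out of circulation; flip the ratios and silver disappears instead.
print(metal_that_disappears(15.0, 15.5))   # gold
print(metal_that_disappears(16.0, 15.5))   # silver
```

Whichever metal the mint undervalues stops circulating as coin, which is why Congress kept having to chase the market ratio.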
Congress responded by debasing silver coins, which led to an increase in the supply of coins and for a brief time, there was a stable equilibrium where people actually could find and use silver coins. Unfortunately, the equilibrium didn’t last and the discovery of new silver deposits swung things in the opposite direction, leading to fears that people would use silver to buy gold dollars and melt them down outside the country. Since international trade was conducted in gold, it would have been very bad for America had all the gold coins disappeared.
Congress again responded, this time by burying the demonetization of several silver coins (including the silver dollar) in a bill that was meant to modernize the mint. The logic here was that no one would be able to buy up any significant amount of gold if they had to do it in nickels. Unfortunately for Congress, a depression happened right after it passed the bill.
Some people blamed the depression on the change in coinage and popular sentiment in some corners became committed to the re-introduction of the silver dollar.
The silver supplies that caused this whole fracas hadn’t gone anywhere. People knew that re-introducing silver would have been an inflationary measure, as the statutory amount of silver in a dollar would have been worth about $0.75 in gold-backed currency, but they largely didn’t care – or viewed that as a positive. The people clamouring for silver also didn’t conduct much international trade, so they didn’t mind if silver currency drove out gold and made trade difficult.
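As a rough back-of-envelope (my own arithmetic on the $0.75 figure above, not a historical estimate), re-rating the dollar to its metal content implies prices rising by about a third:

```python
# Back-of-envelope: if a freely-minted silver dollar contains only $0.75
# worth of metal (measured in gold-backed dollars), the dollar's metallic
# floor is 75 cents, so prices quoted in silver dollars could be expected
# to rise by roughly the inverse of that ratio.

silver_content_value = 0.75  # gold-dollar value of the silver in one dollar
implied_price_ratio = 1 / silver_content_value

print(round(implied_price_ratio, 2))  # 1.33 -> roughly 33% higher prices
```

For an indebted farmer, that ~33% jump in prices (and in the nominal value of crops) is precisely what would have shrunk the real burden of fixed-dollar debts.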
There were attempts to remonetize the silver dollar over the next twenty years, but they were largely unsuccessful. A few mine owners found markets for their silver at the mint when a series of laws mandated one-off runs of silver coins, but Congress never restored bimetallism to the point that there was any significant silver in circulation – or significant inflation. Even these limited silver-minting measures were repealed in 1893, which left the United States on a de facto gold standard.
For many, the need for silver became more urgent after the Panic of 1893, which featured everything a good Gilded Age panic normally did – bank runs, failing railways, declines in trade, credit crunches, a crash in commodity prices, and the inevitable run on the US gold reserves.
The commodity price crash hit farmers especially hard. They were heavily indebted and had no real way to pay their debts off – unless the debts were reduced by inflation. Since no one had found any large gold deposits anywhere (the Klondike gold rush didn’t actually produce anything until 1898 and the Fairbanks gold rush didn’t occur until 1902), that wasn’t going to happen on the gold standard. The Democratic grassroots quickly embraced bimetallism, while the party apparatus remained supporters of the post-1893 de facto gold standard.
This was the backdrop for Bryan’s Cross of Gold speech, which took place in the summer of 1896 at the Democratic National Convention in Chicago. He was already a famed orator and had been petitioning members of the party in secret for the presidential nomination, but his plans weren’t well known. He managed to go almost the entire convention without giving a speech. Then, once the grassroots had voted out the old establishment and began hammering out the platform, he arranged to be the closing speaker representing the delegates (about 66% of the total) who supported official bimetallism.
The convention had been marked by a lack of any effective oratory. In a stunning ten-minute speech (that stretched much longer because of repeated minutes-long interruptions for thunderous applause) Bryan singlehandedly changed that and won the nomination.
And this whole thing, the lobbying before the convention and the carefully crafted surprise moment, all of it makes me think of how effective Aaron Swartz’s Theory of Change idea can be when executed correctly.
Theory of Change says that if there’s something you want to accomplish, you shouldn’t start with what you’re good at and work towards it. You should start with the outcome you want and keep asking yourself how you’ll accomplish it.
Bryan decided that he wanted America to have a bimetallic currency. Unfortunately, there was a political class united in its opposition to this policy. That meant he needed a president who favoured it. Without the president, you need two-thirds of both the House and the Senate onboard (enough to override a veto) and that clearly wasn’t happening with the country’s elites so hostile to silver.
Okay, well how do you get a president who’s in favour of restoring silver as currency? You make sure one of the two major parties nominates a candidate in favour of it, first of all. Since the Republicans (even then the party of big business) weren’t going to do it, it had to be the Democrats.
That means the question facing Bryan became: “how do you get the Democrats to pick a presidential candidate who supports silver?”
And this question certainly wasn’t easy. Bryan couldn’t guarantee the outcome on his own, because it required delegates at least sympathetic to the idea. But the national mood made that seem likely, as long as there was a good candidate all of the “silver men” could unite around.
So, Bryan needed to ensure there was a good candidate and that that candidate got elected. Well, that was a problem, because neither of the two leading silver candidates was very popular. Luckily, Bryan was a Democrat, a former congressman, and kind of popular.
I think this is when the plan must have crystallized. Bryan just needed to deliver a really good speech to an already receptive audience. With the cachet from an excellent speech, he would clearly become the choice of silver supporting Democrats, become the Democratic party presidential candidate, and win the presidency. Once all that was accomplished, silver coins would become money again.
The fantastic thing is that it almost worked. Bryan was nominated on the Democratic ticket, absorbed the Populist party into the Democratic party to prevent a vote split, and came within 600,000 votes of winning the presidency. All because of a plan. All because of a speech.
So, what did he say?
Well, the full speech is available here. I do really recommend it. But I want to highlight three specific parts.
A Too Narrow Definition of “Business”
We say to you that you have made the definition of a business man too limited in its application. The man who is employed for wages is as much a business man as his employer; the attorney in a country town is as much a business man as the corporation counsel in a great metropolis; the merchant at the cross-roads store is as much a business man as the merchant of New York; the farmer who goes forth in the morning and toils all day—who begins in the spring and toils all summer—and who by the application of brain and muscle to the natural resources of the country creates wealth, is as much a business man as the man who goes upon the board of trade and bets upon the price of grain; the miners who go down a thousand feet into the earth, or climb two thousand feet upon the cliffs, and bring forth from their hiding places the precious metals to be poured into the channels of trade are as much business men as the few financial magnates who, in a back room, corner the money of the world. We come to speak of this broader class of business men.
In some ways, this passage is as much the source of the mythology of the American Dream as the inscription on the Statue of Liberty. Bryan rejects any definition of businessman that focuses on the richest in the coastal cities and instead substitutes a definition that opens it up to any common man who earns a living. You can see echoes of this paragraph in almost every presidential speech by almost every presidential candidate.
Think of anyone you’ve heard running for president in recent years. Now read the following sentence in their voice: “Small business owners – like Monica in Texas – who are struggling to keep their business running in these tough economic times need all the help we can give them”. It works because “small business owners” has become one of the sacred cows of American rhetoric.
Bryan added this line just days before he delivered the speech. It was the only part of the whole thing that was at all new. And because this speech inspired a generation of future speeches, it passed into the mythology of America.
Trickle Down or Trickle Up
Mr. Carlisle said in 1878 that this was a struggle between “the idle holders of idle capital” and “the struggling masses, who produce the wealth and pay the taxes of the country”; and, my friends, the question we are to decide is: Upon which side will the Democratic party fight; upon the side of “the idle holders of idle capital” or upon the side of “the struggling masses”? That is the question which the party must answer first, and then it must be answered by each individual hereafter. The sympathies of the Democratic party, as shown by the platform, are on the side of the struggling masses who have ever been the foundation of the Democratic party. There are two ideas of government. There are those who believe that, if you will only legislate to make the well-to-do prosperous, their prosperity will leak through on those below. The Democratic idea, however, has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.
Almost a full century before Reagan’s trickle-down economics, Democrats were taking a stand against that entire world-view. Through all its changes – from the party of slavery to the party of civil rights, from the party of the Southern farmers to the party of “coastal elites” – the Democratic party has always viewed itself as hewing to this one simple principle. Indeed, the core difference between the Republican party and the Democratic party may be that the Republican party views the role of government to “get out of the way” of the people, while the Democratic party believes that the job of government is to “make the masses prosperous”.
A Cross of Gold
Having behind us the producing masses of this nation and the world, supported by the commercial interests, the laboring interests, and the toilers everywhere, we will answer their demand for a gold standard by saying to them: “You shall not press down upon the brow of labor this crown of thorns; you shall not crucify mankind upon a cross of gold.”
This is perhaps the best ending to a speech I have ever seen. Apparently at the conclusion of the address, dead silence endured for several seconds and Bryan worried he had failed. Two police officers in the audience were ahead of the curve and rushed Bryan – so that they could protect him from the inevitable crush.
Bryan turned what could have been a dry, dusty, nitty-gritty issue into the overriding moral question of his day. In fact, by co-opting the imagery of the crown of thorns and the cross, he tapped into the most powerful vein of moral imagery that existed in his society. Invoking the cross – the central mystery and miracle of Christianity – cannot help but put an issue (in a thoroughly Christian society) on a moral footing, as opposed to an intellectual one.
This sort of moral rather than intellectual posture is a hallmark of any insurgency against a technocratic order. Technocrats (myself among them!) like to pretend that we can optimize public policy. It is, to us, often a matter of just finding the solution that empirically provides the greatest good to the greatest number of people. Who could be against that?
But by presupposing that the only moral principle is the greatest good for the greatest number, we obviate moral contemplation in favour of tinkering with numbers and variables.
(The most cutting critique of utilitarianism I’ve ever seen delivered was: “[These problems are] seen in the light of a technical or practical difficulty and utilitarianism appeals to a frame of mind in which technical difficulty, even insuperable technical difficulty, is preferable to moral unclarity, no doubt because it is less alarming.”, a snide remark by the great British ethicist Sir Bernard Williams from his half of Utilitarianism for and against.)
This avoiding-the-question-so-we-can-tinker approach can provoke a backlash like Bryan’s. Leaving aside entirely the difficulty of truly knowing which policies will have “good” results, there’s the uncomfortable truth that not every policy is positive sum. Even positive sum policies can hurt people. Bryan ran for president because questions of monetary policy aren’t politically neutral.
The gold standard, for all the intellectual arguments behind it, was hurting people. Maybe not a majority of people, but people nonetheless. There’s a whole section of the speech where Bryan points out that the established order cannot just say “changes will hurt my business”, because the current situation was hurting other people’s businesses too.
It is very tempting to write that questions of monetary policy “weren’t” politically neutral. After all, there’s a pretty solid consensus on monetary policy these days (well, except for the neo-Fisherians, but there’s a reason no one listens to them). But even (especially) a consensus among experts can be challenged by legitimate political disagreements. When the Fed chose to pull interest rates low as stimulus for the economy after 2008, it put the needs of people trying to find jobs over those of retired people who held their savings in safe bonds.
If you lower speed limits, you make roads safer for law abiding citizens and less safe for people who habitually speed. If you decriminalize drugs, you protect rich techies who microdose on LSD and hurt people who view decriminalization as license to dabble in opiates.
Even the best intentioned or best researched public policy can hurt people. Even if you (like me) believe in the greatest good for the greatest number of people, you have to remember that. You can’t ever let hurting people be easy or unthinking.
Even though it failed in its original aim and even though the cause it promotes is dead, I want people to remember Bryan’s speech. I especially want people who hold power to remember Bryan’s speech. Bryan chose oratory as his vehicle, his way of standing up for people who were hurt by well-intentioned public policy. In 1896, I might have stood against Bryan. But that doesn’t mean I want his speech and the lessons it teaches to be forgotten. Instead, I view it as a call to action, a call to never turn away from the people you hurt, even when you know you are doing right. A call to not forget them. A call to try and help them too.
There is perhaps no temptation greater to the amateur (or professional) historian than to take a set of historical facts and draw from them a grand narrative. This tradition has existed at least since Gibbon wrote The History of the Decline and Fall of the Roman Empire, with its focus on declining civic virtue and the rise of Christianity.
Obviously, it is true that things in history happen for a reason. But I think the case is much less clear that these reasons can be marshalled like soldiers and made to march in neat lines across the centuries. What is true in one time and place may not necessarily be true in another. When you fall under the sway of a grand narrative, when you believe that everything happens for a reason, you may become tempted to ignore all of the evidence to the contrary.
Instead of praying at the altar of grand narratives, I’d like to suggest that you embrace the ambiguity of history, an ambiguity that exists because…
Context Is Tricky
Here are six sentences someone could tell you about their interaction with the sharing economy:
I stayed at an Uber last night
I took an AirBnB to the mall
I deliberately took an Uber
I deliberately took a Lyft
I deliberately took a taxi
I can’t remember which ride-hailing app I used
Each of these sentences has an overt meaning. They describe how someone spent a night or got from place A to place B. They also have a deeper meaning, a meaning that only makes sense in the current context. Imagine your friend told you that they deliberately took an Uber. What does it say about them that they deliberately took a ride in the most embattled and controversial ridesharing platform? How would you expect their political views to differ from someone who told you they deliberately took a taxi?
Even simple statements carry a lot of hidden context, context that is necessary for full understanding.
Do you know what the equivalent statements to the six I listed would be in China? How about in Saudi Arabia? I can tell you that I don’t know either. Of course, it isn’t particularly hard to find these out for China (or Saudi Arabia). You may not find a key written down anywhere (especially if you can only read English), but all you have to do is ask someone from either country and they could quickly give you a set of contextual equivalents.
Luckily historians can do the same… oh. Oh damn.
When you’re dealing with the history of a civilization that “ended” hundreds or thousands of years ago, you’re going to be dealing with cultural context that you don’t fully understand. Sometimes people are helpful enough to write down “Uber=kind of evil” and “supporting taxis = very left wing, probably vegan & goes to protests”. A lot of the time they don’t though, because that’s all obvious cultural context that anyone they’re writing to would obviously have.
And sometimes they do write down even the obvious stuff, only for it all to get burned when barbarians sack their city, leaving us with no real way to understand if a sentence like “the opposing orator wore red” has any sort of meaning beyond a statement of sartorial critique or not.
All of this is to say that context can make or break narratives. Look at the play “Hamilton”. It’s a play aimed at urban progressives. The titular character’s strong anti-slavery views are supposed to code to a modern audience that he’s on the same political team as them. But if you look at American history, it turns out that support for abolishing slavery (and later, abolishing segregation) and support for big corporations over the “little guy” were correlated until very recently. In the 1960s through the 1990s, there was a shift such that the Democrats came to stand for both civil rights and supporting poorer Americans, instead of just the latter. Before this shift, Democrats were the party of segregation, not that you’d know it to see them today.
Trying to tie Hamilton into a grander narrative of (eventual) progressive triumph erases the fact that most of the modern audience would strenuously disagree with his economic views (aside from urban neo-liberals, who are very much in Hamilton’s mold). Audiences end up leaving the play with a story about their own intellectual lineage that is far from correct, a story that may cause them to feel smugly superior to people of other political stripes.
History optimized for this sort of team or political effect turns many modern historians or history writers into…
Gaps in context – modern readers missing the true significance of gestures, words, and acts steeped in a particular extinct culture – combined with the fact that it is often impossible to really know why someone in the past did something, mean that some of history is always going to be filled in with our best guesses.
Professor Mary Beard really drove this point home for me in her book SPQR. She showed me how history that I thought was solid was often made up of myths, exaggerations, and wishful thinking on the parts of modern authors. We know much less about Rome than many historians had led me to believe, probably because any nuance or alternative explanation would ruin their grand theories.
When it comes to so much of the past, we genuinely don’t know why things happened.
I recently heard two colleagues arguing about The Great Divergence – the unexplained difference in growth rates between Europe and the rest of the world that became apparent in the 1700s and 1800s. One was very confident that it could be explained by access to coal. The other was just as confident that it could be explained by differences in property rights.
I waded in and pointed out that Wikipedia lists fifteen possible explanations, all of which or none of which could be true. Confidence about the cause of the Great Divergence seems to me a very silly thing. We cannot reproduce it, so all theories must be definitionally unfalsifiable.
But both of my colleagues had read narrative accounts of history. And these narrative accounts had agendas. One wished to show that all peoples had the same inherent abilities and so cast The Great Divergence as chance. The other wanted to show how important property rights are and so made those the central factor in it. Neither gave much time to the other explanation, or any of the thirteen others that a well trafficked and heavily edited Wikipedia article finds equally credible.
Neither agenda was bad here. I am in fact broadly in favour of both. Yet their effect was to give two otherwise intelligent and well-read people a myopic view of history.
So much of narrative history is like this! Authors take the possibilities they like best, or that support their political beliefs the best, or think will sell the best, and write them down as if they are the only possibilities. Anyone who is unlucky enough to read such an account will be left with a false sense of certainty – and in ignorance of all the other options.
Of course, I have an agenda too. We all do. It’s just that my agenda is literally “the truth resists simplicity”. I like the messiness of history. It fits my aesthetic sense well. It’s because of this sense that I’d like to encourage everyone to make their next foray into history free of narratives. Use Wikipedia or a textbook instead of a bestselling book. Read something by Mary Beard, who writes as much about historiography as she writes about history. Whatever you do, avoid books with blurbs praising the author for their “controversial” or “insightful” new theory.
Leave, just once, behind those famous narrative works like “Guns, Germs, and Steel” or “The History of the Decline and Fall of the Roman Empire” and pick up something that embraces ambiguity and doesn’t bury messiness behind a simple agenda.
[Content Warning: Discussions of genocide and antisemitism]
Hannah Arendt’s massive study of totalitarianism, The Origins of Totalitarianism, is (at the time of writing), the fourth most popular political theory book on Amazon (after two editions of The Prince, Plato’s Republic, and a Rebecca Solnit book). It’s also a densely written tome, not unsuitable for defending oneself from wild animals. Many of its paragraphs could productively be turned into whole books of their own.
I’m not done with it yet. But a review and summary of the whole thing would be far too large for a single blog post. Therefore, I’m going to review its three main sections as I finish them. Hannah Arendt’s Eichmann in Jerusalem set my mind afire and spurred my very first essay on political theory, so I’m very excited to be reviewing the section on antisemitism today.
(Reminder: unless I’m specifically claiming a viewpoint as my own, I am merely summarizing Arendt’s views as I best understand them)
Arendt’s history of antisemitism begins when religious pogroms against Jews ended. Arendt isn’t really interested in this earlier persecution, which she views as entirely distinct from later antisemitism. As far as I can tell, there are two reasons that underlie this distinction. The first is the lack of a political component to the earlier pogroms. Their lack of politicization – there was no one in Christendom who really spoke against them – made them almost by definition politically useless.
For antisemitism to become a rallying cry for a movement, it needed to be more than just antisemitism. It had to also implicate a whole host of people despised by the mob, people who could be expected to stand up against antisemitism, or people who could be compared to Jews so as to focus hatred on them (a practice which continues to this day). The unanimity of the Christian pogroms robbed them of any usage in power struggles between Christians, because any Christian could take up the banner of the pogroms and so divide support for their rivals.
Second, there was always one escape from the Christian pogroms: conversion to Christianity. This escape was notably lacking from later, political antisemitism. Jewishness became a racial stain carried down through the generations, not merely a different religion.
Nowhere is this distinction better seen than between the Vichy government and the occupying Germans. The Germans would ask the Vichy regime to exterminate Jews. And the Vichy government would wipe out foreign Jews, or Jews that didn’t have French citizenship, or Jews that weren’t willing to convert. The French were still somewhat in the old Christian mindset of “good” Jews and “bad” Jews. The Germans wished to exterminate all Jews and made no distinctions between good and bad.
Arendt analyzes this second distinction through the lens of vice and crime. To Arendt, a vice is a crime which has become accepted as inextricably linked to certain people, such that they cannot help but commit it. She describes this as similar to an addict being hooked on drugs.
When you accept that certain people have vices, you may excuse them some of their crimes. According to Arendt, in late 19th century/early 20th century society, a judge would face no opposition to giving a lighter sentence for murder to a gay man, or a lighter sentence for treason to a Jew, because these crimes were viewed to be a matter of racial predestination.
The danger that Arendt identifies here is that this “tolerance” for murder or treason can be quickly reversed. And when this happens, it isn’t enough just to punish the traitors or murderers. Everyone who is racially or dispositionally inclined to these crimes must then be “liquidated”.
Hannah Arendt’s exact phrasing of the threat here is:
It is an attraction to murder and treason which hides behind such perverted tolerance, for in a moment it can switch to a decision to liquidate not only all actual criminals but all who are “racially” predestined to commit certain crimes. Such changes take place whenever the legal and political machine is not separated from society so that social standards can penetrate into it and become political and legal rules. The seeming broad-mindedness that equates crime and vice, if allowed to establish its own code of law, will invariably prove more cruel and inhuman than laws, no matter how severe, which respect and recognize man’s independent responsibility for his behavior.
Having separated modern antisemitism from earlier religious pogroms, Arendt also spends some time separating nationalism from totalitarianism. Nationalism, to Arendt, is always inward focused. It views one’s own nation as best and spurns contact with outsiders. Nationalism may be paranoid and bellicose, but it has no desire to expand, nor any desire to coordinate with foreign nationalists. Totalitarianism, on the other hand, is always focused outwards, its eyes set on world domination.
There were, of course, international organizations of both fascists and communists, the two totalitarian ideologies. But I wonder how nations like North Korea (with no real plausible path to world domination) and Eritrea (which as far as I know is entirely inward focused) fit into this framework. Both are definitely totalitarian, but they seem to falsify this important criterion. I’ll look for more on how to parse those countries when I get to the third and final part of this book, which covers totalitarianism itself.
Let’s pause for a second and ask why a book on totalitarianism is focused so much on antisemitism. One of the most enduring questions of 20th century history is “why were the Jews Hitler’s victims?” Why was this people singled out for destruction and not some other? Was it arbitrary? While Hannah Arendt may have some hindsight bias here, to her the attempt at extermination of the Jews was inevitable in light of the international focus of totalitarian ideologies and the international relationships of European Jews.
While banking may have become less and less Jewish dominated over the course of the 18th and 19th centuries, European Jews (at least the best off) still had an international bent. Arendt relates an anecdote about the end of the Franco-Prussian war in 1871; apparently Bismarck’s approach to terms was basically ‘have their Jews work it out with our Jews’, and she says that this generalizes to how other treaties were made at the time.
This international network of leading Jews meant that an antisemitic ideology had to frame itself in international terms to attack Jews, or that an ideology could explain its international bent by attacking Jews. Therefore, by virtue of being a people without a nation (who instead lived in all European nations), European Jews became an excellent justification for an international and expansionist totalitarian power.
I think these rumours of international control were a cruel double bind for the Jewish people: any successful quashing of the rumours of Jewish domination would have just served as proof for the next round, while the failure to quash them, brought about by a very real lack of power, meant that they flourished, despite the fact that their continued existence should have itself been all that was required to prove them false.
The view of Jews as international and of one mind was fueled by the clannishness that came about as a natural result of the social discrimination Jews faced in European society. Anti-Semites could imagine that Jewish endogamy meant that all Jews were of one family and therefore had a single goal, which was normally considered to be “world domination”. If even one member of this global clan was left alive, then the anti-Semites believed that they would have failed.
Antisemitism was a useful tool for whipping up the mob because in early modern times, Jews were despised. Arendt again separates this from the earlier religious hatred and attributes it to Jews losing their old formal position (as the state bankers) but not their “privileges” or (at least as far as visible Jews, like the Rothschilds, were concerned) their wealth. This loss of formal position, but not the wealth it brought, is identified by Arendt as a particularly vulnerable and despised state – it is, she claims, the state the French aristocracy found themselves in before the revolution. Arendt even claims that no one hated the aristocracy so much when they were fulfilling the societal function of oppressing peasants, although I wonder if it might instead be possible that they were then just as (or more) hated, but possessed a surer monopoly on violence and discourse, such that the earlier hate was better hidden.
Arendt believes that all of these fault lines were compounded by several strategies that were undertaken by Jews, strategies that had served them well in the old days of forced conversions, but that were extremely maladaptive when faced with modern antisemitism.
First, Arendt reckoned that Jews had a special relationship with the state. They had formerly served the state (not the body politic, mind you, but the state) as its bankers, finding the capital it needed to wage its wars and build its monuments. In exchange for this service, the bankers had won special privileges for themselves (although note that these privileges were lesser than those afforded to Christians who served the state as e.g. knights) and some modicum of protection by the state for their coreligionists.
(Because of this requirement for paternalistic protection, any loss of central power for a state was almost always a disaster for Jews; petty warlords certainly did need their moneylending services, but they were much less adept at providing protection in return.)
Arendt reckons that this may have made the Jews of Europe doubly despised, first via the general Christian antipathy that was dominant at the time and second because it meant that any who had reason to hate the state would also hate the Jews, because of their highly visible relationship with it.
That the state had mostly upheld its end of the bargain in this deal led to the second strategy that backfired: the Jews were complacent with mere legal rights, despite their despised status. They thought that legal rights could save them from any of the consequences of being despised. In the modern era, the strength of this purely legal protection was first put to the test in France, when the Dreyfus Affair erupted.
Captain Alfred Dreyfus was a French Jew who was wrongly convicted of treason in 1894. In 1896, new evidence came to light that showed he was innocent. The military suppressed this evidence and trumped up new charges against Dreyfus, but word leaked out and a scandal was quickly born.
It is said that while the affair was ongoing, nearly everyone in Europe had an opinion on it. Nominally, the Dreyfusards believed Dreyfus was innocent, while the anti-Dreyfusards believed he was guilty, but both positions quickly gained several ancillary beliefs. Dreyfusards became noted for their anti-clerical positions – including that “secret Rome” controlled much of global affairs. The anti-Dreyfusards became authoritarian, nationalistic, and fiercely anti-Semitic. They believed that “secret Judah” controlled everything.
I want to stress how little importance people ended up putting on Dreyfus himself. La Croix, a Catholic newspaper, at one point stated: “it is no longer a question whether Dreyfus is innocent or guilty but only of who will win, the friends of the army or its foes”. It is impossible to explain how the discredited trial of a single military officer could lead to jack-booted thugs attacking intellectuals and crying for “death to the Jews!” without the understanding of the usefulness of antisemitism for whipping up the mob that this book engenders.
“The mob”, as distinct from “the people”, is one of the key concepts in Origins of Totalitarianism. It’s Arendt’s most important example of the type of politics she despises and she returns to it again and again. She describes the mob as the “déclassé” and the “residue of all classes”; the mob are those people who are excluded from civil and economic opportunities by virtue of their education (or lack thereof), disposition, personality, or airs, and deeply resent this exclusion, to the point where they wish to destroy the society that excluded them.
Arendt claims that the representation of all classes within the mob makes it easy to mistake the mob as representative of the people in general. Since this argument can be used to disenfranchise basically any group seeking rights, Arendt suggests that the key difference between a mob and a genuine movement lies in what sort of demands the group makes. The people will demand to have their voices heard in government. The mob will demand a strong leader to fix everything (by ripping apart the society that has excluded them). In the case of the anti-Dreyfusards, these strong leaders enjoyed a symbiotic relationship with the mob; they were all recovering aesthetes and nihilists and saw in the mob a “primitive and virile strength”, something they found admirable and exhilarating.
Remember that there already was a perception that the Jews secretly controlled everything and that this theory was politically useful because it justified an international ideology and allowed for a polarization of society around attacking a hated other. With respect to the mob, Arendt gives a third reason why this sort of conspiracy theory might be useful as a rallying cry: it helps explain why the déclassé of the mob have been cast out of and abandoned by society. It is much easier for them to believe that there is some worldwide conspiracy than that there is some fault of their own.
(I trust that anyone reading this in 2018 sees why I found Arendt’s description of the mob so frightening. In the margin of the passage where she introduces the mob, I have written “MAGA voters?”)
Against the mob (and its steadily escalating violence) stood Clemenceau (then a journalist), Émile Zola, and a small cadre of liberal and radical intellectuals and their supporters. Arendt says that what made their position unique was their support for purely abstract concepts, like justice. If the rallying call of the mob was “Death to the Jews”, then it seems as if the rallying call of those arrayed against it was fiat justitia ruat caelum, or perhaps the old battle-cry of the French First Republic: liberté, égalité, fraternité.
Ultimately, the appeals of the intellectuals convinced the socialists, if not in the primacy of justice, then that their class interests were served by marching against the anti-Dreyfusards. And so the workers took to the streets and the campaign of terror of the mob was ended.
There was of course rather a large difference between ending open violent antisemitism and actually acquitting Dreyfus. Here the good and great of French society, the delegates of the representative assembly, were barely split: all but one opposed a retrial. The fight around a retrial was to simmer (largely outside of the chambers of government) for three years, between 1897 and 1900. During this time, Dreyfusards used the courts and the press to try and sway public opinion and force the matter, while the anti-Dreyfusards, the Catholic priests, and the army tried to launch a coup d’état (though Arendt mocks that whole endeavour to the point where I think they never got very close to actually seizing power).
Notable were the reactions of Jews outside of Dreyfus’s immediate family to the case. Arendt contends that they set such store by legal equality that they believed that if Dreyfus had been found guilty in a court of law, he must be guilty, or that if the verdict was false, it was just a legal error, not an attack against them as a people. Arendt is obviously speaking with the benefit of hindsight here; I wonder how obvious any of this could have been to a people used to discrimination, both social and official.
There was a passage here that felt particularly relevant even now. Arendt suggests that society at the time saw every Jew, however penniless, as a potential Rothschild (and therefore unworthy of any protection or “special treatment”). Clemenceau, she says, was one of the few true friends the Jews had because he saw them, all of them, even the Rothschilds with their vast fortune, as members of one of Europe’s oppressed peoples. To this day, despite the Holocaust, the Jew quotas, the cries of “none is too many” by now-dead bureaucrats or “the Jews will not replace us” by a tiki-torch wielding mob today, and the high rate of antisemitic hate crime, it is hard to find many people who will stand up and say that Jews face systematic prejudice and oppression.
The end of the affair reversed Marx’s famous maxim of history, in that it was the farce that presaged tragedy. Appeals to justice failed. The popular hatred of the aristocracy and the bourgeoisie failed. Zola and Clemenceau’s appeals all failed. But a threatened boycott of the Paris Exposition of 1900 succeeded. The anti-Dreyfusard government was censured, and Dreyfus was pardoned.
It was only much later, via an illegal retrial, that an exoneration was achieved.
The fallout of the trials was far reaching. Rights for Catholics, including Catholic schools, were curtailed. Arendt bitterly remarks that this was a failure of politics; instead of the simple republican principle of equality for all, there was “one exception for Jews, and another which threatened the freedom of conscience for Catholics”.
The trial of Dreyfus occupies more space than any other single incident in the volume on antisemitism. It allows Arendt to introduce the idea of “the mob” and the conspiracy (here Jewish domination) that motivates it. But its centrality is mostly, I think, because Arendt views it as the only harbinger of what was to come; the first incident of true violent antisemitism (remember, Arendt views this as in a separate class from the ubiquitous Christian Jew hatred which characterized pre-modern Europe), as opposed to the “mere” social discrimination Jews faced in European society.
I was shocked by how modern this social discrimination was. Jews were consistently exoticized (some of which must have come from fascination with their “vice”, as Arendt defined it). She recounts a review of a Jewish poet from the 19th century that lamented the normality of his poetry (the reviewer had expected something other than normal human poetry).
This exoticism was both a social curse and a key. It was a curse in that it always set Jews apart and that the spectre of social discrimination, of being so exotic that one became the other, was always present. It was a key in that for certain “exceptional” Jews, Jews that society agreed “weren’t like the others”, the fact of their exception could lead to social climbing. These “exceptional” Jews were alternatively welcomed by, showed off almost like exhibits, or excluded by high society, depending on their rarity, their own merits, and the strength of antisemitic sentiments.
As Jews became more normalized in European society, it became harder and harder to be the exception, while the shadow of social discrimination never lifted. Therefore, increasing normalization led to less acceptance in society, not more. Arendt disagrees with the (she claims) commonly held notion that it was primarily Christian antipathy that kept Jewish communities from dispersion and assimilation in the Middle Ages, but she does think that social discrimination became an important limit on dispersion just as assimilation became possible.
This made me wonder about the nature of assimilation and safety. It’s certainly true that the Irish in America are now obviously safe beyond the reach of any Know-Nothing. But it’s clear that they had to give up something to attain that safety. For assimilated Irish (or assimilated Scots or Germans, the stock of my family), there is little of the old culture and none of the old language left.
The central political question of a multi-ethnic democracy might be “how can we ensure safety, without the need for total assimilation?” And certainly, I do not wish to suggest that assimilation is the surest of safeties. It did not save the assimilated German Jews. I wonder if there is in fact a critically dangerous period during the very act of assimilation, where a people is vulnerable and dispersed just as social backlash against their increasing rights reaches a fever pitch.
Here, Arendt has no answers for me.
There might be those who question whether reading about antisemitism from Hannah Arendt is like letting the fox guard the chicken coop. One of the most enduring controversies of Hannah Arendt’s life was her alleged antisemitism. Her romance with the noted philosopher and Nazi Heidegger (although note that their relationship preceded his conversion to Nazism and she did not have contact with him while he was a Nazi), her criticism of Jewish leaders in her coverage of the Eichmann trial, and her criticism of historical Jewish attempts to find safety in this section of The Origins of Totalitarianism are the evidence most often given in support of her supposed “self-hating” nature (as she was herself a Jew, and moreover a German Jew who fled the Nazis).
I think it is certainly true that she was an often-harsh critic of some things that Jews had done and that she wrote perhaps unfairly and with the benefit of hindsight. I think it is also undeniable that she was biased against certain Jews (her cringe-worthy and horribly racist description of Ostjuden and middle-eastern Jews opens Eichmann in Jerusalem).
But I think the evidence for her “antisemitism” is often overstated and mainly comes from misreading her works; I mentioned above just how careful a reader must be if they don’t want to be tripped up by her redefinitions of common words. The criticism that she “defended” Eichmann as “just following orders” and not really culpable can be dispelled simply by reading Eichmann in Jerusalem, a book which ends with her calling for his death and features a section where she systematically dismantles the argument that he was just following orders.
On the other side of the equation, we have her pioneering work on antisemitism which is fiercely critical of anti-Semites and all who enabled them, her work to resettle Jews in Israel, her work in Eichmann in Jerusalem systematically documenting the extent of the Holocaust, and her fierce and rousing defense of the Holocaust as a crime against humanity perpetrated on the body of the Jewish people (from her biopic: “because Jews are human, the very status the Nazis tried to deny them”).
She was assuredly arrogant. She assuredly burned bridges. A set of lecture notes she once prepared said:
For conscience to work: either a very strong religious belief—extremely rare. Or: pride, even arrogance. If you say to yourself in such matters: who am I to judge?—you are already lost.
There is very little positive said in Part 1 of The Origins of Totalitarianism, which is to say that it doesn’t give us very much idea of what we can do to prevent totalitarianism and barbarism. But if we could ask Hannah Arendt, the great political theorist of the 20th century, the lost child of the French Revolution, she might say something like: “find your principles and stick to them; think about what is the right thing and do it; defend liberty always.”
Since Socrates and Plato, we usually call thinking to be engaged in that silent dialogue between me and myself. In refusing to be a person Eichmann utterly surrendered that single most defining human quality: that of being able to think. And consequently, he was no longer capable of making moral judgements. This inability to think created the possibility for many ordinary men to commit evil deeds on a gigantic scale, the likes of which have never been seen before.
It is true, I have considered these questions in a philosophical way. The manifestation of the wind of thought is not knowledge, but the ability to tell right from wrong, beautiful from ugly. And I hope that thinking gives people the strength to prevent catastrophes in these rare moments when the chips are down.
Increasingly, it seems like this might be one of those moments where the chips could be down. I shivered when I read some of Arendt’s descriptions of the mob, because I knew it wasn’t a hypothetical. I’ve seen it, on social media and at rallies. With tiki-torches and with weapons, I have seen the mob. And I hope reading this book and others like it and thinking will give me the strength to act to prevent catastrophe if I am ever so unlucky to have to.
 I want to make it clear that Hannah Arendt (and I) don’t believe the old canard about Jews controlling the world. She specifically mentions that this lie is baffling, because when it was started, it was true that a rather small group of European statesmen essentially did control the world. But none of those statesmen were Jewish and all of them were so at cross-purposes that no coordination occurred.
When Arendt talks about internationalism in the European Jewish community, she is simply saying that there were many ties of family and friendship among Jews of different countries, which meant that privileged Jews were more likely to have close associates in countries other than the one in which they resided, even compared to similarly privileged gentiles.
 “Privileges” here being “were treated the same as gentiles and weren’t discriminated against legally”. I am reminded forcefully of David Schraub’s excellent essay about the recent tendency to equate the Holocaust and occupation of the West Bank. I think Arendt unearths reasonable evidence for the claim David makes, that “gentiles believed that superiority over Jews was part of the deal that they were always offered”, such that loss of that superiority feels like a special privilege for Jews.
 Given that Christian and secular hatred of Jews was without reason, it’s unclear what they could have done to be less despised.
 There have been several times in history when it looked like conspiracies against Catholics would reach the same fever pitch as those against Jews, but this has never quite materialized. Catholics in North America are still more likely to face hate crimes than other Christian denominations, but the number and severity of these crimes pale in comparison to the crimes conducted against Jews.
Even if the internationalism of the Catholic Church and its occasional use of the confessional for political gain (although the latter has not been seen in recent times) make it an appealing target for conspiracy theories, it offers much less in terms of racial theories. In Germany at least, racial theories would have been much less effective if the target was Catholicism, since essentially all Germans had been Catholic before the Reformation and associated wars of religion. That said, Christianity arose from Judaism, so I’m not sure if the targeting of Jews rather than Catholics can be explained by religious lineage alone.
 Zola hated the pardon. He said all it accomplished was “to lump together in a single stinking pardon men of honour with the hoodlums”.
 This was very important to Arendt, because she needed to show the totality of moral collapse in “respectable” German society in order to prove her point about the banality of evil. She recounts that Eichmann actually ignored Himmler’s orders to stop killing Jews, because within the context of the Third Reich, they were unlawful orders that went against the values of the state. She then goes on to present distressing evidence about just how far this moral rot extended and just how easy it was for Hitler to cultivate it.
Epistemic Status: Full of sweeping generalizations because I don’t want to make it 10x longer by properly unpacking all the underlying complexity.
[9 minute read]
In 2006, Dr. Atul Gawande wrote an article in The New Yorker about maternal care entitled “How Childbirth Went Industrial”. It’s an excellent piece from an author who consistently produces excellent pieces. In it, Gawande charts the rise of the C-section, from its origin as a technique so dangerous it was considered tantamount to murder (and consequently banned on living mothers), to its current place as one of the most common surgical procedures carried out in North American hospitals.
The C-section – and epidurals and induced labour – have become so common because obstetrics has become ruthlessly focused on maximizing the Apgar score of newborns. Along the way, the field ditched forceps (possibly better for the mother yet tricky to use or teach), a range of maneuvers for manually freeing trapped babies (likewise difficult), and general anesthetic (genuinely bad for infants, or at least for the Apgar scores of infants).
The C-section has taken the place of much of the specialized knowledge of obstetrics of old, not least because it is easy to teach and easy for even relatively less skilled doctors to get right. When Gawande wrote the article, there was debate about offering women in their 39th week of pregnancy C-sections as an alternative to waiting for labour. Based on the stats, this hasn’t quite come to pass, but C-sections have become slightly more prevalent since the article was written.
I noticed two laments in the piece. First, Gawande wonders at the consequences of such an essential aspect of the human experience being increasingly (and, based on the studies that show forceps are just as good as C-sections, arguably unnecessarily) medicalized. Second, there’s a sense throughout the article that difficult and hard-won knowledge is being lost.
The question facing obstetrics was this: Is medicine a craft or an industry? If medicine is a craft, then you focus on teaching obstetricians to acquire a set of artisanal skills—the Woods corkscrew maneuver for the baby with a shoulder stuck, the Lovset maneuver for the breech baby, the feel of a forceps for a baby whose head is too big. You do research to find new techniques. You accept that things will not always work out in everyone’s hands.
But if medicine is an industry, responsible for the safest possible delivery of millions of babies each year, then the focus shifts. You seek reliability. You begin to wonder whether forty-two thousand obstetricians in the U.S. could really master all these techniques. You notice the steady reports of terrible forceps injuries to babies and mothers, despite the training that clinicians have received. After Apgar, obstetricians decided that they needed a simpler, more predictable way to intervene when a laboring mother ran into trouble. They found it in the Cesarean section.
Medicine would not be the first industry to industrialize. The quasi-mythical King Ludd who gave us the phrase “Luddite” was said to be a weaver, put out of business by the improved mechanical knitting machines. English programs turn out thousands of writers every year, all with an excellent technical command of the English language, but most with none of the emotive power of Gawande. Following the rules is good enough when you’re writing for a corporation that fears to offend, or for technical clarity. But the best writers don’t just know how to follow the rules. They know how and when to break them.
If Gawande was a student of military history, he’d have another metaphor for what is happening to medicine: warriors are being replaced by soldiers.
If you ever find yourself in possession of a spare hour and feel like being lectured breathlessly by a wide-eyed enthusiast, find your local military history buff (you can identify them by their collection of swords or antique guns) and ask them whether there’s any difference between soldiers and warriors.
You can go do this now, or I can fill in, having given this lecture many times myself.
Imagine your favourite (or least favourite) empire from history. You don’t get yourself an empire by collecting bottle caps. To create one, you need some kind of army. To staff your army, you have two options. Warriors, or soldiers.
(Of course, this choice isn’t made just by empires. Their neighbours must necessarily face the same conundrum.)
Warriors are the heroes of movies. They were almost always the product of training that starts at a young age and more often than not were members of a special caste. Think medieval European Knights, Japanese Samurai, or the Hashashin fida’i. Warriors were notable for their eponymous mastery of war. A knight was expected to understand strategy and tactics, riding, shooting, fighting (both on foot and mounted), and wrestling. Warriors wanted to live up to their warrior ethos, which normally emphasized certain virtues, like courage and mercy (to other warriors, not to any common peasant drafted to fight them).
Soldiers were whichever conscripts or volunteers someone could get into a reasonable standard of military order. They knew only what they needed to complete their duties: perhaps one or two simple weapons, how to march in formation, how to cook, and how to repair some of their equipment. Soldiers just wanted to make it through the next battle alive. In service to this, they were often brutally efficient in everything they did. Fighting wasn’t an art to them – it was simple butchery and the simpler and quicker the better. Classic examples of soldiers are the Roman Legionaries, Greek Hoplites, and Napoleon’s Grande Armée.
The techniques that soldiers learned were simple because they needed to be easy to teach to ignorant peasants on a mass scale in a short time. Warriors had their whole childhood for elaborate training.
(Or at least, that’s the standard line. In practice, things were never quite as clear cut as that – veteran soldiers might have been as skilled as any warrior, for example. The general point remains though; one on one, you would always have bet on a warrior over a soldier.)
But when you talk about armies, a funny thing happens. Soldiers dominated. Individually, they might have been kind of crap at what they did. Taken as a whole though, they were well-coordinated. They looked out for each other. They fought as a team. They didn’t foolishly break ranks, or charge headlong into the enemy. When Germanic warriors came up against Roman soldiers, they were efficiently butchered. The Germans went into battle looking for honour and perhaps a glorious death. The Romans happily gave them the latter and so lived (mostly) to collect their pensions. Whichever empire you thought about above almost certainly employed soldiers, not warriors.
It turns out that discipline and common purpose have counted for rather a lot more in military history than simple strength of arms. Of this particular point, I can think of no better example than the rebellion that followed the Meiji restoration. The few rebel samurai, wonderfully trained and unholy terrors in single combat, were easily slaughtered by the Imperial conscripts, who knew little more than which side of a musket to point at the enemy.
The very fact that the samurai didn’t embrace the firing line is a point against them. Their warrior code, which esteemed individual skill, left them no room to adopt this devastating new technology. And no one could command them to take it up, because they were mostly prima donnas where their honour was concerned.
I don’t want to be too hard on warriors. They were actually an efficient solution to the problem of national defence if a population was small and largely agrarian, lacked political cohesion or logistical ability, or was otherwise incapable of supporting a large army. Under these circumstances, polities could not afford to keep a large population under arms at all times. This gave them several choices: they could rely on temporary levies, who would be largely untrained. They could have a large professional army that paid for itself largely through raiding, or they could have a small, elite cadre of professional warriors.
All of these strategies had disadvantages. Levies tended to have very brittle morale, and calling up a large proportion of a population makes even a successfully prosecuted war economically devastating. Raiding tends to make your neighbours really hate you, leading to more conflicts. It can also be very bad for discipline and can backfire on your own population in lean times. Professional warriors will always be dwarfed in numbers by opponents using any other strategy.
Historically, it was never as simple as solely using just one strategy (e.g. European knights were augmented with and eventually supplanted by temporary levies), but there was a clear lean towards one strategy or another in most resource-limited historical polities. It took complex cultural technology and a well-differentiated economy to support a large force of full time soldiers and wherever these pre-conditions were lacking, you just had to make do with what you could get.
When conditions suddenly call for a struggle – whether that struggle is against a foreign adversary, to boost profits, or to cure disease, it is useful to look at how many societal resources are thrown at the fight. When resources are scarce, we should expect to see a few brilliant generalists, or many poorly trained conscripts. When resources are thick on the ground, the amount that can be spent on brilliant people is quickly saturated and the benefits of training your conscripts quickly accrue. From one direction or another, you’ll approach the concept of soldiers.
Doctors as soldiers, not as warriors – this is the concept Gawande is brushing up against in his essay. These new doctors will be more standardized, with less room for individual brilliance but more affordances for working well in teams. The prima donnas will be banished (they aren’t good team players, even when they’re brilliant). Dr. Gregory House may have been the model doctor in the Victorian Age, or maybe even in the fifties, but I doubt any hospital would want him now. It may be that this standardization is just the thing we need to overcome persistent medical errors, improve outcomes across the board, and make populations healthier. But I can sympathize with the position that it might be causing us to lose something beautiful.
In software development, where I work, a similar trend can be observed. Start-ups aggressively court ambitious generalists, for whom freedom to build things their way is more important than market rate compensation and is a better incentive than even the lottery that is stock-options. At start-ups, you’re likely to see languages that are “fun” to work with, often dynamically typed, even though these languages are often considered less inherently comprehensible than their more “enterprise-friendly” statically typed brethren.
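The comprehensibility claim is easiest to see side by side. Here’s a toy sketch (entirely hypothetical, in Python, which happens to support both styles): in the dynamic style, a maintainer has to read the body or the call sites to learn what a function takes and returns, while type annotations put that contract right in the signature.

```python
# Dynamic style: nothing in the signature tells a maintainer what
# `records` or the return value look like -- they must read the body
# (or hunt down the callers) to find out.
def summarize(records):
    return {r["id"]: r["score"] * 2 for r in records}


# Annotated style: the same logic, but the contract is visible at a
# glance -- which is roughly what "enterprise-friendly" static typing buys.
def summarize_typed(records: list[dict[str, int]]) -> dict[int, int]:
    return {r["id"]: r["score"] * 2 for r in records}
```

Neither version behaves differently at runtime; the annotations exist purely for readers and for tooling, which is more or less the trade-off between “fun” and “enterprise-friendly” languages.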
It’s with languages like Java (or its Microsoft clone, C#) and C++ that companies like Google and Amazon build the underlying infrastructure that powers large tracts of the internet. Among the big pure software companies, Facebook is the odd one out for using PHP (and this choice required them to rewrite the code underlying the language from scratch to make it performant enough for their large load).
It’s also at larger companies that teamwork, design documents, and comprehensibility start to be very important (there is still room for superstars at all of the big “tech” companies; it’s only at companies further removed from tech – and therefore outside much of the competition for top talent – where being a good team player and writing comprehensible code might top brilliance as a qualifier). This isn’t to say that no one hiring for top talent appreciates things like good documentation or comprehensibility, merely that it is easy for a culture that esteems individual brilliance to overlook these things as marks of competence.
Here the logic goes that anyone smart enough for the job will be smart enough to untangle the code of their predecessors. As anyone who’s been involved in the untangling can tell you, there’s a big difference between “smart enough to untangle this mess” and “inclined to wade through this genius’s spaghetti code to get to the part that needs fixing”.
No doubt there exist countless other examples in fields I know nothing about.
The point of gathering all these examples and shoving them into my metaphor is this: I think there exist two important transitions that can occur when a society needs to focus a lot of energy on a problem. The transition from conscripts to soldiers isn’t very interesting, as it’s basically the outcome of a process of continuous improvement.
But the transition from warriors to soldiers is. It’s amazing that we can often get better results by replacing a few highly skilled generalists who apply a lot of hard fought decision making, with a veritable army of less well trained, but highly regimented and organized specialists. It’s a powerful testament to the usefulness of group intelligence. Of course, sometimes (e.g. Google, or the Mongols) you get both, but these are rare happy accidents.
Being able to understand where this transition is occurring helps you understand where we’re putting effort. Understanding when it’s happening within your own sphere of influence can help you weather it.
Also note that this transition doesn’t only go in one direction. As manufacturing becomes less and less prevalent in North America, we may return to the distant past, when manufacturing stuff was only undertaken by very skilled artisans making unique objects.
 Note the past tense throughout much of this essay; when I speak about soldiers and warriors, I’m referring only to times before the 1900s. I know comparatively little about how modern armies are set up. ^
 Best of all were the Mongols, who combined the lifelong training of warriors with the discipline and organization of soldiers. When Mongols clashed with European knights in Hungary, their “dishonourable” tactics (feints, followed by feigned retreats and skirmishing) easily took the day. This was all possible through a system of signal flags that allowed Subutai to command the whole battle from a promontory. European leaders were expected to show their bravery by being in the thick of fighting, which gave them no overall control over their lines. ^
 Historically, professional armies with good logistical support could somewhat pay for themselves by expanding an empire, which brought in booty and slaves. This is distinct from raiding (which does not seek to incorporate other territories) and has its own disadvantages (rebellion, over-extension, corruption, massive unemployment among unskilled labourers, etc.). ^
Hirohito and the Making of Modern Japan is the second book I’ve read about World War II and culpability. I apparently just can’t resist the urge to write essays after books like this, so here we go again. Since so much of what I got out of this book was spurred by the history it presented, I’m going to try and intersperse my thoughts with a condensed summary of it.
Aside from the prologue, which takes place just after Hirohito’s (arguably) extra-constitutional surrender, the book follows Hirohito’s life chronologically. Hirohito’s childhood was hardly idyllic. He spent most of it being educated. Meiji Era Japan drew heavily from Prussia and in Hirohito’s education, I saw an attempt to mold him into a Japanese Frederick the Great.
I think Dr. Bix is right to spend as much time on Hirohito’s childhood as he does. Lois McMaster Bujold once criticized authors who write characters that pop out of a box at 22, fully formed. It’s even more lamentable when historians do this.
Had Dr. Bix skipped this part, we’d have no explanation for why Hirohito failed so completely at demonstrating any moral fibre throughout the war. In order to understand Hirohito’s moral failings, we had to see the failings in Hirohito’s moral education. Dr. Bix does an excellent job here, showing how fatuous and sophistic the moral truths Hirohito was raised with were. His instructors lectured him on the moral and temporal superiority of the Imperial House over the people of Japan and the superiority of the people of Japan over the people of the world. Japan, Hirohito was taught, had to steward the rest of Asia towards prosperity – violently if need be.
For all that Hirohito might have been a pacifist personally, his education left him little room to be a pacifist as a monarch.
This certainly isn’t without precedent. The aforementioned Frederick the Great was known to complain about his “dog’s life” as a general. Frederick would have much preferred a life of music and poetry to one of war, but he felt that it was his duty to his country and his people to lead (and to win) its wars.
Hirohito would have felt even more pressure than Frederick the Great, because he probably sincerely believed that it was up to him to save Asia. The explicitly racist immigration policies of western nations, their rampant colonialism, and their refusal to make racial non-discrimination a key plank of the League of Nations made it easy for Hirohito’s teachers to convince him that he (and through him, all of Japan) was responsible for protecting “the yellow race”.
It is unfortunate that Hirohito was raised to be an activist emperor, because as Dr. Bix points out, the world was pretty done with monarchs by the time Hirohito was born. Revolutions and the First World War had toppled many of the major monarchies (in Russia, Austria-Hungary, and Germany). Those countries that still had monarchies heavily circumscribed the power of their monarchs. There were few countries left where monarchs both ruled and reigned. Yet this is what Hirohito’s teachers prepared him to do.
After an extensive education, Hirohito entered politics as the prince-regent for his ailing father, the Taisho Emperor. As regent, he attended military parades, performed some of the emperor’s religious duties, appointed prime ministers, and began to learn how Japanese politics worked.
There was a brief flourishing of (almost) true democracy based on party politics during the reign of the Taisho Emperor. Prime Ministers were picked by the emperor on the advice of the genrō, an extraconstitutional group of senior statesmen who directed politics after the Meiji Restoration (in 1868). The incapacity of Hirohito’s father meant that the genrō were free to choose whomever they wanted. Practically, this meant that cabinets were formed by the leader of the largest party in the Diet (the Japanese parliament). Unfortunately, this delicate democracy couldn’t survive the twin threats of an activist monarch and independent military.
The prime minister wasn’t the only power centre in the cabinet. The army and navy ministers had to be active duty officers, which gave the military an effective veto over cabinets – cabinets required these ministers to function, but the ministers couldn’t join the cabinet without orders from their service branch.
With an incompetent and sick emperor, the military had to negotiate with the civilian politicians – it could bring down a government, but couldn’t count on the genrō to appoint anyone better, limiting its bargaining power. When Hirohito ascended to the regency, the army began to go to him. By convincing Hirohito or his retinue to back this candidate for prime minister or that one, the military gained the ability to remove cabinets and replace them with those more to their liking.
This was possible because under Hirohito, consulting the genrō became a mere formality. In a parody of what was supposed to happen, Hirohito and his advisers would pick their candidate for prime minister and send him to Saionji, the only remaining genrō. Saionji always approved their candidates, even when he had reservations. This was good for the court group, because it allowed them to maintain the fiction that Hirohito only acted on advice and never made decisions of his own.
As regent, Hirohito made few decisions of his own, but the court group (comprised of Hirohito and his advisors) began laying the groundwork to hold real power when he ascended to the throne. For Hirohito, his education left him little other choice. He had been born and raised to be an active emperor, not a mere figurehead. For his entourage, increasing Hirohito’s influence increased their own.
I’m not sure who was more powerful: Hirohito or his advisors. Both had reasons for trusting the military. Hirohito’s education led him to view the military as a stabilizing and protective force, while his advisors tended to be nationalists who saw a large and powerful military as a pre-requisite for expansion. Regardless of who exactly controlled it, the court group frequently sided with the military, which made the military into a formidable political force.
Requiring active duty military officers in the cabinet probably seemed like a good idea when the Meiji Constitution was promulgated, but in retrospect, it was terrible. I’m in favour of Frank Herbert’s definition of control: “The people who can destroy a thing, they control it.” In this sense, the military could often control the government. The instability this wrought on Japan’s cabinet system serves as a reminder of the power of vetoes in government.
In 1926, the Taisho emperor died. Hirohito ascended to the throne. His era name was Shōwa – enlightened peace.
As might be expected, the court group didn’t wait long after Hirohito’s ascension to the throne to begin actively meddling with the government. Shortly after becoming emperor, Hirohito leaned on the prime minister to commute the death sentence of a married couple who allegedly planned to assassinate him. For all that this was a benevolent action, it wreaked political havoc, with the prime minister attacked in the Diet for failing to show proper concern for the safety of the emperor.
Because the prime minister was honour bound to protect the image of Hirohito as a constitutional, non-interventionist monarch, he was left defenseless before his political foes. He could not claim to be acting according to Hirohito’s will while Hirohito was embracing the fiction that he had no will except that of his prime minister and cabinet. This closed off the one effective avenue of defense he might have had. The Diet’s extreme response to clemency was but a portent of what was to come.
Over the first decade of Hirohito’s reign, Japanese politics became increasingly reactionary and dominated by the army. At the same time, Hirohito’s court group leveraged the instability and high turnover elsewhere in the government to become increasingly powerful. For ordinary Japanese, being a liberal or a communist became increasingly unpleasant. “Peace Preservation Laws” criminalized republicanism, anarchism, communism, or any other attempt to change the national fabric or structure, the kokutai – a word that quickly became heavily loaded.
In the early 1930s, political criticism increasingly revolved around the kokutai, as the Diet members realized they could score points with Hirohito and his entourage by claiming to defend it better than their opponents could. The early 1930s also saw the Manchurian Incident, a false flag attack perpetrated by Japanese soldiers to give a casus belli for invading Manchuria.
Despite opposition from both Hirohito and the Prime Minister, factions in the army managed to leverage the incident into a full-scale invasion, causing a war in all but name with China. Once the plotters demonstrated that they could expand Hirohito’s empire, he withdrew his opposition. Punishments, when there were any, were light, and conspirators were much more likely to receive medals than any real reprimand. Dr. Bix believes this sent a clear message – the emperor would tolerate insubordination, as long as it produced results.
After the Manchurian Incident (which was never acknowledged as a war by Japan) and the occupation of Manchuria, Japan set up a client kingdom and ruled Manchuria through a puppet government. For several years, the situation on the border with China was stable, in spite of occasional border clashes.
This stability wasn’t to last. In 1937, there was another incident, the Marco Polo Bridge Incident.
When an unplanned exchange of fire between Chinese and Japanese troops broke out in Beijing (then Peking), some in the Japanese high command decided the time was ripe for an invasion of China proper. Dr. Bix says that Hirohito was reluctant to sanction this invasion (over fears of the Soviet Union), but eventually gave his blessing.
Japan was constantly at war for the next eight years. Over the course of the war, Dr. Bix identified several periods where Hirohito actively pushed his generals and admirals towards certain outcomes, and many more where Hirohito disagreed with them, but ultimately did nothing.
I often felt like Dr. Bix was trying to have things both ways. He wanted me to believe that Hirohito was morally deficient and unable to put his foot down when he could have stood up for his principles and he wanted me to believe that Hirohito was an activist emperor, able to get what he wanted. This of course ignores a simpler explanation. What if Hirohito was mostly powerless, a mere figurehead?
Here’s an example of Dr. Bix accusing Hirohito of doing nothing (without adequate proof that he could have done anything):
When Yonai failed to act on the long-pending issue of a German alliance, the army brought down his cabinet and Hirohito did nothing to prevent it. (Page 357)
On the other hand, we have (in Hirohito’s own words) an admission that Hirohito had some say in military policy:
Contrary to the views of the Army and Navy General Staffs, I agreed to the showdown battle of Leyte, thinking that if we attacked at Leyte and America flinched, then we would probably be able to find room to negotiate. (Page 481)
I really wish that Dr. Bix had grappled with this conflict more and given me much more proof that Hirohito actually had all the power that Dr. Bix believes he did. It certainly seems that by Hirohito’s own admission, he was not merely a figurehead. Unfortunately for the thesis of the book, it’s a far leap from “not merely a figurehead” to “regularly guided the whole course of the war” and Dr. Bix never quite furnishes evidence for the latter view.
I was convinced that Hirohito (along with several other factions) acted to delay the wartime surrender of Japan. His reasoning for this was the same as his reasoning for the Battle of Leyte. He believed that if Japan could win one big victory, they could negotiate an end to the war and avoid occupation – and the risk to the emperor system that occupation would entail. When this became impossible, Hirohito pinned all his hopes on the Soviet Union, erroneously believing that they would intercede on Japan’s behalf and help Japan negotiate peace. For all that the atomic bombings loomed large in the public statement of surrender, it is likely that behind the scenes, the Soviet invasion played a large role.
Leaving aside for a minute the question of which interpretation is true: if Hirohito, or a clique including him, wielded much of the power of the state, he (or they) also suffered from one of the common downfalls of one-man rule. By Dr. Bix’s account, they were frequently controlled by those who controlled the information they received. We see this in the response to the Hull note, a pre-war American diplomatic communique that outlined what Japan would have to do before America would resume oil exports.
At the Imperial Conference on December 1, 1941, Foreign Minister Tōgō misled the assembled senior statesmen, generals and admirals. He told them that America demanded Japan give up Manchuria, which was a red line for the assembled leaders. Based on this information, the group (including Hirohito) assented to war. Here’s a quote from the journal of Privy Council President Yoshimichi Hara:
If we were to give in [to the United States], then we would not only give up the fruits of the Sino-Japanese War and the Russo-Japanese War, but also abandon the results of the Manchurian Incident. There is no way we could endure this… [I]t is clear that the existence of our empire is threatened, that the great achievement of the emperor Meiji would all come to naught, and that there is nothing else we can do. (Page 432)
The problem with all this is that Hull cared nothing for Manchuria, probably didn’t even consider it part of China, and would likely have been quite happy to let Japan keep it. By this point, the Japanese conquest of Manchuria had been a done deal for a decade and the world had basically given up on it being returned to China. Hull did want Japan to withdraw from French Indochina (present-day Vietnam) and China. Both of these demands were unacceptable to many of the more hawkish Japanese leaders, but not necessarily to the “moderates”.
Foreign Minister Tōgō’s lie about Manchuria was required to convince the “moderates” to give their blessing to war.
A word on Japanese “moderates”. Dr. Bix is repeatedly scornful of the term and I can’t help feeling sympathetic to his point of view. He believes that many of the moderates were only moderate by the standards of the far-right extremists and terrorists who surrounded them. It was quite possible to have an international reputation as a moderate in one of the pre-war cabinets and believe that Japan had a right to occupy Chinese territory seized without even a declaration of war.
I don’t think western scholarship has necessarily caught up here. On Wikipedia, Privy Council President Hara is described as “always reluctant to use military force… he protested against the outbreak of the Pacific war at [the Imperial Conference of December 1]”. I would like to gather a random sample of people and see if they believe that the journal entry above represents protesting against war. If they do, I will print off this blog post and eat it.
Manipulation of information played a role in Japan’s wartime surrender as well. Dr. Bix recounts how Vice Foreign Minister Matsumoto Shinichi presented Hirohito with a translation of the American demands that replaced one key phrase. The English text of the demands read: “the authority of the Emperor… to rule the state shall be subject to the Supreme Commander of the Allied Powers”. In the translation, Shinichi replaced “shall be subject to” with “shall be circumscribed by”.
Hirohito, who (in Dr. Bix’s estimation) acted always to preserve his place as emperor, accepted this (modified) demand.
Many accounts of World War II assume the civilian members of the Japanese cabinet were largely powerless. Here we see the cabinet shaping two momentous decisions (war and peace). They were able to do this because they controlled the flow of information to the military and the emperor. Hirohito and the military didn’t have their own diplomats and couldn’t look over diplomatic cables. For information from the rest of the world, they were entirely at the mercy of the foreign services.
One man rule can give the impression of a unified elite. Look behind the curtain though and you’ll always find factions. Deprived of legitimate means of conflict (e.g. contesting elections), factions will find ways to try and check each other’s influence. Here, as is often the case, that checking came via controlling the flow of information. This sort of conflict-via-information has real implications in current politics, especially if Donald Trump tries to consolidate more power in himself.
But how was it that such a small change in the demand could be so important? Dr. Bix theorized that Hirohito’s primary goal was always preserving the power of the monarchy. He chose foreign war because he felt it was the only thing capable of preventing domestic dissent. The far-right terrorism of the 1930s was therefore successful; it compelled the government to fight foreign wars to assuage it.
In this regard, the atomic bombs were actually a godsend to the Japanese leadership. They made it clear that Japan was powerless to resist the American advance and gave the leadership a face-saving reason to end the war. I would say this is conjecture, but several members of the court clique and military leadership actually wrote in their diaries that the bombs were “good luck” or the like. Here’s former Prime Minister Yonai:
I think the term is perhaps inappropriate, but the atomic bombs and the Soviet entry into the war are, in a sense, gifts from the gods [tenyu, also “heaven-sent blessings”]. This way we don’t have to say that we quit the war because of domestic circumstances. I’ve long been advocating control of our crisis, but neither from fear of an enemy attack nor because of the atomic bombs and the Soviet entry into the war. The main reason is my anxiety over the domestic situation. So, it is rather fortunate that we can now control matters without revealing the domestic situation. (Page 509)
Regardless of why exactly it came about, the end of the war brought with it the problem of trying war criminals. Dr. Bix alleges that there was a large-scale conspiracy amongst Japan’s civilian and military leadership to hide all evidence of Hirohito’s war responsibility, a conspiracy aided and abetted by General Douglas MacArthur.
The general was supreme commander of the allied occupation forces and had broad powers to govern Japan as he saw fit. Dr. Bix believes that in Hirohito, MacArthur saw a symbol he could use to govern more effectively. I’m not sure I was entirely convinced of a conspiracy – a very good conspiracy leaves the same evidence as no conspiracy at all – but it is undeniable that the defenses of the “Class A” war criminals (the civilian and military leadership charged with crimes against peace) were different from the defenses offered at Nuremberg, in a way that was both curious and most convenient for Hirohito.
Both sets of war criminals (in Tokyo and Nuremberg) tried to deny the legitimacy of “crimes against the peace” and claim their trials were just victor’s justice. But notably absent from all of the trials of Japanese leaders was the defense of “just following orders” that was so emblematic of the Nazis tried at Nuremberg. Unlike the Nazis, the Japanese criminals were quite happy to take responsibility. It was always them, never the emperor. I don’t think this is just a case of their leader having survived; I doubt the Nuremberg defendants would have been so loyal if Hitler had lived.
Of course, there is a potential parsimonious explanation for everyone having their stories straight. Hirohito could have been entirely innocent. Except, if Hirohito was so innocent, how can we explain the testimony Konoe made to one of his aides?
Fumimaro Konoe was the last prime minister before the Pearl Harbour attack and an opponent of war with the United States. He refused to take part in the (alleged) cover up. He was then investigated for war crimes and chose to kill himself. Of Hirohito, he said:
“Of course His Imperial Majesty is a pacifist and he wished to avoid war. When I told him that to initiate war was a mistake, he agreed. But the next day, he would tell me: ‘You were worried about it yesterday but you do not have to worry so much.’ Thus, gradually he began to lead to war. And the next time I met him, he leaned even more to war. I felt the Emperor was telling me: ‘My prime minister does not understand military matters. I know much more.’ In short, the Emperor had absorbed the view of the army and the navy high commands.” (Page 419)
Alas, this sort of damning testimony was mostly avoided at the war crimes trials. With Konoe dead and the rest of Japan’s civilian and military leadership prepared to do whatever it took to exonerate Hirohito, the emperor was safe. Hirohito was never indicted for war crimes, despite his role in authorizing the war and delaying surrender as he searched for a great victory.
Some of the judges were rather annoyed by the lack of indictment. The chief judge wrote: “no ruler can commit the crime of launching aggressive war and then validly claim to be excused for doing so because his life would otherwise have been in danger… It will remain that the men who advised the commission of a crime, if it be one, are in no worse position than the man who directs the crime be committed”.
This didn’t stop most of the judges from passing judgement on the criminals they did have access to. Some of the conspirators paid for their loyalty with their lives. The remainder were jailed. None of them spent much more than a decade in prison. By 1956, all of the “Class A” war criminals except the six who were executed and three who died in jail were pardoned.
The business and financial elite, two groups which profited immensely from the war, got off free and clear. None of them were even charged. Dr. Bix suggests that General MacArthur vetoed charging them; he had a country to run and couldn’t afford the disruption that would be caused if the business and financial elite were removed.
This leaves the Class B and Class C war criminals, the officers who were charged with more normal war crimes. Those officers who were tried in other countries were much more likely to face execution. Of the nearly 6,000 Class B and Class C war criminals charged outside of Japan, close to 1,000 were executed. A similar number were acquitted. Most of the remainder served limited criminal sentences.
Perhaps the greatest injustice of all was the fate of Unit 731. None of them were ever charged, despite carrying out bacteriological research on innocent civilians. They bought their freedom with research data the Americans coveted.
For all that their defenses differed from those of the Nuremberg criminals, the Japanese war criminals tried in Tokyo faced a similar fate. A few of them were executed, but most of them served sentences that belied the enormity of their crimes. Life imprisonments didn’t stick and pardons were forthcoming once the occupation ended. And as in Germany, some of the war criminals even ended up holding positions in government. Overall, the sentences gave the impression that in 1945, wars of aggression were much less morally troubling than bank robberies.
I had thought the difficulties Germany faced in denazification – and holding former Nazis accountable – were unique. This appears to be false. It seems to be very difficult to maintain the political will to keep war criminals behind bars after an occupation ends, as long as their crimes were not committed against their own people.
In light of this, I think it can be moral to execute war criminals. While I generally oppose the death penalty, this opposition is predicated on there being a viable alternative to execution for people who have flagrantly violated the social contract. Life imprisonment normally provides this, but I no longer believe that it can in the case of war criminals.
The Allies bear some of the blame for the clemency war criminals received. Japan’s constitution required it to seek approval from a majority of the nations that participated in the Tokyo trial. Ultimately, a majority of the eleven nations that were involved in the tribunal put improved ties with Japan over moral principles and allowed clemency to be granted. This suggests that even jailing war criminals outside their country of origin or requiring foreign consent for their pardon can be ineffective.
With both of these options removed, basic justice (and good incentive structures) seem to require all major war criminals to be executed. A rule of thumb is probably to execute any war criminal who would otherwise have been sentenced to twenty years or more in prison. It’s only these prisoners who stand to see their sentences substantially reduced in the inevitable round of pardons.
I also believe that convicted war criminals (as a general class) probably shouldn’t be trusted with the running of a country. To be convicted of war crimes proves that you are likely to flagrantly violate international norms. While people can change, past behaviour remains the best predictor of future behaviour. Therefore, it makes sense to try and remove any right war criminals might otherwise have to hold public office in a way that is extremely difficult to reverse. This could take the form of constitutional amendments that require all victimized countries to consent to each individual war criminal who wishes to later hold public office, or other similarly difficult-to-circumvent mechanisms.
This is one area where the International Criminal Court (ICC) could prove its worth. If the ICC is able to deliver justice and avoid bowing to political pressure in any of its cases, then the obvious way of dealing with war criminals would be to send them to the ICC.
The section of the book that covers the war crimes trials and post-war Japan is called “The Unexamined Life”. I think the title is apt. There’s no evidence that Hirohito ever truly grappled with his role in the war, whatever it was. At one point, in response to a question about his war responsibility, Hirohito even said: “I can’t answer that question because I haven’t thoroughly studied the literature in this field”. This answer would be risible even if Hirohito were completely blameless. If there was anyone who knew how much responsibility Hirohito bore for the war, it was the man himself.
In the constitution promulgated by the occupying Americans, Hirohito became a constitutional monarch in truth. Dr. Bix reports that Hirohito was miffed to find that he could no longer appoint prime ministers and cabinets. He adjusted poorly to his lack of role and spent most of the fifties and sixties hoping that he could be made politically useful again. This never happened, although some conservative prime ministers did go to him for advice from time to time. His one consolation was the extra-constitutional military and intelligence briefings he received, but this was a far cry from the amount of information he received during the war.
Ultimately, the only punishment that Hirohito faced was his irrelevance. That is, I think, too small a price to pay for launching (or at the very least, approving) wars of aggression that killed millions of people.
The last section of the book also includes the only flaw I noticed: Dr. Bix cites a poll where 57% of the population (of Japan) thought Hirohito bore war responsibility or were unsure whether he did. Dr. Bix goes on to claim that this implies that Hirohito’s evasive answers were out of step with the opinion of the majority of the Japanese population. I think (although I can’t prove it; the original source is in Japanese) that this probably obscures the truth: a majority that includes the unsure is weak evidence of any consensus against Hirohito.
This shades into the larger issue of trust. How much should I trust Dr. Bix? He obviously knows a lot more about Hirohito than I do and he can speak and read Japanese (I cannot). This makes this book more authoritative than previous books by Americans that relied entirely on translations of Japanese scholarship, but it also makes verifying his sources more difficult.
On the whole, this has left me somewhat unsatisfied. I’m convinced that Hirohito was more than a harmless puppet leader. I’m also convinced he didn’t wield absolute power. By Dr. Bix’s own admission, he very often acted contrary to his own wants. For me, this doesn’t jibe with autocratic power. My best interpretation of Dr. Bix’s research is that Hirohito was an influential member of one organ of the Japanese state. He wielded significant but not total influence over national policy. I do not believe that Hirohito was as free to act as Dr. Bix claims he was.
I do believe Dr. Bix when he says that Hirohito’s role expanded as the war went on. If nothing else, he became the most experienced of all of Japan’s leaders at the same time as the myth of his divinity and benevolence became most entrenched. Furthermore, Hirohito and his retinue were most free to act when the army and navy were at loggerheads. This became more and more common after 1937.
Dr. Bix actually posits that these disagreements were the ultimate reason that Hirohito could grasp real power. The cabinet (which included civilian, army, and navy decision makers) was supposed to work by consensus. Where there were deep divisions, they would paper over them with vague statements and false consensus, without engaging in the give and take of negotiation that real consensus requires. Since everything was done in Hirohito’s name, he and the court group could twist the vague statements towards their preferred outcomes – all the while pretending Hirohito was a mere constitutional monarch promulgating decisions based on the advice of the cabinet.
This system was horribly inefficient and at least one person tried to reform it. Unfortunately, their “reform” would have led to a military dictatorship. Here’s a quote about the troubles facing one of the pre-war prime ministers:
“Right-wing extremists and terrorists repeatedly assailed him verbally, while the leading reformer in his own party, Mori, sought to break up the party system itself and ally with the military to create a new, more authoritarian political order.” (Page 247)
I’m used to seeing “reformer” applied only positively, but if you’re willing to look at reform as “the process of making the government run more effectively”, I suppose that military dictatorships are one type of reform. I think it’s good to be reminded that efficiency is not the only axis on which we should judge a government. It may be quite reasonable to oppose reforms that will streamline the government when those reforms come at the cost of other values, like fairness, transparency, and freedom of speech.
It’s my habit to try and draw lessons from the history I read. Because Dr. Bix’s book covers so troubled a time, I did not find it lacking in lessons. But I had hoped for something more than lessons from the past. I had hoped to know definitively how much of the fault for Japan’s role in World War II should lie at the feet of Hirohito.
Despite this being the whole purpose of the book, I was left disappointed. It is almost as if Dr. Bix let his indignation at Hirohito’s escape from any and all justice get the better of him. Hirohito and the Making of Modern Japan tries to pin almost every misdeed during Hirohito’s reign on the emperor personally. In overreaching, it left me unsure of how much of it to believe. I cannot discount it entirely, but I also cannot accept it wholesale.
It doesn’t help that Dr. Bix paints a portrait of the emperor so intimate as to humanize him. While Dr. Bix seems to want us to view Hirohito as evil, I could not help but see him as a flawed man following a flawed morality. As far as I can tell, Hirohito would have been happiest as a moderately successful marine biologist. But marine biology is not what was asked of him and unfortunately, he did what he saw as his duty.
Here I again wish to make a comparison with Eichmann in Jerusalem. Had Hirohito not been singularly poor at introspection, or had he not had “an inability to think, namely, to think from the standpoint of somebody else” (while Hannah Arendt said this about Adolf Eichmann, I think it applies equally well to Hirohito), Hirohito could have risen above the failings in his moral education and acted as a brake on Japanese militarism.
Hirohito did not do this. And because of his actions (and perhaps more importantly, his inaction), terrible things came to pass.
The possibility for individuals to do terrible things despite having no malice in their hearts is what caused Hannah Arendt to coin the phrase “the banality of evil”. Fifty years later, we still expect the worst deeds humans can commit to only come from the hands of monsters. There is certainly security in that assumption. When we believe terrible things can only be done deliberately and with malice, we allow ourselves to ignore the possibility that we may be involved in unjust systems or complicit in terrible deeds.
It’s only when we remember that terrible things require no malice, that one may do them even while being a normal person or while acting in accordance with the values they were raised with, that we can properly introspect about our own actions. It is vital that we all take the time to ask “are we the baddies?” and ensure that our ethical systems fail gracefully.
Obviously, Hirohito did none of this. That’s all on him. No matter how you cut the blame pie, Hirohito did nothing to stop the Rape of Nanjing, the attack on Pearl Harbour, the Bataan Death March, and the forced mass suicides of Okinawans. Hirohito demonstrated that he had the power to order a surrender. Yet he did not do this when the war was all but lost and Japanese cities were bombed daily. He delayed surrender time and again, hoping for some other option that would allow him to cling to whatever scraps of power he had.
For all that Dr. Bix failed to convince me that Hirohito was one of the primary architects of the war, he did convince me that Hirohito bore a large measure of responsibility. I agree that Hirohito should have been a Class A war criminal. I agree that Hirohito escaped all but the faintest touch of justice for his role in the war. And I agree that Hirohito’s escape from justice has made it more difficult for Japan to accept the guilt it should bear for its wars of aggression.
Yonatan Zunger has an article on Medium claiming that the immigration executive order from last Friday is the “trial balloon” for a planned Trump coup. I don’t think this is quite correct. While I no longer have much confidence that America will still be a democracy in 50 years, I don’t think Trump will be its first dictator.
I do think the first five points in Dr. Zunger’s analysis are fairly sound. I’m not sure if they are true, but they’re certainly plausible. It is true, for example, that it is unusual to file papers for re-election so quickly. Barack Obama didn’t file his re-election form until 2011. Whether this means that Trump will use campaign donations to enrich his family remains to be seen, but the necessary public disclosures of campaign expenses make this falsifiable. Give it a year and we’ll know.
Unfortunately, the 6th point is much more speculative. Dr. Zunger believes that it is likely that Trump received a large share in the Russian gas giant Rosneft in payment for winning the election and (presumably) lifting Russian sanctions in the future. Dr. Zunger relies on a recently announced and difficult-to-trace sale of 19.5% of Rosneft, which is close to the 19% claimed in the Steele papers (which should be the first red flag). But the AP article he links sheds serious doubt on this claim. It makes clear that it isn’t the whole 19.5%, €10 billion stake in Rosneft that has disappeared, only a “small” €2 billion portion of it. Between this contradiction and the inherent unreliability of the Steele papers, I’m disinclined to believe that this represents a real transfer of wealth from Russia to Trump.
This point, although relatively minor, represents an inflection point in Dr. Zunger’s post, where it shifts from insightful analysis to shaky speculation.
As Dr. Zunger goes into more detail on Trump’s supposed next step, incongruities pile up.
If Trump is planning a coup and building a parallel power structure, why did he pick General Mattis as his SecDef? The military is one of the most popular institutions in America. The military was more popular than the presidency even when the relatively popular Obama was president. You’d better bet it’s more popular than Trump. This gives the military moral, as well as practical, authority to stop any Trump coup. Given that there’s no way that Trump will be more popular with the soldiers and officers who actually make up the army than Gen. Mattis is, the general is in an excellent position to shut down any coup attempt cold.
Gen. Mattis could stop a coup, but it’s his character that suggests he would. He has a backbone made of solid steel and seems to be far more loyal to America than he is to the president. See as evidence his phone calls to NATO members and support for maintaining the Iran deal.
The DHS isn’t plausible as a parallel power structure. Sure, 45,000 employees sounds like a lot, until you realize that the total staff of the NYPD is almost 50,000. Even in a scenario where the army stays neutral, the DHS would be hard pressed to police New York, let alone the whole country.
I also don’t think preparation for a coup is the only reason to ignore court orders. In Canada, we saw the Prime Minister routinely oppose the courts, culminating with a nasty series of public barbs directed at the Chief Justice of the Supreme Court. This wasn’t a prelude to Mr. Harper trying to seize power. It was the natural result of a perennially besieged and unpopular head of government fighting to pass an agenda despite heavy opposition from most civil society groups. I would contend that the proper yardstick to measure Trump against here is FDR. If Trump goes beyond what FDR did, we’ll have cause to worry.
All this is to say, if Trump is planning a coup, he isn’t being very strategic about it. That said, if he found some way to ditch General Mattis for someone more compliant, I would take the possibility of a coup much more seriously.
Instead of viewing Trump as a Caesar-in-waiting, we should think of him as analogous to Gaius Marius. Marius never seized power, but he did violate basically every conventional norm of Roman government (he held an unprecedented seven consulships and began the transformation of the legions into private armies loyal to their generals). Gaius Marius made the rise of dictators almost inevitable, but he was not himself a dictator.
Like America, Rome in the 1st century BCE found itself overextended, governing and protecting a large network of tributary states and outright colonies. The Roman constitutional framework couldn’t really handle administration on this scale. While year-long terms are a sensible way to run a city state, they don’t work with a continent-spanning empire.
In addition to the short institutional memory and lack of institutional expertise that strict term limits guaranteed, Rome ran up against a system of checks and balances that made it incredibly hard to get anything done.
Today, America is running up against an archaic system of checks and balances. America has fallen to “government by kluge”, a state of affairs that has seriously degraded output legitimacy. From Prof. Joseph Heath on Donald Trump:
In response to the impossibility of reform, the American system has slowly evolved into what Steven Teles calls a kludgeocracy. Rather than enacting reforms, people have found “work-arounds” to the existing system, ways of getting things done that twist the rules a bit, but that everyone accepts because it’s easier than trying to change the rules. (This is why, incidentally, those who hope that the “separation of powers” will constrain President Trump are kidding themselves – the separation of powers in the U.S. is severely degraded, as an accumulated effect of decades of “work arounds” or kludges that violate it.)
Because of this, the U.S. government suffers a massive shortfall in “output legitimacy,” in that it consistently fails to deliver anything like the levels of competent performance than people in wealthy, advanced societies expect from government. (Anyone who has ever dealt with the U.S. government knows that it is uniquely horrible experience, unlike anything suffered by citizens of other Western democracies.) Furthermore, because of the dysfunctional legislative branch, nothing ever gets “solved” to anyone’s satisfaction. All that Americans ever get is a slow accumulation of more kludges (e.g. the Affordable Care Act, the Clean Power Plan).
Most people, however, do not think institutionally. When they see bad performance from government, they blame the actors that they see readily at hand. And their response then is to send in new people, committed to changing things. For decades they’ve been doing this, and yet nothing ever changes. Why? Because the problems are institutional, outside the control of individual legislators. But how do people interpret this lack of change? Many come to the conclusion that the person they sent in to fix things got coopted, or wasn’t tough enough, or wasn’t up to the job. And so they send in someone tougher, more radical, more vociferous in his or her commitment to changing things. When that doesn’t work out, they send in someone even more radical.
A vote for Donald Trump is a natural end-point of this process.
For Rome, Marius was the end-point. He held more power, for longer, than anyone who came before. The crucial distinction between him and those who came after, however, was that he acquired this power through legitimate means. Still, in order to govern effectively, he was forced to apply more kluges to the already disintegrating Roman constitution. It couldn’t hold up.
The end result of Marius was Sulla, who tried to bring Rome back to its “old ways” and repair the damage to the constitution. Interestingly, he did this almost entirely through extra-constitutional means. His reforms failed, although not just because of how he did them. Sulla tried to remove the kluges from the underlying system, but the result was an even more unworkable system.
Sulla was followed by the Triumvirate, a private power sharing agreement that divided up the empire and allowed effective governance at the cost of the constitution. The triumvirate led to civil war and dictatorship. And a bureaucracy capable of running the empire.
Looking back at history, I see three ways forward for America:
1. It can slowly become an autocracy, which will break the gridlock in Washington at the cost of democracy.
2. It can abandon its role as the world’s hegemon, retreat to isolationism, and see if its government is capable of handling the strain of this reduced burden.
3. It can radically change its system of government. A parliamentary system (whether first past the post or mixed member proportional) based on the confidence of the house would probably prove much more responsive to the crises America faces.
I no longer believe in the great man theory of history. Instead, I’ve begun to see history as a series of feedback loops between people, institutions, and places. Geopolitical realities can exert as much pressure for change on institutions as people can.
If we didn’t have Trump this year, we’d have someone like him in four years or eight. The stresses on the American system of government are such that someone had to emerge as the “natural endpoint” of failed reform. But I don’t think it’s this person’s fate to become America’s first dictator. That part is reserved for a later actor and there is still hope that the role can be written out before they step onto the stage.
[1] I’m a Bayesian, so I’ll quite happily bet with anyone who believes otherwise.
[2] For more information on the transition of Rome into a dictatorship and the forces of empire that drove that transformation, I recommend SPQR by Prof. Mary Beard.
[3] I’m certainly not opposed to checks and balances, but they can end up doing more harm than good if they make the act of governing so difficult that they end up ignored.
I just finished reading SPQR, by Professor Mary Beard. As a history of Rome, it’s the opposite of what I expected. It spends little time on individual deeds; there is no great man history here. More surprising, there is very little military history. As part of an audience taught to expect the history of Rome to be synonymous with the history of its military, I was shocked.
This book is perhaps best understood as a conversation with Romans masquerading as a political and social history of Rome. Prof. Beard sums this up in her epilogue: “I no longer think, as I once naively did, that we have much to learn directly from the Romans… but I am more and more convinced that we have an enormous amount to learn – as much about ourselves as about the past – by engaging with the history of the Romans.”
Prof. Beard starts her history with the foundational myths of Rome: Romulus and Remus, the Rape of the Sabine Women, and the Seven Kings. She looks at the themes of these myths and turns the speculations of ancient historians on their head. Rome was not beset by conflicts between powerful men because of a lingering proclivity for fratricide inherent to the successors of Romulus. The story of Romulus resonated and was passed down because Rome was beset by conflicts between powerful men. She shows us how this story was shaped by current events in every retelling, highlighting the differences in the versions told in the first century BCE and the first century CE.
This isn’t the only relationship Prof. Beard calls us to rethink. Ancient writers praised Romulus’s vision for Rome: somehow he picked the perfect spot for the city. We now know that Rome was not “founded” in the mythological sense, that it did not begin as barren hills colonized by a single pair of brothers. But Rome’s location shaped Rome’s development such that the location was indeed an ideal spot for the city Rome became. The spot seemed perfect in retrospect because it had created a people who would view it as perfect.
Prof. Beard later reminds us that Rome’s expansion wasn’t really planned either. While Hollywood may encourage us to think of Romans as motivated by a manifest destiny that caused them to attempt to rule the whole world, the historical reality was rather different. There was no cabal of senators in 300 BCE with a master plan for Roman expansion. Rome’s early expansion was done piecemeal and by accident. It was always in response to some crisis, to protect the commercial interests of some wealthy Roman, or because some consul wanted to be sure of a triumph when he returned to Rome. Manifest destiny came later, after Rome was already a far-flung empire.
And this far-flung empire was as responsible for shaping the politics of Rome as the politics of Rome were for shaping the empire. Republican institutions could not cope with the challenges of empire. There was a century of chaos as the empire grew beyond its ability to be governed and then a realignment of the government with the emperor at its head that ensured 200 years of stability and (internal) peace.
Of the traditions the emperor usurped, the most interesting was the right to be a voice for the people. Prof. Beard talks about the challenges of representation the Romans faced, challenges familiar to us even today, namely: what is the purpose of legislators? Should they be a conduit for the voices of their constituents? Or should they try to do what is best for their constituents? A source of instability in the first century BCE was populist politicians who cleaved to the first view.
By ending elections, the emperors didn’t disenfranchise the people as much as they broke the bonds between the populist politicians and the people. The emperor (in theory and often in practice) stood up for the common male citizen of the empire. The elites, on the other hand, were left to derive favour and legitimacy solely from the emperor. They lost their connection to the people and therefore lost any ability to challenge the emperor for popular support.
We see a similar thing today in one party rule (where dictators often style themselves “Protector of the People”) or in “democratic dictatorships”. These dictators try and set up a myth that only they will look out for the majority of people. They’ll claim that others can’t be trusted because they are in the sway of special interest groups and economic, racial, sexual, or religious minorities (the Jews are a perennial favourite here).
Speaking of racial and religious minorities, Prof. Beard covers them in some detail. She reminds her readers that Rome was a cosmopolitan and diverse city. Imagining classical people as monolithically white is just as much a mistake as imagining their buildings that way. But Prof. Beard cautions us to avoid swinging our perceptions too far in the other direction. Rome was unusually welcoming of foreigners for a classical culture, but it still had discrimination based on provincial origin. Provincials would never be truly Roman in the eyes of all of the senate. This didn’t stop some of them becoming emperor, including Septimius Severus from North Africa, but it did mean they would have faced snide remarks.
I wish I could describe how provincials below senatorial rank were treated, but Prof. Beard has little to say about this. I don’t think it’s her fault. She is explicit that the history we have is largely the history of the elites. It’s their letters and proclamations, monuments and mausoleums from which we gather the majority of our understanding of life in ancient Rome. From the common people, we must make do with far less evidence.
Evidence is a common thread throughout this book. It is in some places as much a work of historiography as history. Prof. Beard cautions us against too-good-to-be-true stories (they probably are) and against being too eager to make even simple conclusions, like believing that a certain bust is actually of a certain historical figure. She also drove home, in a way I had never experienced before, the sheer paucity of evidence we have for Roman life and deeds before the mid-200s BCE.
Nowhere do evidence and its paucity become as important as in understanding the emperors. She describes autocracy as “in a sense, an end of history… there was no fundamental change in the structure of Roman politics, empire or society between the end of the first century BCE and the end of the second century CE”. And given this, she devotes at most a few pages to the combined individual achievements of the first 14 emperors (the book only covers up to 212 CE).
Instead of giving the normal account of the lives of the emperors, their battles and their victories, Prof. Beard focuses on the structure of the empire. She takes advantage of its relatively fixed nature during the rule of the first 14 emperors to go into detail on various facets of life in the empire. What was it like in the provinces? For the urban poor? The provincial elites? The slaves? The women? Many of these people left little historical mark, but Prof. Beard tries her best to give them some voice.
Prof. Beard views the emperors as largely interchangeable. Instead of fixating on “good” and “bad” emperors and turning their lives into moral lessons, she looks at what caused emperors to be described as good or bad in the first place. She believes it is all about legitimacy. When the succession was orderly, the successor could draw legitimacy from his predecessor, so it was in his interest to trumpet his predecessor’s virtues (and imply that as the rightful successor, he too possessed them). When the succession was disorderly (say, as the result of assassination), then this route to legitimacy was closed and the new emperor had to instead frame his reign as a break with a worse past. His predecessor would be smeared to turn the irregularity of his succession to the throne into an advantage.
As evidence, Prof. Beard points out that most of the vices found in “bad” emperors (from infidelity to wanton murder of senators) can also be found in the “good” as long as you read their biographies closely. She contends that this shows a difference in what is focused on, not a difference in behaviour. She points out how imposters to Nero would periodically show up in the provinces long after his death. Hucksters wouldn’t impersonate a universally reviled man.
Even if this isn’t true, Prof. Beard has one final beef with the theory of good and bad emperors: only the senate really cared. We don’t see any historical evidence of incompetence gross enough to touch the empire; it did just as well under Nero as it did under Hadrian. So even if the emperor was as liable to kill senators as talk with them, this was largely a problem for a few of the very wealthiest Romans in Rome.
To the common people in Rome it wouldn’t matter if the emperor was a saint or a psychopath, because they would never interact with him. This was doubly true for the people in the provinces. The plight of the poor was just as bad under Marcus Aurelius as it was under Nero.
Had you told me at the start that the author believed we had nothing to learn directly from the Romans, I probably wouldn’t have started this book. That would have been a grave mistake. I’m left with a deeper understanding of Roman history, the challenges posed in constructing it, and the challenges Roman history poses to us in the present day. I am left prepared to more readily question the beatification of leaders and foundational myths. I am left more alert to the nuances of people power in populism. And I’m left with a colossal respect for Professor Beard’s skill both as a historian and as a popularizer of history.