Last week I explained how poor decisions by central bankers (specifically failing to spur inflation) can make recessions much worse and lead to slower wage growth during recovery.
(Briefly: inflation during recessions reduces the real cost of payroll, cutting business expenses and making firing people unnecessary. During a recovery, it makes hiring new workers cheaper and so leads to more being hired. Because central bankers failed to create inflation during and after the Great Recession, many businesses are scared of raising salaries. They believe (correctly) that this will increase their payroll expenses to the point where they’ll have to lay many people off if another recession strikes. Until memories of the last recession fade or central bankers clean up their act, we shouldn’t expect wages to rise.)
Now I’d like to expand on an offhand comment I made about the minimum wage last week and explore how it can affect recovery, especially if it’s indexed to inflation.
The minimum wage represents a special case when it comes to pay cuts and layoffs in recessions. While it’s always theoretically possible to convince people to take a pay cut rather than a layoff (although in practice it’s mostly impossible), this option isn’t available for people who make the minimum wage. It’s illegal to pay them anything less. If bad times strike and business is imperiled, people making the minimum wage might have to be laid off.
I say “might”, because when central bankers aren’t proving useless, inflation can rescue people making the minimum wage from being let go. Inflation makes the minimum wage relatively less valuable, which reduces the cost of payroll relative to other inputs and helps to save jobs that pay minimum wage. This should sound familiar, because inflation helps people making the minimum wage in the exact same way it helps everyone else.
Because of increasingly expensive housing and persistently slow wage growth, some jurisdictions are experimenting with indexing the minimum wage to inflation. This means that the minimum wage rises at the same rate as the cost of living. Most notably (to me, at least), this group includes my home province of Ontario.
When the minimum wage is tied to inflation, recessions can become especially dangerous and drawn out.
With the minimum wage rising in lockstep with inflation, any attempts to decrease payroll costs in real terms (that is to say: inflation adjusted terms) is futile to the extent that payroll expenses are for minimum wage workers. Worse, people who were previously making above the minimum wage and might have had their jobs saved by inflation can be swept up by an increasingly high minimum wage.
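The mechanics here can be sketched in a few lines of Python (the wage and inflation numbers are hypothetical, purely for illustration): inflation quietly lowers the real cost of a frozen nominal wage, while an indexed minimum wage holds its real cost constant.

```python
# Illustrative sketch with hypothetical numbers: how inflation erodes the
# real cost of a frozen nominal wage but not of an inflation-indexed one.

def real_wage(nominal_wage: float, price_level: float) -> float:
    """Convert a nominal wage into constant (inflation-adjusted) dollars."""
    return nominal_wage / price_level

INFLATION = 0.02       # assumed 2% annual inflation
frozen_wage = 15.00    # hypothetical wage frozen in nominal terms
indexed_wage = 15.00   # hypothetical minimum wage pegged to inflation

price_level = 1.0
for year in range(5):
    price_level *= 1 + INFLATION
    indexed_wage *= 1 + INFLATION  # the peg raises it in lockstep

# After five years, the frozen wage costs meaningfully less in real terms...
print(round(real_wage(frozen_wage, price_level), 2))   # prints 13.59
# ...while the indexed wage costs exactly what it always did.
print(round(real_wage(indexed_wage, price_level), 2))  # prints 15.0
```

Every real-terms saving inflation would otherwise deliver is cancelled, dollar for dollar, wherever the indexed minimum wage applies.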
This puts central bankers in a bind. As soon as the minimum wage is indexed to inflation, inflation is no longer a boon to all workers. Suddenly, many workers can find themselves in a “damned if you do, damned if you don’t” situation. Without inflation, they may be too expensive to keep. With it, they may be saved… until the minimum wage comes for them too. If a recession goes on long enough, only high-income workers would be spared.
In addition, minimum wage (or near-minimum wage) workers who are laid off during a period of higher inflation (and in this scenario, there will be many) will suffer comparatively more, as their savings get exhausted even more quickly.
Navigating these competing needs would be an especially tough challenge for certain central banks like the US Federal Reserve – those banks that have dual mandates to maintain stable prices and full employment. If a significant portion of the US ever indexes its minimum wage to inflation, the Fed will have no good options.
It is perhaps darkly humorous that central banks, which bear an unusually large parcel of the blame for our current slow wage growth, stand to face the greatest challenges from the policies we’re devising to make up for their past shortcomings. Unfortunately, I think a punishment of this sort is rather like cutting off our collective nose to spite our collective face.
There are simple policies we could enact to counter the risks here. Suspending any peg to inflation during years that contain recessions (in Ontario at least, the minimum wage increase due to inflation is calculated annually) would be a promising start. Wage growth after a recession could be ensured with a rebound clause, or better yet, the central bank actually doing its job properly.
I am worried about the political chances (and popularity once enacted) of any such pragmatic policy though. Many people respond to recessions with the belief that the government can make things better by passing the right legislation – forcing the economy back on track by sheer force of ink. This is rarely the case, especially because the legislation that people have historically clamoured for when unemployment is high is the sort that raises wages, not the sort that lowers them. This is a disaster when unemployment threatens because of too-high wages. FDR is remembered positively for his policy of increasing wages during the Great Depression, even though this disastrous decision strangled the recovery in its crib. I don’t expect any higher degree of economic literacy from people today.
To put my fears more plainly, I worry that politicians, faced with waning popularity and a nipping recession, would find allowing the minimum wage to be frozen too much of a political risk. I frankly don’t trust most politicians to follow through with a freeze, even if it’s direly needed.
Minimum wages are one example of a tradeoff we make between broad access and minimum standards. Do we try and make sure everyone who wants a job can have one, or do we make sure people who have jobs aren’t paid too little for their labour, even if that hurts the unemployed? As long as there’s scarcity, we’re going to have to struggle with how we ensure that as many people as possible have their material needs met, and that involves tradeoffs like this one.
But when we’re making these kinds of compassionate decisions, we need to look at the risks of whatever systems we choose. Proponents of indexing the minimum wage to inflation haven’t done a good job of understanding the grave risk it poses to the health of our economy and perhaps most of all, to the very people they seek to help. In places like Ontario, where the minimum wage is already indexed to inflation, we’re going to pay for their lack of foresight next time an economic disaster strikes.
I write today about a speech that was once considered the greatest political speech in American history. Even today, after Reagan, Obama, Eisenhower, and King, it is counted among the very best. And yet this speech has passed from the history we have learned. Its speaker failed in his ambitions and the cause he championed is so archaic that most people wouldn’t even understand it.
I speak of Congressman William Jennings Bryan’s “Cross of Gold” speech.
William Jennings Bryan was a congressman from Nebraska, a lawyer, a three-time Democratic candidate for president (1896, 1900, 1908), the 41st Secretary of State, and oddly enough, the lawyer for the prosecution at the Scopes Monkey Trial. He was also a “silver Democrat”, one of the insurgents who rose to challenge Democratic President Grover Cleveland and the Democratic party establishment over their support for gold over a bimetallic (gold plus silver) currency system.
The dispute over bimetallic currency is now more than a hundred years old and has been made entirely moot by the floating US dollar and the post-Bretton Woods international monetary order. Still, it’s worth understanding the debate about bimetallism, because the concerns Bryan’s speech raised are still concerns today. Once you understand why Bryan argued for what he did, this speech transforms from dusty history into still-relevant insights into live issues that our political process still struggles to address.
When Alexander Hamilton was setting up a currency system for the United States, he decided that there would be a bimetallic standard. Both gold and silver currency would be issued by the mint, with the US Dollar specified in terms of both metals. Any citizen could bring gold or silver to the mint and have it struck into coins (for a small fee, which covered operating costs).
Despite congressional attempts to tweak the ratio between the metals, problems often emerged. Whenever gold was worth more by weight than it was as currency, it would be bought using silver and melted down for profit. Whenever the silver dollar was undervalued, the same thing happened to it. By 1847, silver was worth so much more as metal than as coinage that silver coins had virtually disappeared from circulation and many people found themselves unable to complete low-value transactions.
Congress responded by debasing silver coins, which increased the supply of coins; for a brief time, there was a stable equilibrium where people actually could find and use silver coins. Unfortunately, the equilibrium didn’t last and the discovery of new silver deposits swung things in the opposite direction, leading to fears that people would use silver to buy gold dollars and melt them down outside the country. Since international trade was conducted in gold, it would have been very bad for America had all the gold coins disappeared.
Congress again responded, this time by burying the demonetization of several silver coins (including the silver dollar) in a bill that was meant to modernize the mint. The logic here was that no one would be able to buy up any significant amount of gold if they had to do it in nickels. Unfortunately for Congress, a depression happened right after they passed the bill.
Some people blamed the depression on the change in coinage and popular sentiment in some corners became committed to the re-introduction of the silver dollar.
The silver supplies that caused this whole fracas hadn’t gone anywhere. People knew that re-introducing silver would have been an inflationary measure, as the statutory amount of silver in a dollar would have been worth about $0.75 in gold-backed currency, but they largely didn’t care – or viewed that as a positive. The people clamouring for silver also didn’t conduct much international trade, so they didn’t mind if silver currency drove out gold and made trade difficult.
There were attempts to remonetize the silver dollar over the next twenty years, but they were largely unsuccessful. A few mine owners found markets for their silver at the mint when law demanded a series of one-off runs of silver coins, but Congress never restored bimetallism to the point that there was any significant silver in circulation – or significant inflation. Even these limited silver-minting measures were repealed in 1893, which left the United States on a de facto gold standard.
For many, the need for silver became more urgent after the Panic of 1893, which featured everything a good Gilded Age panic normally did – bank runs, failing railways, declines in trade, credit crunches, a crash in commodity prices, and the inevitable run on the US gold reserves.
The commodity price crash hit farmers especially hard. They were heavily indebted and had no real way to pay it off – unless their debts were reduced by inflation. Since no one had found any large gold deposits anywhere (the Klondike gold rush didn’t actually produce anything until 1898 and the Fairbanks gold rush didn’t occur until 1902), that wasn’t going to happen on the gold standard. The Democratic grassroots quickly embraced bimetallism, while the party apparatus remained supporters of the post-1893 de facto gold standard.
This was the backdrop for Bryan’s Cross of Gold speech, which took place in the summer of 1896 at the Democratic National Convention in Chicago. He was already a famed orator and had been quietly petitioning members of the party for the presidential nomination, but his plans weren’t well known. He managed to go almost the entire convention without giving a speech. Then, once the grassroots had voted out the old establishment and begun hammering out the platform, he arranged to be the closing speaker representing the delegates (about 66% of the total) who supported official bimetallism.
The convention had been marked by a lack of any effective oratory. In a stunning ten-minute speech (that stretched much longer because of repeated minutes-long interruptions for thunderous applause) Bryan singlehandedly changed that and won the nomination.
And this whole thing, the lobbying before the convention and the carefully crafted surprise moment, all of it makes me think of how effective Aaron Swartz’s Theory of Change idea can be when executed correctly.
Theory of Change says that if there’s something you want to accomplish, you shouldn’t start with what you’re good at and work towards it. You should start with the outcome you want and keep asking yourself how you’ll accomplish it.
Bryan decided that he wanted America to have a bimetallic currency. Unfortunately, there was a political class united in its opposition to this policy. That meant he needed a president who favoured it. Without the president, you need two-thirds of both the House and the Senate onboard to override a veto, and that clearly wasn’t happening with the country’s elites so hostile to silver.
Okay, well how do you get a president who’s in favour of restoring silver as currency? You make sure one of the two major parties nominates a candidate in favour of it, first of all. Since the Republicans (even then the party of big business) weren’t going to do it, it had to be the Democrats.
That means the question facing Bryan became: “how do you get the Democrats to pick a presidential candidate that supports silver?”
And this question certainly wasn’t easy. Bryan on his own couldn’t guarantee it, because it required delegates at least sympathetic to the idea. But the national mood made that seem likely, as long as there was a good candidate all of the “silver men” could unite around.
So, Bryan needed to ensure there was a good candidate and that that candidate got elected. Well, that was a problem, because neither of the two leading silver candidates was very popular. Luckily, Bryan was a Democrat, a former congressman, and kind of popular.
I think this is when the plan must have crystallized. Bryan just needed to deliver a really good speech to an already receptive audience. With the cachet from an excellent speech, he would clearly become the choice of silver supporting Democrats, become the Democratic party presidential candidate, and win the presidency. Once all that was accomplished, silver coins would become money again.
The fantastic thing is that it almost worked. Bryan was nominated on the Democratic ticket, absorbed the Populist party into the Democratic party to prevent a vote split, and came within 600,000 votes of winning the presidency. All because of a plan. All because of a speech.
So, what did he say?
Well, the full speech is available here. I do really recommend it. But I want to highlight three specific parts.
A Too Narrow Definition of “Business”
We say to you that you have made the definition of a business man too limited in its application. The man who is employed for wages is as much a business man as his employer; the attorney in a country town is as much a business man as the corporation counsel in a great metropolis; the merchant at the cross-roads store is as much a business man as the merchant of New York; the farmer who goes forth in the morning and toils all day—who begins in the spring and toils all summer—and who by the application of brain and muscle to the natural resources of the country creates wealth, is as much a business man as the man who goes upon the board of trade and bets upon the price of grain; the miners who go down a thousand feet into the earth, or climb two thousand feet upon the cliffs, and bring forth from their hiding places the precious metals to be poured into the channels of trade are as much business men as the few financial magnates who, in a back room, corner the money of the world. We come to speak of this broader class of business men.
In some ways, this passage is as much the source of the mythology of the American Dream as the inscription on the Statue of Liberty. Bryan rejects any definition of businessman that focuses on the richest in the coastal cities and instead substitutes a definition that opens it up to any common man who earns a living. You can see echoes of this paragraph in almost every speech by almost every presidential candidate.
Think of anyone you’ve heard running for president in recent years. Now read the following sentence in their voice: “Small business owners – like Monica in Texas – who are struggling to keep their business running in these tough economic times need all the help we can give them”. It works because “small business owners” has become one of the sacred cows of American rhetoric.
Bryan added this line just days before he delivered the speech. It was the only part of the whole thing that was at all new. And because this speech inspired a generation of future speeches, it passed into the mythology of America.
Trickle Down or Trickle Up
Mr. Carlisle said in 1878 that this was a struggle between “the idle holders of idle capital” and “the struggling masses, who produce the wealth and pay the taxes of the country”; and, my friends, the question we are to decide is: Upon which side will the Democratic party fight; upon the side of “the idle holders of idle capital” or upon the side of “the struggling masses”? That is the question which the party must answer first, and then it must be answered by each individual hereafter. The sympathies of the Democratic party, as shown by the platform, are on the side of the struggling masses who have ever been the foundation of the Democratic party. There are two ideas of government. There are those who believe that, if you will only legislate to make the well-to-do prosperous, their prosperity will leak through on those below. The Democratic idea, however, has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.
Almost a full century before Reagan’s trickle-down economics, Democrats were taking a stand against that entire world-view. Through all its changes – from the party of slavery to the party of civil rights, from the party of the Southern farmers to the party of “coastal elites” – the Democratic party has always viewed itself as hewing to this one simple principle. Indeed, the core difference between the Republican party and the Democratic party may be that the Republican party views the role of government to “get out of the way” of the people, while the Democratic party believes that the job of government is to “make the masses prosperous”.
A Cross of Gold
Having behind us the producing masses of this nation and the world, supported by the commercial interests, the laboring interests, and the toilers everywhere, we will answer their demand for a gold standard by saying to them: “You shall not press down upon the brow of labor this crown of thorns; you shall not crucify mankind upon a cross of gold.”
This is perhaps the best ending to a speech I have ever seen. Apparently at the conclusion of the address, dead silence endured for several seconds and Bryan worried he had failed. Two police officers in the audience were ahead of the curve and rushed Bryan – so that they could protect him from the inevitable crush.
Bryan turned what could have been a dry, dusty, nitty-gritty issue into the overriding moral question of his day. In fact, by co-opting the imagery of the crown of thorns and the cross, he tapped into the most powerful vein of moral imagery that existed in his society. Invoking the cross, the central mystery and miracle of Christianity, cannot help but put an issue (in a thoroughly Christian society) on a moral footing, as opposed to an intellectual one.
This sort of moral rather than intellectual posture is a hallmark of any insurgency against a technocratic order. Technocrats (myself among them!) like to pretend that we can optimize public policy. It is, to us, often a matter of just finding the solution that empirically provides the greatest good to the greatest number of people. Who could be against that?
But by presupposing that the only moral principle is the greatest good for the greatest number, we obviate moral contemplation in favour of tinkering with numbers and variables.
(The most cutting critique of utilitarianism I’ve ever seen delivered was: “[These problems are] seen in the light of a technical or practical difficulty and utilitarianism appeals to a frame of mind in which technical difficulty, even insuperable technical difficulty, is preferable to moral unclarity, no doubt because it is less alarming” – a snide remark by the great British ethicist Sir Bernard Williams in his half of Utilitarianism: For and Against.)
This avoiding-the-question-so-we-can-tinker is a stance that can provoke a backlash like Bryan’s. Leaving aside entirely the difficulty of truly knowing which policies will have “good” results, there’s the uncomfortable truth that not every policy is positive sum. Even positive sum policies can hurt people. Bryan ran for president because questions of monetary policy aren’t politically neutral.
The gold standard, for all the intellectual arguments behind it, was hurting people. Maybe not a majority of people, but people nonetheless. There’s a whole section of the speech where Bryan points out that the established order cannot just say “changes will hurt my business”, because the current situation was hurting other people’s businesses too.
It is very tempting to write that questions of monetary policy “weren’t” politically neutral. After all, there’s a pretty solid consensus on monetary policy these days (well, except for the neo-Fisherians, but there’s a reason no one listens to them). But even (especially) a consensus among experts can be challenged by legitimate political disagreements. When the Fed chose to pull interest rates low as stimulus for the economy after 2008, it put the needs of people trying to find jobs over those of retired people who held their savings in safe bonds.
If you lower speed limits, you make roads safer for law abiding citizens and less safe for people who habitually speed. If you decriminalize drugs, you protect rich techies who microdose on LSD and hurt people who view decriminalization as license to dabble in opiates.
Even the best intentioned or best researched public policy can hurt people. Even if you (like me) believe in the greatest good for the greatest number of people, you have to remember that. You can’t ever let hurting people be easy or unthinking.
Even though it failed in its original aim and even though the cause it promotes is dead, I want people to remember Bryan’s speech. I especially want people who hold power to remember Bryan’s speech. Bryan chose oratory as his vehicle, his way of standing up for people who were hurt by well-intentioned public policy. In 1896, I might have stood against Bryan. But that doesn’t mean I want his speech and the lessons it teaches to be forgotten. Instead, I view it as a call to action, a call to never turn away from the people you hurt, even when you know you are doing right. A call to not forget them. A call to try and help them too.
It is a truth universally acknowledged that an academic over the age of forty must be prepared to write a book talking about how everything is going to hell these days. Despite literally no time in history featuring fewer people dying of malaria, dying in childbirth, dying of vaccine preventable illnesses, etc., it is very much in vogue to criticise the foibles of modern life. Heck, Ross Douthat makes a full-time job out of it over at the New York Times.
Joseph Heath’s Enlightenment 2.0 follows the old Buddhist pattern of the Four Noble Truths. It claims that (1) there are problems with contemporary politics, (2) these problems arise because politics has become hostile to reason, (3) there is a way to have a second Enlightenment restore politics to how they were when they were ruled by reason, and (4) that way is to build politics from the ground up that encourage reason.
Now if you’re like me, you groaned when you read the bit about “restoring” politics to some better past state. My position has long been that there was never any shining age of politics where reason reigned supreme over partisanship. Take American politics. They became partisan quickly after independence, occasionally featured duels, and resulted in a civil war before the Republic even turned 100. America has had periods of low polarization, but these seem more incidental and accidental than the true baseline.
What really sets Heath apart is that he bothers to collect theoretical and practical support for a decline in reason. He’s the first person I’ve ever seen explain how reason could retreat from politics even as violence becomes less common and society becomes more complex.
His explanation goes like this: imagine that once every ten years politicians come up with an idea that helps them get elected by short-circuiting reason and appealing to baser instincts. It gets copied and used by everyone and eventually becomes just another part of campaigning. Over a hundred and fifty years, all of this adds up to a political environment that is specifically designed to jump past reason to baser instincts as soon as possible. It’s an environment that is actively hostile to reason.
We have some evidence of a similar process occurring in advertising. If you ever look at an old ad, you’ll see people trying to convince you that their product is the best. Modern readers will probably note a lot of “mistakes” in old ads. For example, they often admit to flaws in the general class of product they’re selling. They always talk about how their product fixes these flaws, but we now know that talking up the negative can leave people with negative affect. Advertising rarely mentions flaws these days.
Modern ads are much more likely to try and associate a product with an image, mood, or imagined future life. Cleaning products go with happy families and spotless houses. Cars with excitement or attractive potential mates.
In Heath’s view, one negative consequence of globalism is that all of the most un-reasonable inventions from around the world get to flourish everywhere and accumulate, in the same way that globalism has allowed all of the worst diseases of the world to flourish.
Heath paints a picture of reason in the modern world under siege in all realms, not just the political. In addition to the aforementioned advertising, Facebook tries to drag you in and keep you there forever. “Free to play” games want to take you for everything you’re worth and employ psychologists to figure out how. Detergent companies wreck your laundry machine by making it as hard as possible to measure the right amount of fabric softener.
(Seriously, have you ever tried to read the lines on the inside of a detergent cap? Everything, from the dark plastic to small font to multiple lines to the wideness of the cap is designed to make it hard to pour the correct amount of liquid for a single load.)
All of this would be worrying enough, but Heath identifies two more trends that represent a threat to a politics of reason.
First is the rise of Common Sense Conservatism. As Heath defines it, Common Sense Conservatism is the political ideology that elevates “common sense” to the principal political decision-making heuristic. “Getting government out of the way of businesses”, “tightening our belts when times are tight”, and “if we don’t burn oil someone else will” are some of the slogans of the movement.
This is a problem because common sense is ill-suited to our current level of civilizational complexity. Political economy is far too complicated to be managed by analogy to a family budget. Successful justice policy requires setting aside retributive instincts and acknowledging just how weak a force deterrence is. International trade is… I’ve read one newspaper article that correctly understood international trade this year and it was written by Paul fucking Krugman, the Nobel Prize winning economist.
As the built environment (Heath defines this as all the technology that now surrounds us) becomes more hostile to reason (think: detergent caps everywhere) and further from what our brains intuitively expect, common sense will give us worse and worse answers to our problems.
That’s not even to talk about coordination problems. Common Sense Conservatism seems inextricably tied to unilateralism and a competitive attitude (after all, it’s “common sense” that if someone else is winning, you must be losing). With many of the hardest problems facing us (global warming, AI, etc.) being coordination problems, Common Sense Conservatism specifically degrades the capacity of our political systems to respond to them.
The other problem is Jonathan Haidt. In practical terms, Haidt is much less of a problem than our increasingly hostile technology or the rise of Common Sense Conservatism, but he has spearheaded a potent theoretical attack on reason.
As I mentioned in my review of Haidt’s most important book, The Righteous Mind, Heath describes Haidt’s view of reason as “essentially confabulatory”. The driving point in The Righteous Mind is that a lot of what we consider to be “reason” is in fact post-facto justifications for our actions. Haidt describes his view as if we’re the riders on an elephant. We may think that we’re driving, but we’re actually the junior partner to our vastly more powerful unconscious.
(I’d like to point out that the case for elephant supremacy has collapsed somewhat over the past five years, as psychology increasingly grapples with its replication crisis; many studies Haidt relied upon are now retracted or under suspicion.)
Heath thought (even before some of Haidt’s evidence went the way of the dodo) that this was an incomplete picture and this disagreement forms much of the basis for recommendations made in Enlightenment 2.0.
Heath proposes a modification to the elephant/rider analogy. He’s willing to buy that our conscious mind has trouble resisting our unconscious desires, but he points out that our conscious mind is actually quite good (with a bit of practice) at setting us up so that we don’t have to deal with unconscious desires we don’t want. He likens this to hopping off the elephant, setting up a roadblock, then hopping back on, secure in the knowledge that the elephant will have no choice but to go the way we’ve picked out for it.
A practical example: you know how it can be very hard to resist eating a cookie once you have a packet of them in your room? Well, you can actually make it much easier to resist the cookies if you put them somewhere inconveniently far from where you spend most of your time. You can resist them even better if you don’t buy them in the first place. Very few people are willing to drive to the store just because they have a craving for some sugar.
If you have a sweet tooth, it might be hard to resist buying those cookies. But Heath points out that there’s a solution even for this. One of our most powerful resources is each other. If you have trouble not buying unhealthy snacks at the last second, you can go shopping with a friend. You pick out groceries for her from her list and she’ll do the same for you. Since you’re going to be paying with each other’s money and giving everything over to each other at the end, you have no reason to buy sweets. Do this and you don’t have to spend all week trying not to eat the cookie.
Heath believes the difference between people who are always productive and always distracted has far more to do with the environments they’ve built than anything innate. This feels at least half-true to me; I know I’m much less able to get things done when I don’t have my whole elaborate productivity system, or when it’s too easy for me to access the news or Facebook. In fact, I saw a dramatic improvement in my productivity – and a dramatic decrease in the amount of time I spent on Facebook – when I set up my computer to block it for a day after I spend fifteen minutes on it, uninstalled it from my phone, and made sure to keep it logged out on my phone’s browser.
(It’s trivially easy for me to circumvent any of these blocks; it takes about fifteen seconds. But that fifteen seconds is enough to make quickly opening up a tab and being distracted unappealing.)
This all loops back to talking about how the current built environment is hostile to reason – as well as a host of other things that we might like to be better at.
Take lack of sleep. Before reading Enlightenment 2.0, I hadn’t realized just how much of a modern problem this is. During Heath’s childhood, TVs turned off at midnight, everything closed by midnight, and there were no videogames or cell phones or computers. Post-midnight, you could… read? Heath points out that this tends to put people to sleep anyway. Spend time with people already at your house? How often did that happen? You certainly couldn’t call someone and invite them over, because calling people after midnight doesn’t discriminate between those awake and those asleep. Calling a land line after midnight is still reserved for emergencies. Texting people after midnight is much less intrusive and therefore much politer.
Without all the options modern life gives, there wasn’t a whole lot that could really keep you up all night. Heath admits to being much worse at sleeping now. Video games and online news conspire to keep him up later than he would like. Heath is a professor and the author of several books, which means he’s probably a very self-disciplined person. If he can’t even ignore news and video games and Twitter in favour of a good night’s sleep, what chance do most people have?
Society has changed in the forty some odd years of his life in a way that has led to more freedom, but an unfortunate side effect of freedom is that it often includes the freedom to mess up our lives in ways that, if we were choosing soberly, we wouldn’t choose. I don’t know anyone who starts an evening with “tonight, I’m going to stay up late enough to make me miserable tomorrow”. And yet technology and society conspire to make it all too easy to do this over the feeble objections of our better judgement.
It’s probably too late to put this genie back in its bottle (even if we wanted to). But Heath contends it isn’t too late to put reason back into politics.
Returning reason to politics, to Heath, means building up social and procedural frameworks like the sort that would help people avoid staying up all night or wasting the weekend on social media. It means setting up our politics so that contemplation and co-operation aren’t discouraged and so that it is very hard to appeal to people’s base nature.
Part of this is as simple as slowing down politics. When politicians don’t have time to read what they’re voting on, partisanship and fear drive what they vote for. When they instead have time to read and comprehend legislation (and even better, their constituents have time to understand it and tell their representatives what they think), it is harder to pass bad bills.
When negative political advertisements are banned or limited (perhaps with a total restriction on election spending), fewer people become disillusioned with politics and fewer people use cynicism as an excuse to give politicians carte blanche to govern badly. When Question Period in parliament isn’t filmed, there’s less incentive to volley zingers and talking points back and forth.
One question Heath doesn’t really engage with: just how far is it okay to go to ensure reason has a place in politics? Enlightenment 2.0 never goes out and says “we need a political system that makes it harder for idiots to vote”, but there’s a definite undercurrent of that in the latter parts. I’m also reminded of Andrew Potter’s opposition to referendums and open party primaries. Both of these political technologies give more people a voice in how the country is run, but do tend to lead to instability or worse decisions than more insular processes (like representative parliaments and closed primaries).
Basically, it seems like if we’re aiming for more reasonable politics, then something might have to give on the democracy front. There are a lot of people who aren’t particularly interested in voting with anything more than their base instincts. Furthermore, given that a large chunk of the right has more-or-less explicitly abandoned “reason” in favour of “common sense”, aiming to increase the amount of “reason” in politics certainly isn’t politically neutral.
(I should also mention that many people on the left only care about empiricism and reason when it comes to global warming and are quite happy to pander to feelings on topics like vaccines or rent control. From my personal vantage point, it looks like left-wing political parties have fallen less under the sway of anti-rationalism, but your mileage may vary.)
Perhaps there’s a coalition of people in the centre, scared of the excesses of the extreme left and the extreme right, that might feel motivated to change our political system to make it more amenable to reason. But this still leaves a nasty taste in my mouth. It still feels like cynical power politics.
While there might not be answers in Enlightenment 2.0 (or elsewhere), I am heartened that this is a question that Heath is at least still trying to engage with.
Enlightenment 2.0 is going to be one of those books that, on a fundamental level, changes how I look at politics and society. I had an inkling that shaping my environment was important and I knew that different political systems lead to different strategies and outcomes. But the effect of Enlightenment 2.0 was to make me so much more aware of this. Whenever I see Google rolling out a new product, I now think about how it’s designed to take advantage of us (or not!). Whenever someone suggests a political reform, I first think about the type of discourse and politics it will promote and which groups and ideologies it will benefit.
In some parts of the Brazilian Amazon, indigenous groups still practice infanticide. Children are killed for being disabled, for being twins, or for being born to single mothers. This is undoubtedly a piece of cultural technology that existed to optimize resource distribution under harsh conditions.
Infanticide can be legally practiced because these tribes aren’t bound by Brazilian law. Under Brazilian legislation, indigenous tribes are bound by the country’s laws in proportion to how much they interact with the state, and remote Amazonian groups have a waiver from all of them.
Reformers, led mostly by disabled indigenous people who’ve escaped infanticide and evangelicals, are trying to change this. They are pushing for a law that will outlaw infanticide, register pregnancies and birth outcomes, and punish people who don’t report infanticide.
Now I know that I have in the past written about using the outside view in cases like these. Historically, outsiders deciding they know what is best for indigenous people has not ended particularly well. In general, this argues for avoiding meddling in cases like this. Despite that, if I lived in Brazil, I would support this law.
When thinking about public policies, it’s important to think about the precedents they set. Opposing a policy like this, even when you have very good reasons, sends a message to the vast majority of the population, a population that views infanticide as wrong (and not just wrong, but a special evil). It says: “we don’t care about what is right or wrong, we’re moral relativists who think anything goes if it’s someone’s culture.”
There are several things to unpack here. First, there are the direct effects on the credibility of the people defending infanticide. When you’re advocating for something that most people view as clearly wrong, something so beyond the pale that you have no realistic chance of ever convincing anyone, you’re going to see some resistance to the next issue you take up, even if it isn’t beyond the pale. If the same academics defending infanticide turn around and try and convince people to accept human rights for trans people, they’ll find themselves with limited credibility.
Critically, this doesn’t happen with a cause where it’s actually possible to convince people that you are standing up for what is right. Gay rights campaigners haven’t been cut out of the general cultural conversation. On the contrary, they’ve been able to parlay some of their success and credibility from being ahead of the curve to help in related issues, like trans rights.
There’s no (non-apocalyptic) future where the people of Brazil eventually wake up okay with infanticide and laud the campaigners who stood up for it. But the people of Brazil are likely to wake up in the near future and decide they can’t ever trust the morals of academics who advocated for infanticide.
Second, it’s worth thinking about how people’s experience of justice colours their view of the government. When the government permits what is (to many) a great evil, people lose faith in the government’s ability to be just. This inhibits the government’s traditional role as solver of collective action problems.
We can actually see this manifest several ways in current North American politics, on both the right and the left.
On the left, there are many people who are justifiably mistrustful of the government, because of its historical or ongoing discrimination against them or people who look like them. This is why the government can credibly lock up white granola-crowd parents for failing to treat their children with medically approved treatments, but can’t when the parents are indigenous. It’s also why many people of colour don’t feel comfortable going to the police when they see or experience violence.
In both cases, historical injustices hamstring the government’s ability to achieve outcomes that it might otherwise be able to achieve if it had more credibly delivered justice in the past.
On the right, I suspect that some amount of skepticism of government comes from legalized abortion. The right is notoriously mistrustful of the government and I wonder if this is because it cannot believe that a government that permits abortion can do anything good. Here this hurts the government’s ability to pursue the sort of redistributive policies that would help the worst off.
In the case of abortion, the very real and pressing need for some women to access it is enough for me to view it as net positive, despite its negative effect on some people’s ability to trust the government to solve coordination problems.
Discrimination causes harms on its own and isn’t even justified on its own “merits”. Its effect on people’s perceptions of justice is just another reason it should be fought against.
In the case of Brazil, we’re faced with an act that is negative (infanticide) with several plausible alternatives (e.g. adoption) that allow the cultural purpose to be served without undermining justice. While the historical record of these types of interventions in indigenous cultures should give us pause, this is counterbalanced by the real harms justice faces as long as infanticide is allowed to continue. Given this, I think the correct and utilitarian thing to do is to support the reformers’ effort to outlaw infanticide.
No, this isn’t a post about very pretty houses or positional goods. It’s about the type of beauty contest described by John Maynard Keynes.
Imagine a newspaper that publishes one hundred pictures of strapping young men. It asks everyone to send in the names of the five that they think are most attractive. They offer a prize: if your selection matches the five men most often appearing in everyone else’s selections, you’ll win $500.
You could just do what the newspaper asked and send in the names of those men that you think are especially good looking. But that’s not very likely to give you the win. Everyone’s tastes are different and the people you find attractive might not be very attractive to anyone else. If you’re playing the game a bit smarter, you’ll instead pick the five people that you think have the broadest appeal.
You could go even deeper and realize that many other people will be trying to win and so will also be trying to pick the most broadly appealing people. Therefore, you should pick people that you think most people will view as broadly appealing (which differs from picking broadly appealing people if you know something about what most people find attractive that isn’t widely known). This can go on indefinitely (although Yudkowsky’s Law of Ultrafinite Recursion states that “In practice, infinite recursions are at most three levels deep”, which gives me a convenient excuse to stop before this devolves into “I know you know I know that you know that…” ad infinitum).
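This layered guessing is what game theorists call level-k reasoning, and it’s easiest to see in the closely related “guess 2/3 of the average” game. A minimal sketch (the 2/3 multiplier and the level-0 average of 50 are the standard illustrative assumptions from that game, not anything in Keynes’s newspaper example):

```python
# Level-k reasoning in the "guess 2/3 of the average" game:
# a level-0 player guesses at random (averaging 50 on a 0-100 scale),
# and each deeper level best-responds to the level below it.

def level_k_guess(k, level0_average=50.0):
    """Return the guess of a player reasoning k levels deep."""
    guess = level0_average
    for _ in range(k):
        guess *= 2 / 3  # best response to the level below
    return guess

for k in range(4):
    print(k, round(level_k_guess(k), 1))  # 50.0, 33.3, 22.2, 14.8
```

Experiments on this game find that most people stop after one or two levels of reasoning, which fits nicely with the Law of Ultrafinite Recursion.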
This thought experiment was relevant to an economist because many assets work like this. Take gold: its value cannot be fully explained by its prettiness or industrial usefulness; some of its value comes from the belief that someone else will want it in the future and be willing to pay more for it than they would a similarly useful or pretty metal. For whatever reason, we have a collective delusion that gold is especially valuable. Because this delusion is collective enough, it almost stops being a delusion. The delusion gives gold some of its value.
When it comes to houses, beauty contests are especially relevant in Toronto and Vancouver. Faced with many years of steadily rising house prices, people are willing to pay a lot for a house because they believe that they can unload it on someone else in a few years or decades for even more.
When talking about highly speculative assets (like Bitcoin), it’s easy to point out the limited intrinsic value they hold. Bitcoin is an almost pure Keynesian Beauty Contest asset, with most of its price coming from an expectation that someone else will want it at a comparable or better price in the future. Houses are obviously fairly intrinsically valuable, especially in very desirable cities. But the fact that they hold some intrinsic value cannot by itself prove that none of their value comes from beliefs about how much they can be unloaded for in the future – see again gold, which has value both as an article of commerce and as a beauty contest asset.
There’s obviously an element of self-fulfilling prophecy here, with steadily increasing house prices needed to sustain this myth. Unfortunately, the housing market seems especially vulnerable to this sort of collective mania, because the sunk cost fallacy makes many people unwilling to sell their houses at a price below what they paid for them. Any softening of the market removes sellers, which immediately drives up prices again. Only a massive liquidation event, like we saw in 2007–2009, can push enough supply into the market to make prices truly fall.
But this isn’t just a self-fulfilling prophecy. There’s deliberateness here as well. To some extent, public policy is used to guarantee that house prices continue to rise. NIMBY residents and their allies in city councils deliberately stall projects that might affect property values. Governments provide tax credits or access to tax-advantaged savings accounts for homes. In America, mortgage interest is even tax-deductible!
All of these programs ultimately make housing more expensive wherever supply cannot expand to meet the artificially increased demand – which basically describes any dense urban centre. Therefore, these home buying programs fail to accomplish their goal of making houses more affordable, but do serve to guarantee that housing prices will continue to go up. Ultimately, they really just represent a transfer of wealth from taxpayers generally to those specific people who own homes.
Unfortunately, programs like this are very sticky. Once people buy into the collective delusion that home prices must always go up, they’re willing to heavily leverage themselves to buy a home. Any dip in the price of homes can wipe out the value of this asset, making it worth less than the money owed on it. Since this tends to make voters very angry (and also leaves many people with no money), governments of all stripes are very motivated to avoid it.
This might imply that the smart thing is to buy into the collective notion that home prices always go up. There are so many people invested in this belief at all levels of society (banks, governments, and citizens) that it can feel like home prices are too important to fall.
Which would be entirely convincing, except, I’m pretty sure people believed that in 2007 and we all know how that ended. Unfortunately, it looks like there’s no safe answer here. Maybe the collective mania will abate and home prices will stop being buoyed ever upwards. Or maybe they won’t and the prices we currently see in Toronto and Vancouver will be reckoned cheap in twenty years.
Better zoning laws can help make houses cheaper. But it really isn’t just zoning. The beauty contest is an important aspect of the current unaffordability.
I don’t understand why people choose to go bankrupt living in the most expensive cities, but I’m increasingly viewing this as a market failure and collective action problem to be fixed with intervention, not a failure of individual judgement.
There are many cities, like Brantford, Waterloo, or even Ottawa, where everything works properly. Rent isn’t really more expensive than in suburban or rural areas. There’s public transit, which means you don’t necessarily need a car, if you choose where you live with enough care. There are plenty of jobs. Stuff happens.
But cities like Toronto, Vancouver, and San Francisco confuse the hell out of me. The cost of living is through the roof, but wages don’t even come close to following (the difference in salary between Toronto and Waterloo for someone with my qualifications is $5,000, which in no way would cover the yearly difference in living expenses). This is odd when talking about well-off tech workers, but becomes heartbreaking when talking about low-wage workers.
If people were perfectly rational and only cared about money (the mythical homo economicus), fewer people would move to cities, which would bid up wages (to increase the supply of workers) or drive down prices (because fewer people would be competing for the same apartments), which would make cities more affordable. But people do care about things other than money and the network effects of cities are hard to beat (put simply: the bigger the city, the more options for a not-boring life you have). So, people move – in droves – to the most expensive and dynamic cities and wages don’t go up (because the supply of workers never falls) and the cost of living does (because the number of people competing for housing does) and low wage workers get ground up.
It’s not that I don’t understand the network effects. It’s that I don’t understand why people get ground up instead of moving.
But the purpose of good economics is to deal with people as they are, not as they can be most conveniently modeled. And given this, I’ve begun to think about high minimum wages in cities as an intervention that fixes a market failure and collective action problem.
That is to say: people are bad at reading the market signal that they shouldn’t move to cities that they can’t afford. It’s the signal that’s supposed to say here be scarce goods, you might get screwed, but the siren song of cities seems to overpower it. This is a market failure in the technical sense because there exists a distribution of goods that could make people (economically) better off (fewer people living in big cities) without making anyone worse off (e.g. they could move to communities that are experiencing chronic shortages of labour and be basically guaranteed jobs that would pay the bills) that the market cannot seem to fix.
It’s a collective action problem because if everyone could credibly threaten to move, then they wouldn’t have to; the threat would be enough to increase wages. Unfortunately, everyone knows that anyone who leaves the city will be quickly replaced. Everyone would be better off if they could coordinate and make all potential movers promise not to move in until wages increase, but there’s no benefit to being the first person to leave or the first person to avoid moving and there currently seems to be no good way for everyone to coordinate in making a threat.
When faced with the steady grinding down of young people, low wage workers, and everyone “just waiting for their big break”, we have two choices. We can tut-tut at their inability to be “rational” (aka leave their friends, family, jobs, and aspirations to move somewhere else), or we can try to better their situation.
If everyone was acting “rationally”, wages would be bid up. But we can accomplish the same thing by simple fiat. Governments can set a minimum wage or offer wage subsidies, after all.
I do genuinely worry that in some places, large increases in the minimum wage will lead to unemployment (we’ll figure out whether this is true over the next decade or so). I’m certainly worried that a minimum wage pegged to inflation will lead to massive problems the next time we have a recession.
So, I think we should fix zoning, certainly. And I think we need to fix how Ontario’s minimum wage functions in a recession so that it doesn’t destroy our whole economy during the next one. But at the same time, I think we need to explore differential minimum wages for our largest cities and the rest of the province/country. I mean this even in a world where the current $14/hour minimum wage isn’t rolled back. Would even $15/hour cut it in Toronto and Vancouver?
If we can’t make a minimum wage work without increased unemployment, then maybe we’ll have to turn to wage subsidies. This is actually the method that “conservative” economist Scott Sumner favours.
What’s clear to me is that what we’re currently doing isn’t working.
I do believe in a right to shelter. Like anyone who shares this belief, I understand that “shelter” is a broad word, encompassing everything from a tarp to a mansion. Where a certain housing situation falls on this spectrum is the source of many a debate. Writing this is a repudiation of my earlier view, that living in an especially desirable city was a luxury not dissimilar from a mansion.
A couple of things changed my mind. First, I paid more attention to the experiences of my friends who might be priced out of the cities they grew up in and have grown to love. Second, I read the Ecomodernist Manifesto, with its calls for densification as the solution to environmental degradation and climate change. Densification cannot happen if many people are priced out of cities, which means figuring this out is actually existentially important.
The final piece of the puzzle was the mental shift whereby I started to view wages in cities – especially for low-wage earners – as a collective action problem and a market failure. As anyone on the centre-left can tell you, it’s the government’s job to fix those – ideally in a redistributive way.
This only works once you have a critical mass; there’s no benefit until you’re person n + 1, where n is the number of people necessary to create a scarcity of workers sufficient to begin bidding up wages. And all of the people who moved will see little benefit for their hassle, unless they’re willing to move back. ^
 For us nomadic North Americans, this can be confusing: “The gospel of ‘just pick up and leave’ is extremely foreign to your typical European — be they Serbian, French or Irish. Ditto with a Sudanese, Afghan or Japanese national. In Israel, it’s the kind of suggestion that ruins dinner parties… We non-indigenous love to move. We don’t just see it as just good economic policy, but as a virtue. We glorify the immigrant, we hug them at the airport when they arrive and we inherently mistrust anyone who dares to pine for what they left behind”. ^
I think we may have to subsidize some new construction or portion of monthly rent so that all increased wages don’t get ploughed into increased rents. If you have more money chasing the same number of rental units and everything else remains constant, you’ll see all gains in wages erased by increases in rents. Rent control is a very imperfect solution, because it changes new construction into units that can be bought outright, at market rates. This helps people who have saved up a lot of money outside of the city and want to move there, but is very bad for the people living there, grappling with rent so high that they can’t afford to save up a down payment. ^
 No seriously, this is what passes for conservative among economists these days; while we all stopped looking, they all became utilitarians who want to help impoverished people as much as possible. ^
In simple economic theory, wages are supposed to act as signals. When wages increase in a sector, it should signal people that there’s lots of work to do there, incentivizing training that will be useful for that field, or causing people to change careers. On the flip side, when wages decrease, we should see a movement out of that sector.
This is all well and good. It explains why the United States has seen (over the past 45 years) little movement in the number of linguistics degrees, a precipitous falloff in library sciences degrees, some decrease in English degrees, and a large increase in engineering and business degrees.
This might be the engineer in me, but I find things that are working properly boring. What I’m really interested in is when wage signals break down and are replaced by a job lottery.
Job lotteries exist whenever there are two tiers to a career. On one hand, you’ll have people making poverty wages and enduring horrendous conditions. On the other, you’ll see people with cushy wages, good job security, and (comparatively) reasonable hours. Job lotteries exist in the “junior doctor” system of the United Kingdom, in the academic system of most western countries, and in teaching in Ontario (up until very recently). There’s probably a much less extreme version of this going on even in STEM jobs (in that many people go in thinking they’ll work for Google or the next big unicorn and end up building websites for the local chamber of commerce or writing internal tools for the company billing department). A slightly different type of job lottery exists in industries where fame plays a big role: writing, acting, music, video games, and other creative endeavours.
Job lotteries are bad for two reasons. Compassionately, it’s really hard to see idealistic, bright, talented people endure terrible conditions all in the hope of something better, something that might never materialize. Economically, it’s bad when people spend a lot of time unemployed or underemployed because they’re hopeful they might someday get their dream job. Both of these reasons argue for us to do everything we can to dismantle job lotteries.
I do want to make a distinction between the first type of job lottery (doctors in the UK, professors, teachers), which is a property of how institutions have happened to evolve, and the second, which seems much more inherent to human nature. “I’ll just go with what I enjoy” is a very common media strategy that will tend to split artists (of all sorts) into a handful of mega-stars, a small group of people making a modest living, and a vast mass of hopefuls searching for their break. To fix this would require careful consideration and the building of many new institutions – projects I think we lack the political will and the know-how for.
The problems in the job market for professors, doctors, or teachers feel different. These professions don’t rely on tastemakers and network effects. There’s also no stark difference in skills that would imply discontinuous compensation. This doesn’t imply that skills are flat – just that they exist on a steady spectrum, which should imply that pay could reasonably follow a similar smooth distribution. In short, in all of these fields, we see problems that could be solved by tweaks to existing institutions.
I think institutional change is probably necessary because these job lotteries present a perfect storm of misdirection to our primate brains. That is to say (1) People are really bad at probability and (2) the price level for the highest earners suggests that lots of people should be entering the industry. Combined, this means that people will be fixated on the highest earners, without really understanding how unlikely that is to be them.
Two heuristics drive our inability to reason about probabilities: the representativeness heuristic (ignoring base rates and information about reliability in favour of what feels “representative”) and the availability heuristic (events that are easier to imagine or recall feel more likely). The combination of these heuristics means that people are uniquely sensitive to accounts of the luckiest members of a profession (especially if this is the social image the profession projects) and unable to correctly predict their own chances of reaching that desired outcome (because they can imagine how they will successfully persevere and make everything come out well).
Right now, you’re probably laughing to yourself, convinced that you would never make a mistake like this. Well, let’s try an example.
Imagine a scenario in which only ten percent of current Ph.D. students will get tenure (basically true). Now Ph.D. students are quite bright and are incredibly aware of their long odds. Let’s say that if a student three years into a program makes a guess as to whether or not they’ll get a tenure track job offer, they’re correct 80% of the time. If a student tells you they think they’ll get a tenure track job offer, how likely do you think it is that they will? Stop reading right now and make a guess.
Seriously, make a guess.
This won’t work if you don’t try.
Okay, you can keep reading.
It is not 80%. It’s not even 50%. It’s 31%. This is probably best illustrated by counting up the possibilities.
There are four things that can happen here (I’m going to conflate tenure track job offers with tenure out of a desire to stop typing “tenure track job offers”).
A student can correctly believe they will get tenure
A student can incorrectly believe they won’t get tenure
A student can incorrectly believe they will get tenure
A student can correctly believe they won’t get tenure
Ten students will get tenure. Of these ten, eight (0.8 × 10) will correctly believe they will get it and two (10 − 0.8 × 10) will incorrectly believe they won’t. Ninety students won’t get tenure. Of these 90, 18 (90 − 0.8 × 90) will incorrectly believe they will get tenure and 72 (0.8 × 90) will correctly believe they won’t. Twenty-six students – the eight plus the eighteen – believe they’ll get tenure. But we know that only eight of them really will, which works out to just below the 31% I gave above.
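The same bookkeeping in a few lines of Python, just restating the numbers above:

```python
# 100 students, a 10% base rate of tenure(-track offers), and
# self-predictions that are correct 80% of the time.
students = 100
base_rate = 0.10   # P(tenure)
accuracy = 0.80    # P(a student's prediction is correct)

will = students * base_rate        # 10 students get tenure
wont = students - will             # 90 students don't

true_yes = will * accuracy         # 8: correctly predict they'll get it
false_yes = wont * (1 - accuracy)  # 18: incorrectly predict they'll get it

predicted_yes = true_yes + false_yes  # 26 students say "I'll get it"
print(true_yes / predicted_yes)       # ~0.308, just below 31%
```

This is Bayes’ rule in disguise: P(tenure | predicts tenure) = (0.8 × 0.1) / (0.8 × 0.1 + 0.2 × 0.9) ≈ 0.31.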
Almost no one can do this kind of reasoning, especially if they aren’t primed for a trick. The stories we build in our head about the future feel so solid that we ignore the base rate. We think that we’ll know if we’re going to make it. And even worse, we think that a feeling of “knowing” if we’ll make it provides good information. We think that relatively accurate predictors provide useful information against a small chance. They clearly don’t. When the base rate is small (here 10%), the base rate is the single greatest predictor of your chances.
But this situation doesn’t even require small chances for us to make mistakes. Imagine you had two choices: a career that leaves you feeling fulfilled 100% of the time, but is so competitive that you only have an 80% chance of getting into it (assume in the other 20%, you either starve or work a soul-crushing fast food job with negative fulfillment) or a career where you are 100% likely to get a job, but will only find it fulfilling 80% of the time.
Unless that last 20% of fulfillment is strongly super-linear, or you place no value at all on eating/avoiding McDrugery, it is better to take the guaranteed career. But many people looking at this probably rounded 80% up to 100% – another known flaw in human reasoning. You can very easily have a job lottery even when the majority of people in a career are in the “better” tier of the job, because many entrants to the field will view “majority” as “all” and stick with it when they end up shafted.
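The comparison is a straight expected-value calculation. A quick sketch with illustrative numbers – the −20 fulfillment for the starve/fast-food outcome is my own assumption; the text only says it’s negative:

```python
# Expected fulfillment of the two careers, assuming fulfillment
# adds up linearly. The -20 for the failure outcome is illustrative.

def expected_fulfillment(p_success, value_success, value_failure):
    """Probability-weighted average of the two outcomes."""
    return p_success * value_success + (1 - p_success) * value_failure

competitive = expected_fulfillment(0.8, 100, -20)  # 80% shot at the dream job
guaranteed = expected_fulfillment(1.0, 80, 0)      # sure thing, 80% as fulfilling

print(competitive, guaranteed)  # about 76 vs 80: the sure thing wins
```

With linear fulfillment, the 20% risk of a negative outcome drags the competitive career below the sure thing, exactly as the paragraph argues.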
Now, you might believe that these problems aren’t very serious, or that surely people making a decision as big as a college major or career would correct for them. But these fallacies date to the 70s! Many people still haven’t heard of them. And the studies that first identified them found them to be pretty much universal. Look, the CIA couldn’t even get people to do probability right. You think the average job seeker can? You think you can? Make a bunch of predictions for the next year and then talk with me when you know how calibrated (or uncalibrated) you are.
If we could believe that people would become better at probabilities, we could assume that job lotteries would take care of themselves. But I think it is clear that we cannot rely on that, so we must try and dismantle them directly. Unfortunately, there's a reason many jobs are this way: current workers have stacked the deck in their own favour. This is really great for them, but really bad for the next group of people entering the workforce. I can't help but believe that some of the instability faced by millennials is a consequence of past generations entrenching their benefits at our expense . Other lotteries have come about because of poorly planned policies, bad enrolment caps, etc.
These represent the two ways we can deal with a job lottery: we can limit the supply indirectly (by making the job, or the perception of the job once you've "made it", worse), or limit the supply directly (by changing the credentials necessary for the job, or implementing other training caps). In many of the examples of job lotteries I've found, limiting the supply directly might be a very effective way to deal with the problem.
Why? Because having people who’ve completed four years of university do an extra year or two of schooling only to wait around and hope for a job is a real drag. They could be doing something productive with that time! The advantage of increasing gatekeeping around a job lottery and increasing it as early as possible is that you force people to go find something productive to do. It is much better for an economy to have hopeful proto-teachers who would in fact be professional resume submitters go into insurance, or real estate, or tutoring, or anything at all productive and commensurate with their education and skills.
There's a cost here, of course. When you're gatekeeping (for teachers' college or medical school, say), you're going to be working with lossy proxies for the thing you actually care about, which is performance in the eventual job. The lossier the proxy, the more you are needlessly depressing the quality of the people who are allowed to do the job – a serious concern when you're dealing with heart surgery, or with the people providing foundational education to your next generation.
You can also find cases where increasing selectiveness at an early stage doesn't successfully force failed applicants to stop wasting their time and get on with their lives. I was very briefly enrolled in a Ph.D. program in biomedical engineering a few years back. Several professors I interviewed with while considering graduate school wanted to make sure I had no aspirations toward medical school – because they were tired of their graduate students abandoning research as soon as their Ph.D. was complete. For students who didn't make it into medical school after undergrad, a Ph.D. was a ticket to another shot at getting in . Anecdotally, I've seen people who fail to get into medical school or optometry get a master's degree, then try again.
Banning extra education before medical school cuts against the idea that people should be able to better themselves, or persevere to get to their dreams. It would be institutionally difficult. But I think that it would, in this case, probably be a net good.
There are other fields where limiting supply is rather harmful. Graduate students are very necessary for science. If we punitively limited their number, we might find a lot of valuable scientific progress grinding to a standstill. We could try and replace graduate students with a class of professional scientific assistants, but as long as the lottery for professorship is so appealing (for those who are successful), I bet we'd see a strong preference for Ph.D. programs over professional assistantships.
These costs sometimes make it worth it to go right to the source of the job lottery: the salaries and benefits of people already employed . Of course, this has its own downsides. In the case of doctors, high salaries and benefits are useful for convincing really clever applicants to choose medicine over engineering and law. For other jobs, there are the problems of practicality and fairness.
First, it is very hard to get people to agree to wage or benefit cuts and it almost always results in lower morale – even if you have “sound macro-economic reasons” for it. In addition, many jobs with lotteries have them because of union action, not government action. There is no czar here to change everything. Second, people who got into those careers made those decisions based on the information they had at the time. It feels weird to say “we want people to behave more rationally in the job market, so by fiat we will change the salaries and benefits of people already there.” The economy sometimes accomplishes that on its own, but I do think that one of the roles of political economics is to decrease the capriciousness of the world, not increase it.
We can of course change the salaries and benefits only for new employees. But this somewhat confuses the signalling (for a long time, the principal examples of the profession will still come from the earlier cohort). It also rarely alleviates a job lottery, because in practice new employees face reduced salaries and benefits only for a time; once they get seniority, they'll expect to enjoy all the perks of seniority.
Adjunct professorships feel like a failed attempt to remove the job lottery for full professorships. Unfortunately, they've only worsened it, by giving people a toe-hold that makes them feel like they might someday claw their way up to full professorship. I feel that when it comes to professors, the only tenable thing to do is greatly reduce salaries (making them closer to the salary progression of mechanical engineers, rather than doctors), hire far more professors, cap graduate students wherever there is high under- and unemployment, and have more professional assistants who do short 2-year college courses. Of course, this is easy to say and much harder to do.
If these problems feel intractable and all the solutions feel like they have significant downsides, welcome to the pernicious world of job lotteries. When I thought of writing about them, coming up with solutions felt like by far the hardest part. There’s a complicated trade-off between proportionality, fairness, and freedom here.
Old-fashioned economic theory held that the freer people were, the better off they would be. I think modern economists increasingly believe this is false. Is a world in which people are free to get very expensive training – despite facing very long odds of a job and cognitive biases that obscure just how punishing those odds are; expensive training, in short, that they'd in expectation be better off without – a better one than a world where they can't?
I increasingly believe that it isn't. And I increasingly believe that having rough encounters with reality early on and having smooth salary gradients is important to prevent this world. Of course, this is easy for me to say. I've been very deliberate about taking my skin out of job lotteries. I dropped out of graduate school. I write often and would like to someday make money off of writing, but I viscerally understand the odds of that happening, so I've been very careful to have a day job that I'm happy with .
If you’re someone who has made the opposite trade, I’m very interested in hearing from you. What experiences do you have that I’m missing that allowed you to make that leap of faith?
 I should mention that there’s a difference between economic value, normative/moral value, and social value and I am only talking about economic value here. I wouldn’t be writing a blog post if I didn’t think writing was important. I wouldn’t be learning French if I didn’t think learning other languages is a worthwhile endeavour. And I love libraries.
And yes, I know there are many career opportunities for people holding those degrees, and no, I don't think they're useless. I simply think long-term shifts in labour market trends have made them relatively less attractive to people who view a degree as a path to prosperity. ^
 That’s not to knock these jobs. I found my time building internal tools for an insurance company to be actually quite enjoyable. But it isn’t the fame and fortune that some bright-eyed kids go into computer science seeking. ^
 That is to say, that you enjoy each additional percentage of fulfillment at a multiple (greater than one) of the previous one. ^
 This almost certainly isn’t true, given that the marginal happiness curve for basically everything is logarithmic (it’s certainly true for money and I would be very surprised if it wasn’t true for everything else); people may enjoy a 20% fulfilling career twice as much as a 10% fulfilling career, but they’ll probably enjoy a 90% fulfilling career very slightly more than an 80% fulfilling career. ^
 I really hope that this doesn't catch on. If an increasing number of applicants to medical school already have graduate degrees, it will be increasingly hard for those with "merely" an undergraduate degree to get into medical school. Suddenly we'll be requiring students to do 11 years of potentially useless training, just so that they can start the multi-year training to be a doctor. This sort of arms race is the epitome of wasted time.
In many European countries, you can enter medical school right out of high school, and this seems like the obviously correct thing to do vis-à-vis minimizing wasted time. ^
The taxi medallion system that Uber has largely supplanted prevented this. It moved the job lottery one step further back, with getting the medallion becoming the primary hurdle, forcing those who couldn’t get one to go work elsewhere, but allowing taxi drivers to largely avoid dead times.
Uber could restrict supply, but it doesn’t want to and its customers certainly don’t want it to. Uber’s chronic driver oversupply (relative to a counterfactual where drivers waited around very little) is what allows it to react quickly during peak hours and ensure there’s always an Uber relatively close to where anyone would want to be picked up. ^
 I do think that I would currently be a much better writer if I’d instead tried to transition immediately to writing, rather than finding a career and writing on the side. Having a substantial safety net removes almost all of the urgency that I’d imagine I’d have if I was trying to live on (my non-existent) writing income.
There’s a flip side here too. I’ve spent all of zero minutes trying to monetize this blog or worrying about SEO, because I’m not interested in that and I have no need to. I also spend zero time fretting over popularizing anything I write (again, I don’t enjoy this). Having a security net makes this something I do largely for myself, which makes it entirely fun. ^
When you worry about rising inequality, what are you thinking about?
I now know of two competing models for inequality, each of which has vastly different implications for political economy.
In the first, called consumptive inequality, inequality is embodied in differential consumption. Under this model, there is a huge gap between Oracle CEO Larry Ellison (net worth: $60 billion), with his private islands, his yacht, etc. and myself, with my cheap rented apartment, ten-year-old bike, and modest savings. In fact, under this model, there’s even a huge gap between Larry Ellison with all of his luxury goods and Berkshire Hathaway CEO Warren Buffett (net worth: $90.6 billion), with his relatively cheap house and restrained tastes.
Pictured: Warren Buffett’s house vs. Larry Ellison’s yacht. The yacht is many, many times larger than the house. Image credits: TEDizen and reivax.
Under the second model, inequality in net worth or salary is all that matters. This is the classic model that gives us the Gini coefficient and "the 1%". Under this model, Warren Buffett is the very best off, with Larry Ellison close behind. I'm not even in contention.
That is to say, the prevailing narrative around inequality is that it is bad because:
Rich people are able to consume in a way that is frankly bananas and often destructive either to the environment or norms of good governance
Workers cannot afford all basic necessities, or must choose between basic necessities and thinking long term (e.g. by saving for their children’s education or their own retirement)
Despite this focus on consumptive inequality in public rhetoric, our tax system seems to be focused primarily on wealth inequality.
Now, it is true that wealth inequality can often lead to consumptive inequality. Larry Ellison is able to consume to such an obscene degree only because he is so obscenely wealthy. But it is also true that wealth inequality doesn’t necessarily lead to consumptive inequality (there are upper middle-class people who have larger houses than Warren Buffett) and that it might be useful to structure our tax policy and other instruments of political economy such that there was a serious incentive for wealth inequality not to lead to consumptive inequality.
What I mean is: it’s unlikely that we’re going to reach a widely held consensus that wealth is immoral (or at what level it becomes immoral). But I think we already have a widely held consensus that given the existence of wealth, it is better to wield it like Mr. Buffett than like Mr. Ellison.
To a certain extent, we already acknowledge this. In Canada, there are substantial tax advantages to investing up to 18% of your yearly earnings (below a certain cap) and giving up to 75% of your income to charity. That said, we continue to bafflingly tax many productive uses of wealth (like investing), while refusing to adequately tax many frivolous or actively destructive uses of wealth (large cars, private jets, private yachts, influencing the political process, etc.).
Many people, myself included, find the idea of large amounts of wealth fundamentally immoral. Still, I’d rather tax the conspicuous and pointless use of wealth than wealth itself, because there are many people motivated to do great things (like curate all of the world’s information and put it at our fingertips) because of desire for wealth.
I’m enough of a post-modernist to worry that any attempt to create a metric of “social value” will further disenfranchise people who have already been subject to systemic discrimination and fail to reflect the tastes of anyone younger than 35 (I just can’t believe that a bunch of politicians would get together and agree that anyone creates social value or deserves compensation for e.g. cosplay, even though I know many people who find it immensely valuable and empowering).
That’s the motivation. Now for the practice. What would a tax plan optimized to punish spurious consumption while maintaining economic growth even look like? Luckily Scott Sumner has provided an outline, the cleverness of which I’d like to explain.
No income tax
When you take money from people as taxes, then give it back to them regardless of how hard they work, you discourage work. It turns out that this effect is rather large: the higher income taxes are, the more you discourage people from working. People working is a necessary prerequisite for economic growth. I view economic growth as largely positive – it is very good at engendering happiness and stability, it guarantees those of us currently working the possibility of retiring one day, and it generates revenues for a social safety net – so we should try to tax in a way that doesn't discourage it.
No corporate tax
Another important component of economic growth is investment. We can imagine a hypothetical economy where absolutely everything that is produced is consumed, such that much is made, but nothing ever really changes. The products available this year will be the products available next year, at the same price and made in the same factory, with any worn-down equipment replaced, but no additional equipment purchased.
Obviously, this is a toy example. But if you've bought a product this year that didn't exist last year, or noticed the cost of something you regularly buy fall, you've reaped the rewards of investment. We need people to deliberately set aside some of the production they're entitled to via possession of money so that it can instead be used to improve the process of production.
Corporate taxes discourage this by making investment less attractive. In fact, they actively encourage consumptive inequality, by making consumption artificially cheaper than investment. This is the exact opposite of what we should be aiming for!
Now, I know that corporate taxes feel very satisfying. Corporations make a lot of money (although probably less than you think!) and it feels right and proper to divert some of that for public usage. But there are better ways of diverting that money (some of which I’ll talk about below) that manage to fill the public coffers without incentivizing behaviour even worse than profit seeking (like bloated executive pay; taxing corporate income makes paying the CEO a lot artificially cheap). Corporate taxes also hurt normal people in a variety of ways – like making saving for retirement harder.
No inheritance tax
This is another example of artificially making consumption more attractive. Look at it this way: you (a hypothetical you who is very wealthy) can buy a yacht now, use it for a while, loan it to your kids, then have them inherit it when it's depreciated significantly, reducing the tax they have to pay on it. Or you can invest so that you can give your children a lot of money. Most rich people aren't going to want to leave nothing behind for their children. Therefore, we shouldn't penalize people who are going to use the money for non-frivolous things in the interim.
A VAT (with rebates or exemptions)
A VAT, or value-added tax, is a tax on consumption; you pay it whenever you buy something from a store or online. A value-added tax differs from a simple sales tax in that it allows tax paid to suppliers to be deducted from taxes owed. This is necessary so that complex, multi-step products (like computers) don't artificially cost more than simpler products (like wood).
Scott Sumner suggests that a VAT can be easily made free for low-income folks by automatically refunding the VAT rate times the national poverty income to everyone each year. This is nice and simple and has low administrative overhead (another key concern for a taxation system; every dollar spent paying people to oversee the process of collecting taxes is a dollar that can’t be spent on social programs).
An alternative, currently favoured in Canada, is to avoid taxing essentials (like unprepared food). This means that people who spend a large portion of their money on food are taxed at a lower overall rate than people who spend more money on non-essential products.
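Sumner's rebate scheme is easy to sketch. The VAT rate and national poverty income below are placeholder assumptions, not real figures.

```python
# Placeholder assumptions, not real figures.
vat_rate = 0.15
poverty_income = 20_000

# Everyone automatically gets back the VAT a poverty-line income would pay.
rebate = vat_rate * poverty_income

def net_vat_paid(consumption):
    """VAT actually paid over a year, after the flat rebate."""
    return vat_rate * consumption - rebate

# Consuming at the poverty line means paying no net VAT;
# net VAT then rises smoothly with consumption above it.
print(net_vat_paid(20_000))
print(net_vat_paid(100_000))
```

The appeal is the low administrative overhead: one flat annual payment to everyone, with no need to track who bought what.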
A steeply progressive payroll tax
If income inequality is something you want to avoid, I'd argue that a progressive payroll tax is more effective than almost any other measure. It makes companies pay the government directly if they wish to have high-wage workers, and it makes it more politically palatable to raise taxes on upper brackets, even to the point of multiples of the paid salary.
While this may seem identical to taxing income, the psychological effect is rather different, which is important when dealing with real people, not perfectly rational economic automata. Payroll taxes also make tax avoidance via incorporation impossible (as all corporate payouts, including dividends after subtracting investment, would be subject to the payroll tax) and make it easy to really punish companies for out-of-control executive compensation. Under a payroll tax system, you can quite easily impose a 1000% tax on executive compensation over $1,000,000. It's pretty hard to justify a CEO salary of $10,000,000 when it's costing investors more than a hundred million dollars!
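To make the bracket arithmetic concrete, here's a sketch of such a payroll tax. The lower brackets are invented for illustration; only the 1000% rate on compensation over $1,000,000 comes from the text.

```python
# Brackets are illustrative assumptions, except the 1000% top rate.
BRACKETS = [             # (bracket floor, marginal rate paid by the employer)
    (0, 0.10),
    (100_000, 0.50),
    (1_000_000, 10.00),  # 1000%
]
CEILINGS = [floor for floor, _ in BRACKETS[1:]] + [float("inf")]

def employer_cost(salary):
    """Salary plus the payroll tax the employer owes on it."""
    tax = 0.0
    for (floor, rate), ceiling in zip(BRACKETS, CEILINGS):
        if salary > floor:
            tax += (min(salary, ceiling) - floor) * rate
    return salary + tax

# A $10,000,000 CEO salary: the $9M in the top bracket alone adds $90M of tax.
print(f"${employer_cost(10_000_000):,.0f}")
```

The $9M above the threshold generates $90M of tax on its own, which, together with the salary itself and the tax on the lower brackets, is where "more than a hundred million dollars" comes from.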
A steeply progressive property tax
Property taxes tend to be flat, which makes them less effective at discouraging conspicuous consumption (e.g. 4,500 square foot suburban McMansions). If property taxes ramped up sharply with house value or size, families that chose more appropriately sized homes (or could only afford appropriately sized homes) would be taxed at lower rates than their profligate neighbours. Given that developments with smaller houses are either higher density (which makes urban services cheaper and cars less necessary) or have more greenspace (which is good from an environmental perspective, especially in flood-prone areas), it's especially useful to convince people to live in smaller houses.
This would be best combined with laxer zoning. For example, minimum house sizes have long been a tool used in “nice” suburbs, to deliberately price out anyone who doesn’t have a high income. Zoning houses for single family use was also seized upon as a way to keep Asian immigrants out of white neighbourhoods (as a combination of culture and finances made them more likely to have more than just a single nuclear family in a dwelling). Lax zoning would allow for flexibility in housing size and punitive taxes on large houses would drive demand for more environmentally sustainable houses and higher density living.
A carbon tax
Carbon is what economists call a negative externality. It's a thing we produce that negatively affects other people, without any mechanism forcing us to pay the cost of this inflicted disutility. When we tax a negative externality, we stop over-consumption of things that produce that externality. In the specific case of taxing carbon, we can use the tax to very quickly bring emissions down to the level necessary to avoid catastrophic warming.
A tax on positional goods
This comes from a separate post by Scott Sumner, but I think it's a good enough idea to mention here. It should be possible to come up with a relatively small list of items that are mostly positional – that is to say, the vast majority of their cost is for the sake of being expensive (and therefore showing how wealthy and important the possessor is), not for providing increased quality. To illustrate: there is a significant gap in functionality between a $3,000 beater car and a $30,000 new car, less of a gap between a $30,000 car and a $300,000 car, and even less of a gap between the $300,000 car and a $3,000,000 car; the $300,000 car is largely positional, the $3,000,000 car almost wholly so. To these we could add items that are almost purely for luxury, like 100+ foot yachts.
It's necessary to keep this list small and focus on truly grotesque expenditures, lest we turn into a society of petty moralizers. There's certainly a perspective (normally held by people rather older than the participants) in which spending money on cosplay or anime merchandise is frivolous, but if it is, it's the sort of harmless frivolity equivalent to spending an extra dollar on coffee. I am in general in favour of letting people spend money on things I consider frivolous, because I know many of the things I spend money on (and enjoy) are in turn viewed as frivolous by others . However, I think there comes a point when it's hard to accuse anyone of petty moralizing, and I think that point is probably around enough money to prevent dozens of deaths from malaria (i.e. $100,000+) .
Besides, there’s the fact that making positional goods more expensive via taxation just makes them more exclusive. If anything, a strong levy on luxury goods may make them more desirable to some.
It is true that I care about the economy in a way that I never cared about it before. I care that we have sustainable growth that enriches us all. I care about the stock market making gains, because I’ve realized just how much of the stock market is people’s pensions. I care about start-ups forming to meet brand new needs, even when the previous generation views them as frivolous. I care about human flourishing and I now believe that requires us to have a functioning economic system.
A lot of how we do tax policy is bad. It’s based on making us feel good, not on encouraging good behaviour and avoiding weird economic distortions. It encourages the worst excesses of wealth and it’s too easy to avoid.
What I’ve outlined here is a series of small taxes, small enough to make each not worth the effort to avoid, that together can easily collect enough revenue to ensure a redistributive state. They have the advantage of cutting particularly hard against conspicuous consumption and protecting the planet from unchecked global warming. I sincerely believe that if more people gave them honest consideration, they would advocate for them too and together we could build a fairer, more effective taxation system.
 A minimum wage can make it impossible to have Pareto optimal distributions – distributions where you cannot make anyone better off without making someone else worse off. Here’s a trivial example: imagine a company with two overworked employees, each of whom make $15/hour. The employees are working more than they particularly want to, because there’s too much work for the two of them to complete. Unfortunately, the company can only afford to pay an additional $7/hour and the minimum wage is $14/hour. If the company could hire someone without much work experience for $7/hour everyone would be better off.
The existing employees would be less overworked and happier. The new employee would be making money. The company could probably do slightly more business.
Wage subsidies would allow for the Pareto optimal distribution to exist while also paying the third worker a living wage. ^
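The footnote's arithmetic, sketched out; the numbers come from the footnote, except the living wage (and hence the subsidy amount), which is an assumption for illustration.

```python
# Numbers from the footnote above.
minimum_wage = 14.0
affordable_wage = 7.0   # all the company can pay a third worker

# Hiring at $7/hour is illegal under a $14/hour minimum wage,
# so the Pareto improvement is blocked.
hire_is_legal = affordable_wage >= minimum_wage

# A wage subsidy tops the company's $7/hour up to a living wage
# (ASSUMED to be $15/hour), making the hire viable and leaving
# the company, the overworked employees, and the new hire better off.
living_wage = 15.0
subsidy_per_hour = living_wage - affordable_wage
worker_receives = affordable_wage + subsidy_per_hour

print(hire_is_legal, worker_receives)
```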
 Over-consumption here means: “using more of it than you would if you have to properly compensate people for their disutility”, not the more commonly used definition that merely means “consuming more than is sustainable”.
An illustration of the difference: imagine a world with very expensive carbon-capture systems that mitigate global warming, paid for via flat taxes. There, you could be over-consuming gasoline in the economic sense – if you were paying a share of the carbon-capture costs commensurate with your use, you'd use less – while not consuming an amount of gasoline liable to lead to environmental catastrophe, even if everyone consumed a similar amount. ^
 For example, I spent six times as much as the median Canadian on books last year, despite the fact that there’s a perfectly good library less than five minutes from my house. I’m not particularly proud of this, but it made me happy. ^
 I am aware of the common rejoinder to this sort of thinking, which is basically summed up as “sure, a sports car doesn’t directly feed anyone, but it does feed the workers who made it”. It is certainly true that heavily taxing luxury items will probably put some people out of work in the industries that make them. But as Scott Sumner points out, it is impossible to meaningfully fix consumptive inequality without hurting jobs that produce things for rich people. If you aren’t hurting these industries, you have not meaningfully changed consumptive inequality!
Note also that if we’re properly redistributing money from taxes that affect rich people, we’re not going to destroy jobs, just shift them to sectors that don’t primarily serve rich people. ^
Epistemic Status: Full of sweeping generalizations because I don’t want to make it 10x longer by properly unpacking all the underlying complexity.
[9 minute read]
In 2006, Dr. Atul Gawande wrote an article in The New Yorker about maternal care entitled "How Childbirth Went Industrial". It's an excellent piece from an author who consistently produces excellent pieces. In it, Gawande charts the rise of the C-section, from its origin as a technique so dangerous it was considered tantamount to murder (and consequently banned on living mothers), to its current place as one of the most common surgical procedures carried out in North American hospitals.
The C-section – and epidurals and induced labour – have become so common because obstetrics has become ruthlessly focused on maximizing the Apgar score of newborns. Along the way, the field ditched forceps (possibly better for the mother yet tricky to use or teach), a range of maneuvers for manually freeing trapped babies (likewise difficult), and general anesthetic (genuinely bad for infants, or at least for the Apgar scores of infants).
The C-section has taken the place of much of the specialized knowledge of obstetrics of old, not the least because it is easy to teach and easy for even relatively less skilled doctors to get right. When Gawande wrote the article, there was debate about offering women in their 39th week of pregnancy C-sections as an alternative to waiting for labour. Based on the stats, this hasn’t quite come to pass, but C-sections have become slightly more prevalent since the article was written.
I noticed two laments in the piece. First, Gawande wonders at the consequences of such an essential aspect of the human experience being increasingly (and, based on the studies showing forceps are just as good as C-sections, arguably unnecessarily) medicalized. Second, there's a sense throughout the article that difficult and hard-won knowledge is being lost.
The question facing obstetrics was this: Is medicine a craft or an industry? If medicine is a craft, then you focus on teaching obstetricians to acquire a set of artisanal skills—the Woods corkscrew maneuver for the baby with a shoulder stuck, the Lovset maneuver for the breech baby, the feel of a forceps for a baby whose head is too big. You do research to find new techniques. You accept that things will not always work out in everyone’s hands.
But if medicine is an industry, responsible for the safest possible delivery of millions of babies each year, then the focus shifts. You seek reliability. You begin to wonder whether forty-two thousand obstetricians in the U.S. could really master all these techniques. You notice the steady reports of terrible forceps injuries to babies and mothers, despite the training that clinicians have received. After Apgar, obstetricians decided that they needed a simpler, more predictable way to intervene when a laboring mother ran into trouble. They found it in the Cesarean section.
Medicine would not be the first industry to industrialize. The quasi-mythical King Ludd who gave us the phrase "Luddite" was said to be a weaver, put out of business by improved mechanical knitting machines. English programs turn out thousands of writers every year, all with an excellent technical command of the English language, but most with none of the emotive power of Gawande. Following the rules is good enough when you're writing for a corporation that fears to offend, or for technical clarity. But the best writers don't just know how to follow the rules. They know how and when to break them.
If Gawande was a student of military history, he’d have another metaphor for what is happening to medicine: warriors are being replaced by soldiers.
If you ever find yourself in possession of a spare hour and feel like being lectured breathlessly by a wide-eyed enthusiast, find your local military history buff (you can identify them by their collection of swords or antique guns) and ask them whether there’s any difference between soldiers and warriors.
You can go do this now, or I can fill in, having given this lecture many times myself.
Imagine your favourite (or least favourite) empire from history. You don’t get yourself an empire by collecting bottle caps. To create one, you need some kind of army. To staff your army, you have two options. Warriors, or soldiers.
(Of course, this choice isn’t made just by empires. Their neighbours must necessarily face the same conundrum.)
Warriors are the heroes of movies. They were almost always the product of training that began at a young age, and more often than not they were members of a special caste. Think medieval European knights, Japanese samurai, or the Hashashin fida’i. Warriors were notable for their eponymous mastery of war. A knight was expected to understand strategy and tactics, riding, shooting, fighting (both on foot and mounted), and wrestling. Warriors wanted to live up to their warrior ethos, which normally emphasized certain virtues, like courage and mercy (towards other warriors, that is – not towards any common peasant drafted to fight them).
Soldiers were whichever conscripts or volunteers someone could drill into a reasonable standard of military order. They knew only what they needed to complete their duties: perhaps one or two simple weapons, how to march in formation, how to cook, and how to repair some of their equipment. Soldiers just wanted to make it through the next battle alive. In service of this, they were often brutally efficient in everything they did. Fighting wasn’t an art to them – it was simple butchery, and the simpler and quicker the better. Classic examples of soldiers are the Roman legionaries, Greek hoplites, and Napoleon’s Grande Armée.
The techniques that soldiers learned were simple because they needed to be easy to teach to ignorant peasants on a mass scale in a short time. Warriors had their whole childhood for elaborate training.
(Or at least, that’s the standard line. In practice, things were never quite as clear-cut – veteran soldiers might have been as skilled as any warrior, for example. The general point stands, though: one on one, you would always have bet on a warrior over a soldier.)
But when you talk about armies, a funny thing happens. Soldiers dominated. Individually, they might have been kind of crap at what they did. Taken as a whole though, they were well-coordinated. They looked out for each other. They fought as a team. They didn’t foolishly break ranks, or charge headlong into the enemy. When Germanic warriors came up against Roman soldiers, they were efficiently butchered. The Germans went into battle looking for honour and perhaps a glorious death. The Romans happily gave them the latter and so lived (mostly) to collect their pensions. Whichever empire you thought about above almost certainly employed soldiers, not warriors.
It turns out that discipline and common purpose have counted for rather more in military history than simple strength of arms. I can think of no better example of this than the Satsuma Rebellion, which followed the Meiji Restoration. The few rebel samurai – wonderfully trained, and unholy terrors in single combat – were easily slaughtered by the Imperial conscripts, who knew little more than which end of a musket to point at the enemy.
The very fact that the samurai didn’t embrace the firing line is a point against them. Their warrior code, which esteemed individual skill, left them no room to adopt this devastating new technology. And no one could command them to take it up, because they were mostly prima donnas where their honour was concerned.
I don’t want to be too hard on warriors. They were actually an efficient solution to the problem of national defence for a population that was small and largely agrarian, lacked political cohesion or logistical capacity, or was otherwise incapable of supporting a large army. Under these circumstances, polities could not afford to keep a large population under arms at all times. This left them with a few choices: rely on temporary levies, who would be largely untrained; field a large professional army that paid for itself largely through raiding; or maintain a small, elite cadre of professional warriors.
All of these strategies had disadvantages. Levies tended to have very brittle morale, and calling up a large proportion of a population makes even a successfully prosecuted war economically devastating. Raiding tends to make your neighbours really hate you, leading to more conflicts. It can also be very bad for discipline and can backfire on your own population in lean times. Professional warriors will always be dwarfed in numbers by opponents using any other strategy.
Historically, it was never as simple as solely using just one strategy (e.g. European knights were augmented with and eventually supplanted by temporary levies), but there was a clear lean towards one strategy or another in most resource-limited historical polities. It took complex cultural technology and a well-differentiated economy to support a large force of full-time soldiers, and wherever these pre-conditions were lacking, you just had to make do with what you could get.
When conditions suddenly call for a struggle – whether against a foreign adversary, for profits, or against disease – it is useful to look at how many societal resources are thrown into the fight. When resources are scarce, we should expect to see either a few brilliant generalists or many poorly trained conscripts. When resources are thick on the ground, the amount that can usefully be spent on brilliant people quickly saturates, and the benefits of training your conscripts quickly accrue. From one direction or the other, you’ll approach the concept of soldiers.
Doctors as soldiers, not warriors: this is the concept Gawande is brushing up against in his essay. These new doctors will be more standardized, with less room for individual brilliance but more affordances for working well in teams. The prima donnas will be banished (they aren’t good team players, even when they’re brilliant). Dr. Gregory House might have been the model doctor in the Victorian age, or maybe even in the fifties, but I doubt any hospital would want him now. It may be that this standardization is just the thing we need to overcome persistent medical errors, improve outcomes across the board, and make populations healthier. But I can sympathize with the position that it might be costing us something beautiful.
In software development, where I work, a similar trend can be observed. Start-ups aggressively court ambitious generalists, for whom the freedom to build things their own way matters more than market-rate compensation and is a better incentive even than the lottery that is stock options. At start-ups, you’re likely to see languages that are “fun” to work with – often dynamically typed – even though these languages are generally considered less inherently comprehensible than their more “enterprise-friendly”, statically typed brethren.
It’s with languages like Java (or its Microsoft clone, C#) and C++ that companies like Google and Amazon build the underlying infrastructure that powers large tracts of the internet. Among the big pure software companies, Facebook is the odd one out for using PHP (and this choice required them to rewrite the code underlying the language from scratch to make it performant enough for their large load).
It’s also at larger companies that teamwork, design documents, and comprehensibility start to be very important. (There’s still room for superstars at all of the big “tech” companies; it’s only at companies further removed from tech, and therefore outside much of the competition for top talent, that being a good team player and writing comprehensible code might top brilliance as a qualifier.) This isn’t to say that no one hiring for top talent appreciates things like good documentation or comprehensibility – merely that it’s easy for a culture that esteems individual brilliance to overlook them as marks of competence.
Here the logic goes that anyone smart enough for the job will be smart enough to untangle the code of their predecessors. As anyone who’s been involved in the untangling can tell you, there’s a big difference between “smart enough to untangle this mess” and “inclined to wade through this genius’s spaghetti code to get to the part that needs fixing”.
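The gap between “smart enough to untangle” and “inclined to wade through” is partly about how much the code documents its own intent. As a minimal, hypothetical sketch (the class and method names here are invented for illustration, not taken from any real codebase), consider how much a statically typed Java signature tells the next maintainer before they read a single line of the body:

```java
import java.util.List;

public class Billing {
    // The signature alone tells a new team member what goes in
    // (a list of amounts, as long integers) and what comes out
    // (a single long total) -- and the compiler enforces it.
    public static long totalCents(List<Long> invoiceCents) {
        long total = 0;
        for (long cents : invoiceCents) {
            total += cents;
        }
        return total;
    }

    public static void main(String[] args) {
        // 1999 + 2550 = 4549
        System.out.println(totalCents(List.of(1999L, 2550L)));
    }
}
```

A dynamically typed equivalent would run just as happily with floats, strings, or nulls in the list, so the maintainer has to reconstruct the intent from call sites instead of reading it off the signature. That flexibility is part of what makes such languages “fun” for a lone generalist, and part of what makes them harder on the tenth person to touch the file.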
No doubt there exist countless other examples in fields I know nothing about.
The point of gathering all these examples and shoving them into my metaphor is this: I think there exist two important transitions that can occur when a society needs to focus a lot of energy on a problem. The transition from conscripts to soldiers isn’t very interesting, as it’s basically the outcome of a process of continuous improvement.
But the transition from warriors to soldiers is. It’s amazing that we can often get better results by replacing a few highly skilled generalists, who apply a lot of hard-won judgment, with a veritable army of less well-trained but highly regimented and organized specialists. It’s a powerful testament to the usefulness of group intelligence. Of course, sometimes (e.g. Google, or the Mongols) you get both, but these are rare happy accidents.
Being able to understand where this transition is occurring helps you understand where we’re putting effort. Understanding when it’s happening within your own sphere of influence can help you weather it.
Also note that this transition doesn’t only go in one direction. As manufacturing becomes less and less prevalent in North America, we may return to the distant past, when manufacturing stuff was only undertaken by very skilled artisans making unique objects.
Note the past tense throughout much of this essay; when I speak about soldiers and warriors, I’m referring only to times before the 1900s. I know comparatively little about how modern armies are set up.
Best of all were the Mongols, who combined the lifelong training of warriors with the discipline and organization of soldiers. When Mongols clashed with European knights in Hungary, their “dishonourable” tactics (feints, followed by feigned retreats and skirmishing) easily took the day. This was all made possible by a system of signal flags that allowed Subutai to command the whole battle from a promontory. European leaders, by contrast, were expected to show their bravery by being in the thick of the fighting, which gave them no overall control over their lines.
Historically, professional armies with good logistical support could somewhat pay for themselves by expanding an empire, which brought in booty and slaves. This is distinct from raiding (which does not seek to incorporate other territories) and has its own disadvantages (rebellion, over-extension, corruption, massive unemployment among unskilled labourers, etc.).
Previously I described regulation as a regressive tax. It may not kill jobs per se, but it certainly shifts them towards people with university degrees, largely at the expense of those without. I’m beginning to rethink that position; I’m increasingly worried that many types of regulation actually lead to a net loss of jobs. There remains a paucity of empirical evidence on this subject. Today I’m going to present a model (a convincing one, I believe) of how regulations could kill jobs, but I’d like to remind everyone that models are less important than evidence and should only be the focus of discussion in situations like this one, where the evidence is genuinely sparse.
Let’s assume that regulation has no first-order effect on jobs. All jobs lost through regulation (and make no mistake, there will be lost jobs) are offset by different jobs in regulatory compliance, by the jobs created when the compliance people spend the money they make, and so on to infinity. So far, this is all fine and dandy.
Talking to members of the local start-up community, I reckon that many small hardware start-ups spend the equivalent of an engineer’s salary on regulatory compliance every year. Instead of a hypothetical engineer (or marketer, or salesperson, etc.), they’re providing a salary to a lawyer, a technician at the FCC, or some other mid-level bureaucrat.
No matter how well this person does their job, they aren’t creating anything of value. There’s no chance that they’ll come up with or contribute to a revolutionary new product that drives a lot of economic growth and ends up creating dozens, hundreds, or (in very rare cases) thousands of jobs. An engineer could.
There are obviously many ways that even successful start-ups, with all the engineers they need, can fail to create jobs on net. They could disrupt an established industry in a way that causes layoffs at the existing participants (although it’s probably fallacious to believe that this will cause net job losses either, given the lump of labour fallacy). Also, something like 60% of start-ups fail. In the case of failure, money from wealthy investors is transferred to other people, and I doubt most people care whether the beneficiaries are engineers or compliance staff.
But discounting all that, I think what this boils down to is: when you’re paying an engineer, there’s a chance that the engineer will invent something that increases productivity and drives productivity growth (leading to cheaper prices and maybe even new industries previously thought impossible). When you pay someone in sales or marketing, you get a chance to get your product in front of customers and see it really take off. When you’re paying for regulatory compliance, you get an often-useless stamp of approval, or have to make expensive changes because some rent-seeking corporation got spurious requirements written into the regulation.
Or the regulatory agency catches a fatal flaw and averts a catastrophe. I’m not saying that never happens. Just that I think it’s much rarer than many people might believe. Seeing the grinding wheels of regulation firsthand has cured me of all my youthful idealistic approval for it. Sometimes consumers need to be protected from out of control profit-seeking, sure. But once you’ve been forced to actually do some regulatory compliance, you start to understand just how much regulation exists to prevent established companies from having to compete against new entrants. This makes everything more expensive and everyone but a few well-connected shareholders worse off.
Regulation has real trade-offs; there are definite goods, but also definite downsides. And now I think the downsides are even worse than I first predicted.