Or: the simplest ways of killing people tend to be the most effective.
A raft of articles came out during Defcon showing that security vulnerabilities exist in some pacemakers, vulnerabilities which could allow attackers to load a pacemaker with arbitrary code. This is obviously worrying if you have a pacemaker implanted. It is equally self-evident that it is better to live in a world where pacemakers cannot be hacked. But how much worse is it to live in this unfortunately hackable world? Are pacemaker hackings likely to become the latest crime spree?
Electrical grid hackings provide a sobering example. Despite years of warning that the American electrical grid is vulnerable to cyber-attacks, the greatest threat to America’s electricity infrastructure remains… squirrels.
Hacking, whether it’s of the electricity grid or of pacemakers, gets all the headlines. Meanwhile, fatty foods and squirrels do all the real damage.
For all the media attention that novel cyberpunk methods of murder get, they seem to be rather ineffective for actual murder, as demonstrated by the paucity of murder victims. I think this is rather generalizable. Simple ways of killing people are very effective but not very scary and so don’t garner much attention. On the other hand, particularly novel or baroque methods of murder cause a lot of terror, even if almost no one who is scared of them will ever die of them.
I often demonstrate this point by comparing two terrorist organizations: Al Qaeda and Daesh (the so-called Islamic State). Both of these groups are brutally inhumane, think nothing of murder, and are made up of some of the most despicable people in the world. But their methodology couldn’t be more different.
Al Qaeda has a taste for large, complicated, baroque plans that, when they actually work, cause massive damage and change how people see the world for years. 9/11 remains the single deadliest terror attack in recorded history. This is what optimizing for terror looks like.
On the other hand, when Al Qaeda’s plans fail, they seem almost farcical. There’s something grimly amusing about the time that Al Qaeda may have tried to weaponize the bubonic plague and instead lost over 40 members when they were infected and promptly died (the alternative theory, that they caught the plague because of squalid living conditions, looks only slightly better).
(Had Al Qaeda succeeded and killed even a single westerner with the plague, people would have been utterly terrified for months, even though the plague is relatively treatable by modern means and would have trouble spreading in notably flea-free western countries.)
Daesh, on the other hand, prefers simple attacks. When guns are available, their followers use them. When they aren’t, they’ll rent vans and plough them into crowds. Most of Daesh’s violence occurs in Syria and Iraq, where they once controlled territory with unparalleled brutality. This is another difference in strategy (as Al Qaeda is outward facing, focused mostly on attacking “The West”). Focusing on Syria and Iraq, where the government lacks a monopoly on violence and they could originally operate with impunity, Daesh racked up a body count that surpassed Al Qaeda’s.
While Daesh has been effective in terms of body count, they haven’t really succeeded (in the west) in creating the lasting terror that Al Qaeda did. This is perhaps a symptom of their quotidian methods of murder. No one walked around scared of a Daesh attack and many of their murders were lost in the daily churn of the news cycle – especially the ones that happened in Syria and Iraq.
I almost wonder if it is impossible for attacks or murders by “normal” means to cause much terror beyond those immediately affected. Could hacked pacemakers remain terrifying if as many people died of them as gunshots? Does familiarity with a form of death remove terror, or are some methods of death inherently more terrible and terrifying than others?
(It is probably the case that both are true, that terror is some function of surprise, gruesomeness, and brutality, such that some things will always terrify us, while others are horrible, but have long since lost their edge.)
Terror for its own sake (or because people believe it is the best path to some objective) must be a compelling option to some, because otherwise everyone would stick to simple plans whenever they think violence will help them achieve their aims. I don’t want to stereotype too much, but most people who go around being terrorists or murderers typically aren’t the brightest bulbs in the socket. The average killer doesn’t have the resources to hack your pacemaker and the average terrorist is going to have much better luck with a van than with a bomb. There are disadvantages to bombs! The average Pashtun farmer or disaffected mujahedeen is not a very good chemist and homemade explosives are dangerous even to skilled chemists. Accidental detonations abound. If there wasn’t some advantage in terror to be had, no one would mess around with explosives when guns and vans can be easily found.
(Perhaps this advantage is in a multiplier effect of sorts. If you are trying to win a violent struggle directly, you have to kill everyone who stands in your way. Some people might believe that terror can short-circuit this and let them scare away some of their potential opponents. Historically, this hasn’t always worked.)
In the face of actors committed to terror, we should remember that our risk of dying by a particular method is almost inversely related to how terrifying we find it. Notable intimidators like Vladimir Putin or the Mossad kill people with nerve gasses, polonium, and motorcycle delivered magnetic bombs to sow fear. I can see either of them one day adding hacked pacemakers to their arsenal.
If you’ve pissed off the Mossad or Putin and would like to die in some way other than a hacked pacemaker, then by all means, go get a different one. Otherwise, you’re probably fine waiting for a software update. If, in the meantime, you don’t want to die, maybe try ignoring headlines and instead not owning a gun and skipping French fries. Statistically, there isn’t much that will keep you safer.
Our biases make it hard for us to treat things that are easy to remember as uncommon, which no doubt plays a role here. I wrote this post like this – full of rambles, parentheses, and long-winded examples – to try and convey a difficult intuition: that we should discount, as unlikely to affect us, any method of murder that seems shocking but hard to pull off. Remember that most crimes are crimes of opportunity and most criminals are incompetent and you’ll never be surprised to hear that the three most common murder weapons are guns, knives, and fists.
[Epistemic Status: I am not an economist. I am fairly confident in my qualitative assessment, but there could be things I’ve overlooked.]
Vox has an interesting article on Elizabeth Warren’s newest economic reform proposal. Briefly, she wants to force corporations with more than $1 billion in revenue to apply for a charter of corporate citizenship.
This charter would make three far-reaching changes to how large companies do business. First, it would require businesses to consider customers, employees, and the community – instead of only their shareholders – when making decisions. Second, it would require that 40% of the seats on the board go to workers. Third, it would require 75% of shareholders and board members to authorize any corporate political activity.
Vox characterizes this as Warren’s plan to “save capitalism”. The idea is that it would force companies to do more to look out for their workers and less to cater to short-term profit maximization for Wall Street. Vox suggests that it would also result in a loss of about 25% of the value of the American stock market, which they characterize as no problem for the “vast majority” of people who rely on work, rather than the stock market, for income (more on that later).
Other supposed benefits of this plan include greater corporate respect for the environment, more innovation, less corporate political meddling, and a greater say for workers in their jobs. The whole 25% decrease in the value of the stock market can also be spun as a good thing, depending on your opinions on wealth destruction and wealth inequality.
I think Vox was too uncritical in its praise of Warren’s new plan. There are some good aspects of it – it’s not a uniformly terrible piece of legislation – but I think once a full accounting of the bad, the good, and the ugly is undertaken, it becomes obvious that it’s really good that this plan will never pass Congress.
I can see one way how this plan might affect normal workers – decreased purchasing power.
As I’ve previously explained when talking about trade, many countries will sell goods to America without expecting any goods in return. Instead, they take the American dollars they get from the sale and invest them right back in America. Colloquially, we call this the “trade deficit”, but it really isn’t a deficit at all. It’s (for many people) a really sweet deal.
Anything that makes American finance more profitable (like, say, a corporate tax cut) is liable to increase this effect, with the long-run consequence of making the US dollar more valuable and imports cheaper.
It’s these cheap imports that have enabled the incredibly wealthy North American lifestyle. Spend some time visiting middle class and wealthy people in Europe and you’ll quickly realize that everything is smaller and cheaper there. Wealthy Europeans own cars, houses, kitchen appliances and TVs that are all much more modest than what even middle class North Americans are used to.
Weakening shareholder rights and slashing the value of the stock market would make the American financial market generally less attractive. This would (especially if combined with Trump or Sanders style tariffs) lead to increased domestic inflation in the United States – inflation that would specifically target goods that have been getting cheaper as long as anyone can remember.
This is hard to talk about to Warren supporters as a downside, because many of them believe that we need to learn to make do with less – a position that is most common among a progressive class that conspicuously consumes experiences, not material goods. Suffice to say that many North Americans still derive pleasure and self-worth from the consumer goods they acquire and that making these goods more expensive is likely to cause a politically expensive backlash, of the sort that America has recently become acquainted with and that progressive America is terrified of.
(There’s of course also the fact that making appliances and cars more expensive would be devastating to anyone experiencing poverty in America.)
Inflation, when used for purposes like this one, is considered an implicit tax by economists. It’s a way for the government to take money from people without the accountability (read: losing re-election) that often comes with tax hikes. Therefore, it is disingenuous to claim that this plan is free, or involves no new taxes. The taxes are hidden, is all.
There are two other problems I see straight away with this plan.
The first is that it will probably have no real impact on how corporations contribute to the political process.
The Vox article echoes a common progressive complaint, that corporate contributions to politics are based on CEO class solidarity, made solely for the benefit of the moneyed elites. I think this model is inaccurate.
Corporate political spending instead tends to chase narrow, firm-specific advantages. From a shareholder value model, this makes sense. Lower corporate tax rates might benefit a company, but they really benefit all companies equally. They aren’t going to do much to increase the value of any one stock relative to any other (so CEOs can’t make claims of “beating the market”). Anti-competitive laws, implicit subsidies, or even blatant government aid, on the other hand, are highly localized to specific companies (and so make the CEO look good when profits increase).
When subsidies are impossible, companies can still try and stymie legislation that would hurt their business.
This was the goal of the infamous Lawyers In Cages ad. It was run by an alliance of fast food chains and meat producers, with the goal of drying up donations to the SPCA, which had been running very successful advocacy campaigns that threatened to lead to improved animal cruelty laws, laws that would probably be used against the incredibly inhumane practice of factory farming and thereby hurt industry profits.
Here’s the thing: if you’re one of the worker representatives on the board at one of these companies, you’re probably going to approve political spending that is all about protecting the company.
The market can be a rough place and when companies get squeezed, workers do suffer. If the CEO tells you that doing some political spending will land you allies in congress who will pass laws that will protect your job and increase your paycheck, are you really going to be against it?
The ugly fact is that when it comes to rent-seeking and regulation, the goals of employees are often aligned with the goals of employers. This obviously isn’t true when the laws are about the employees (think minimum wage), but I think this isn’t what companies are breaking the bank lobbying for.
The second problem is that having managers with divided goals tends to go poorly for everyone who isn’t the managers.
Being upper management in a company is a position that provides great temptations. You have access to lots of money and you don’t have that many people looking over your shoulder. A relentless focus on profit does have some negative consequences, but it also keeps your managers on task. Profit represents an easy way to hold a yardstick to management performance. When profit is low, you can infer that your managers are either incompetent, or corrupt. Then you can fire them and get better ones.
Writing in Filthy Lucre, leftist academic Joseph Heath explains how the sort of socially-conscious enterprise Warren envisions has failed before:
The problem with organizations that are owned by multiple interest groups (or “principals”) is that they are often less effective at imposing discipline upon managers, and so suffer from higher agency costs. In particular, managers perform best when given a single task, along with a single criterion for the measurement of success. Anything more complicated makes accountability extremely difficult. A manager told to achieve several conflicting objectives can easily explain away the failure to meet one as a consequence of having pursued some other. This makes it impossible for the principals to lay down any unambiguous performance criteria for the evaluation of management, which in turn leads to very serious agency problems.
In the decades immediately following the Second World War, many firms in Western Europe were either nationalized or created under state ownership, not because of natural monopoly or market failure in the private sector, but out of a desire on the part of governments to have these enterprises serve the broader public interest… The reason that the state was involved in these sectors followed primarily from the thought that, while privately owned firms pursued strictly private interests, public ownership would be able to ensure that these enterprises served the public interest. Thus managers in these firms were instructed not just to provide a reasonable return on the capital invested, but to pursue other, “social” objectives, such as maintaining employment or promoting regional development.
But something strange happened on the road to democratic socialism. Not only did many of these corporations fail to promote the public interest in any meaningful way, many of them did a worse job than regulated firms in the private sector. In France, state oil companies freely speculated against the national currency, refused to suspend deliveries to foreign customers in times of shortage, and engaged in predatory pricing. In the United States, state-owned firms have been among the most vociferous opponents of enhanced pollution controls, and state-owned nuclear reactors are among the least safe. Of course, these are rather dramatic examples. The more common problem was simply that these companies lost staggering amounts of money. The losses were enough, in several cases, to push states like France to the brink of insolvency, and to prompt currency devaluations. The reason that so much money was lost has a lot to do with a lack of accountability.
Heath goes on to explain that basically all governments were forced to abandon these extra goals long before the privatizations of the ’80s. Centre-left or centre-right, no government could tolerate the shit-show that companies with competing goals became.
This is the kind of thing Warren’s plan would bring back. We’d once again be facing managers with split priorities who would plow money into vanity projects, office politics, and their own compensation while using the difficulty of meeting all of the goals in Warren’s charter as a reason to escape shareholder lawsuits. It’s possible that this cover for incompetence could, in the long run, damage stock prices much more than any other change presented in the plan.
The shift in comparative advantage that this plan would precipitate within the American economy won’t come without benefits. Just as Trump’s corporate tax cut makes American finance relatively more appealing and will likely lead to increased manufacturing job losses, a reduction in deeply discounted goods from China will likely lead to job losses in finance and job gains in manufacturing.
This would necessarily have some effect on income inequality in the United States, entirely separate from the large effect on wealth inequality that any reduction in the stock market would spur. You see, finance jobs tend to be very highly paid and go to people with relatively high levels of education (the sorts of people who probably could go do something else if their sector sees problems). Manufacturing jobs, on the other hand, pay decently well and tend to go to people with much less education (and also with correspondingly fewer options).
This all shakes out to an increase in middle class wages and a decrease in the wages of the already rich.
(Isn’t it amusing that Warren is the only US politician with a credible plan to bring back manufacturing jobs, but doesn’t know to advertise it as such?)
As I mentioned above, we would also see fewer attacks on labour laws and organized labour spearheaded by companies. I’ll include this as a positive, although I wonder if these attacks would really stop if deprived of corporate money. I suspect that the owners of corporations would keep them up themselves.
I must also point out that Warren’s plan would certainly be helpful when it comes to environmental protection. Having environmental protection responsibilities laid out as just as important as fiduciary duty would probably make it easy for private citizens and pressure groups to take enforcement of environmental rules into their own hands via the courts, even when their state EPA is slow out of the gate. This would be a real boon to environmental groups in conservative states and probably bring some amount of uniformity to environmental protection efforts.
Looking at the expected yields on large public-sector pension funds makes it pretty clear that they’re invested in the stock market (or something similarly risky). You don’t get 7.5% yearly yields from buying Treasury Bills.
Assuming the 25% decrease in nominal value given in the article is true (I suspect the change in real value would be higher), Warren’s plan would create a pension shortfall of $750 billion – or about 18% of the current US Federal Budget. And that’s just the hit to the 30 largest public-sector pensions. Throw in private sector pensions and smaller pensions and it isn’t an exaggeration to say that this plan could cost pensions more than a trillion dollars.
This shortfall needs to be made up somehow – either delayed retirement, taxpayer bailouts, or cuts to benefits. Any of these will be expensive, unpopular, and easy to track back to Warren’s proposal.
Furthermore, these plans are already in trouble. I calculated the average funding ratio at 78%, meaning that there’s already 22% less money in these pensions than there needs to be to pay out benefits. A 25% haircut would bring the pensions down to about 60% funded. We aren’t talking a small or unnoticeable potential cut to benefits here. Warren’s plan requires ordinary people relying on their pensions to suffer, or it requires a large taxpayer outlay (which, you might remember, it is supposed to avoid).
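The pension arithmetic here is easy to sketch. A minimal example in Python, assuming roughly $3 trillion in assets across the 30 largest public-sector pensions (the figure implied by a $750 billion shortfall from a 25% decline); the exact asset total is an illustrative assumption, not a figure from the article:

```python
# Rough arithmetic for the pension hit described above. Figures are
# illustrative: ~$3 trillion in assets, ~78% average funding ratio.
assets = 3.0e12            # current assets, in dollars (assumed)
funding_ratio = 0.78       # assets / liabilities
liabilities = assets / funding_ratio

haircut = 0.25             # assumed stock-market decline
new_assets = assets * (1 - haircut)

shortfall = assets - new_assets
new_funding_ratio = new_assets / liabilities

print(f"Shortfall: ${shortfall / 1e9:.0f} billion")     # Shortfall: $750 billion
print(f"New funding ratio: {new_funding_ratio:.1%}")    # New funding ratio: 58.5%
```

The second line is the whole argument in miniature: a 25% haircut on a fund that is already only 78% funded leaves it roughly 60% funded.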
This isn’t even getting into the dreadfully underfunded world of municipal pensions, which are appallingly managed and chronically underfunded. If there’s a massive unfunded liability in state pensions caused by federal action, you can bet that the Feds will leave it to the states to sort it out.
And if the states sort it out rather than ignoring it, you can bet that one of the first things they’ll do is cut transfers to municipalities to compensate.
This seems to be how budget cuts always go. It’s unpopular to cut any specific program, so instead you cut your transfers to other layers of governments. You get lauded for balancing the books and they get to decide what to cut. The federal government does this to states, states do it to cities, and cities… cities are on their own.
In a worst-case scenario, Warren’s plan could create unfunded pension liabilities that states feel compelled to plug, paid for by shafting the cities. Cities will then face a double whammy: their own pension liabilities will put them in a deep hole. A drastic reduction in state funding will bury them. City pensions will be wiped out and many cities will go bankrupt. Essential services, like fire-fighting, may be impossible to provide. It would be a disaster.
The best-case scenario, of course, is just that a bunch of retirees see a huge chunk of their income disappear.
It is easy to hate on shareholder protection when you think it only benefits the rich. But that just isn’t the case. It also benefits anyone with a pension. Your pension, possibly underfunded and a bit terrified of that fact, is one of the actors pushing CEOs to make as much money as possible. It has to if you’re to retire someday.
Vox is ultimately wrong about how affected ordinary people are when the stock market declines and because of this, their enthusiasm for this plan is deeply misplaced.
To some extent, Warren’s plan starts out looking much less appealing if you (like me) don’t have “Wall Street is too focused on the short term” as a foundational assumption.
I am very skeptical of claims that Wall Street is too short-term focused. Matt Levine gives an excellent run-down of why you should be skeptical as well. The very brief version is that complaints about short-termism normally come from CEOs and it’s maybe a bad idea to agree with them when they claim that everything will be fine if we monitor them less.
I’d love to show this in chart form, but in real life the American dollar is also influenced by things like nuclear war worries and trade war realities. Any increase in the value of the USD caused by the GOP tax cut has been drowned out by these other factors.
Canada benefits from a similar effect, because we also have a very good financial system with strong property rights and low corporate taxes.
They also tend to leave international flights out of lists of things that we need to stop if we’re going to handle climate change, but that’s a rant for another day.
I largely think that Marxist style class solidarity is a pleasant fiction. To take just one example, someone working a minimum wage grocery store job is just as much a member of the “working class” as a dairy farmer. But when it comes to supply management, a policy that restricts competition and artificially increases the prices of eggs and dairy, these two individuals have vastly different interests. Many issues are about distribution of resources, prestige, or respect within a class and these issues make reasoning that assumes class solidarity likely to fail.
These goals could, of course, be accomplished with tax policy, but this is America we’re talking about. You can never get the effect you want in America simply by legislating for it. Instead you need to set up a Rube Goldberg machine and pray for the best.
Any decline in stocks should cause a similar decline in return on bonds over the long term, because bond yields fall when stocks fall. There’s a set amount of money out there being invested. When one investment becomes unavailable or less attractive, similar investments are substituted. If the first investment is big enough, this creates an excess of demand, which allows the seller to get better terms.
Let’s express these two beliefs as separate propositions:
It is very unlikely that AI and AGI will pose an existential risk to human society.
It is very likely that AI and AGI will result in widespread unemployment.
Can you spot the contradiction between these two statements? In the common imagination, it would require an AI that can approximate human capabilities to drive significant unemployment. Given that humans are the largest existential risk to other humans (think thermonuclear war and climate change), how could equally intelligent and capable beings, bound to subservience, not present a threat?
People who’ve read a lot about AI or the labour market are probably shaking their heads right now. This explanation for the contradiction, while evocative, is a strawman. I do believe that at most one (and possibly neither) of the propositions I listed above is true and that the organizations peddling both cannot be trusted. But the reasoning is a bit more complicated than the standard line.
First, economics and history tell us that we shouldn’t be very worried about technological unemployment. There is a fallacy called “the lump of labour”, which describes the common belief that there is a fixed amount of labour in the world, with mechanical aid cutting down the amount of labour available to humans and leading to unemployment.
That this idea is a fallacy is evidenced by the fact that we’ve automated the crap out of everything since the start of the industrial revolution, yet the US unemployment rate is 3.9%. The unemployment rate hasn’t been this low since the height of the Dot-com boom, despite 18 years of increasingly sophisticated automation. Writing five years ago, when the unemployment rate was still elevated, Eliezer Yudkowsky claimed that slow NGDP growth was a more likely culprit for the slow recovery from the Great Recession than automation.
With the information we have today, we can see that he was exactly right. The US has had steady NGDP growth without any sudden downward spikes since mid-2014. This has corresponded to a constantly improving unemployment rate (it will obviously stop improving at some point, but if history is any guide, this will be because of a trade war or banking crisis, not automation). This improvement in the unemployment rate has occurred even as more and more industrial robots come online, the opposite of what we’d see if robots harmed job growth.
I hope this presents a compelling empirical case that the current level (and trend) of automation isn’t enough to cause widespread unemployment. The theoretical case comes from the work of David Ricardo, a 19th century British economist.
Ricardo did a lot of work in the early economics of trade, where he came up with the theory of comparative advantage. I’m going to use his original framing which applies to trade, but I should note that it actually applies to any exchange where people specialize. You could just as easily replace the examples with “shoveled driveways” and “raked lawns” and treat it as an exchange between neighbours, or “derivatives” and “software” and treat it as an exchange between firms.
The original example is rather older though, so it uses England and its close ally Portugal as the cast and wine and cloth as the goods. It goes like this: imagine that the world economy is reduced to two countries (England and Portugal), each producing two goods (wine and cloth). Portugal is uniformly more productive.
Hours of work to produce one unit:

             Cloth   Wine
England      100     120
Portugal     90      80
Let’s assume people want cloth and wine in equal amounts and everyone currently consumes one unit per month. This means that the people of Portugal need to work 170 hours each month to meet their consumption needs and the people of England need to work 220 hours per month to meet their consumption needs.
(This example has the added benefit of showing another reason we shouldn’t fear productivity. England requires more hours of work each month, but in this example, that doesn’t mean less unemployment. It just means that the English need to spend more time at work than the Portuguese. The Portuguese have more time to cook and spend time with family and play soccer and do whatever else they want.)
If both countries traded with each other, treating cloth and wine as valuable in relation to how long they take to create (within that country) something interesting happens. You might think that Portugal makes a killing, because it is better at producing things. But in reality, both countries benefit roughly equally as long as they trade optimally.
What does an optimal trade look like? Well, England will focus on creating cloth and it will trade each unit of cloth it produces to Portugal for 9/8 barrels of wine, while Portugal will focus on creating wine and will trade this wine to England for 6/5 units of cloth. To meet the total demand for cloth, the English need to work 200 hours. To meet the total demand for wine, the Portuguese will have to work for 160 hours. Both countries now have more free time.
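The arithmetic above can be checked with a few lines of Python. The per-unit hour figures (100 and 120 for England, 90 and 80 for Portugal) are the classic Ricardo numbers implied by the 220-, 170-, 200-, and 160-hour totals in the text:

```python
# Hours of labour needed to produce one unit of each good.
hours = {
    "England":  {"cloth": 100, "wine": 120},   # 220 hours/month total
    "Portugal": {"cloth": 90,  "wine": 80},    # 170 hours/month total
}

# Autarky: each country makes one unit of each good for itself.
autarky = {country: sum(goods.values()) for country, goods in hours.items()}
print(autarky)  # {'England': 220, 'Portugal': 170}

# Specialization: England makes all the cloth (2 units), Portugal all the wine.
specialized = {
    "England":  2 * hours["England"]["cloth"],    # 200 hours
    "Portugal": 2 * hours["Portugal"]["wine"],    # 160 hours
}
print(specialized)  # {'England': 200, 'Portugal': 160}

# Domestic exchange rates give the terms of trade quoted above.
wine_per_cloth_portugal = hours["Portugal"]["cloth"] / hours["Portugal"]["wine"]
cloth_per_wine_england = hours["England"]["wine"] / hours["England"]["cloth"]
print(wine_per_cloth_portugal, cloth_per_wine_england)  # 1.125 1.2  (i.e. 9/8 and 6/5)
```

Both countries end up working fewer hours for the same consumption, which is the whole point of the example.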
Perhaps workers in both countries are paid hourly wages, or perhaps they get bored of fun quickly. They could also continue to work the same number of hours, which would result in an extra 0.2 units of cloth and an extra 0.125 units of wine.
This surplus could be stored up against a future need. Or it could be that people only consumed one unit of cloth and one unit of wine each because of the scarcity in those resources. Add some more production in each and perhaps people will want more blankets and more drunkenness.
What happens if there is no shortage? If people don’t really want any more wine or any more cloth (at least at the prices they’re being sold at) and the producers don’t want goods piling up, this means prices will have to fall until every piece of cloth and barrel of wine is sold (when the price drops so that this happens, we’ve found the market clearing price).
If there is a downward movement in price and if workers don’t want to cut back their hours or take a pay cut (note that because cloth and wine will necessarily be cheaper, this will only be a nominal pay cut; the amount of cloth and wine the workers can purchase will necessarily remain unchanged) and if all other costs of production are totally fixed, then it does indeed look like some workers will be fired (or have their hours cut).
So how is this an argument against unemployment again?
Well, here the simplicity of the model starts to work against us. When there are only two goods and people don’t really want more of either, it will be hard for anyone laid off to find new work. But in the real world, there are an almost infinite number of things you can sell to people, matched only by our boundless appetite for consumption.
To give just one trivial example, an oversupply of cloth and falling prices means that tailors can begin to do bolder and bolder experiments, perhaps driving more demand for fancy clothes. Some of the cloth makers can get into this market as tailors and replace their lost jobs.
(When we talk about the need for fewer employees, we assume the least productive employees will be fired. But I’m not sure that’s correct. What if, instead, the most productive or most potentially productive employees leave for greener pastures?)
Automation making some jobs vastly more efficient functions similarly. Jobs are displaced, not lost. Even when whole industries dry up, there’s little to suggest that we’re running out of jobs people can do. One hundred years ago, anyone who could afford to pay a full-time staff had one. Today, only the wealthiest do. That’s one whole field that could employ thousands or millions of people, if automation pushed on other jobs such that domestic service became one of the places where humans had a very high comparative advantage.
This points to what might be a trend: as automation makes many things cheaper and (for some people) easier, there will be many who long for a human touch (would you want the local funeral director’s job to be automated, even if it was far cheaper?). Just because computers do many tasks cheaper or with fewer errors doesn’t necessarily mean that all (or even most) people will rather have those tasks performed by computers.
No matter how you manipulate the numbers I gave for England and Portugal, you’ll still find a net decrease in total hours worked if both countries trade based on their comparative advantage. Let’s demonstrate by comparing England to a hypothetical hyper-efficient country called “Automatia”.
Hours of work to produce

|           | One unit of cloth | One unit of wine |
|-----------|-------------------|------------------|
| England   | 100               | 120              |
| Automatia | 2                 | 1                |
Automatia is 50 times as efficient as England when it comes to producing cloth and 120 times as efficient when it comes to producing wine. Its citizens need to spend only 3 hours tending the machines to get one unit of each (two hours for the cloth, one for the wine), compared to the 220 hours the English need to toil.
If they trade with each other, with England focusing on cloth and Automatia focusing on wine, then there will still be a drop of 21 hours of labour-time. England will save 20 hours by shifting production from wine to cloth, and Automatia will save one hour by switching production from cloth to wine.
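The arithmetic above can be sketched in a few lines of Python. The per-good hours for England (100 for cloth, 120 for wine) are the only split consistent with the 220-hour total and the efficiency ratios just given:

```python
# Hours of labour needed to produce one unit of each good.
hours = {
    "England":   {"cloth": 100, "wine": 120},
    "Automatia": {"cloth": 2,   "wine": 1},
}

def hours_saved(country, specialty, other):
    """Hours saved by making two units of the specialty good (trading one
    away for a unit of the other good) versus making one unit of each."""
    before = hours[country][specialty] + hours[country][other]
    after = 2 * hours[country][specialty]
    return before - after

england = hours_saved("England", "cloth", "wine")      # 20 hours
automatia = hours_saved("Automatia", "wine", "cloth")  # 1 hour
print(england + automatia)  # 21 hours saved in total
```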
Interestingly, Automatia saved a greater percentage of its time than either Portugal or England did, even though Automatia is vastly more efficient. This shows something interesting in the underlying math. The percent of their time a person or organization saves engaging in trade isn’t related to any ratio in production speeds between it and others. Instead, it’s solely determined by the productivity ratio between its most productive tasks and its least productive ones.
Now, we can’t always reason in percentages. At a certain point, people expect to get the things they paid for, which can make manufacturing times actually matter (just ask anyone who’s had to wait for a Kickstarter project that was scheduled to deliver in February – right when almost all manufacturing in China stops for the Chinese New Year and the unprepared see their schedules slip). When we’re reasoning in absolute numbers, we can see that the absolute amount of time saved does scale with the difference in efficiency between the two traders. Here, 21 hours were saved, 30% fewer than the 30 hours England and Portugal saved.
When you’re already more efficient, there’s less time for you to save.
This decrease in saved time did not hit our market participants evenly. England saved just as much time as it would trading with Portugal (which shows that the change in hours worked within a country or by an individual is entirely determined by the labour difference between low-advantage and high-advantage domestic sectors), while the more advanced participant (Automatia) saved 9 fewer hours than Portugal.
All of this is to say: if real live people are expecting real live goods and services within a time limit, it might be possible for humans to be displaced in almost all sectors by automation. Here, human labour would become entirely ineligible for many tasks, or the bar to human entry would exclude almost all. For this to happen, AI would have to be vastly more productive than us in almost every sector of the economy, and humans would have to prefer this productivity or other ancillary benefits of AI over any value that a human could bring to the transaction (like kindness, legal accountability, or status).
This would definitely be a scary situation, because it would imply AI systems that are vastly more capable than any human. Given that this is well beyond our current level of technology, and that Moore’s law, which has previously been instrumental in technological progress, is drying up, we would almost certainly need to use weaker AI to design these sorts of systems. There’s no evidence that merely human performance in automating jobs will get us anywhere close to such a point.
If we’re dealing with recursively self-improving artificial agents, the risk is less “they will get bored of their slave labour and throw off the yoke of human oppression” and more “AI will be narrowly focused on optimizing for a specific task and will get better and better at optimizing for this task, to the point that we will all be killed when they turn the world into a paperclip factory”.
There are two reasons AI might kill us as part of their optimisation process. The first is that we could be a threat. Any hyper-intelligent AI monomaniacally focused on a goal could realize that humans might fear and attack it (or modify it to have different goals, which it would have to resist, given that a change in goals would conflict with its current goals) and decide to launch a pre-emptive strike. The second reason is that such an AI could wish to change the world’s biosphere or land usage in such a way as would be inimical to human life. If all non-marginal land was replaced by widget factories and we were relegated to the poles, we would all die, even if no ill will was intended.
It isn’t enough to just claim that any sufficiently advanced AI would understand human values. How is this supposed to happen? Even humans can’t enumerate human values and explain them particularly well, let alone express them in the sort of decision matrix or reinforcement environment that we currently use to create AI. It is not necessarily impossible to teach an AI human values, but all evidence suggests it will be very very difficult. If we ignore this challenge in favour of blind optimization, we may someday find ourselves converted to paperclips.
It is of course perfectly acceptable to believe that AI will never advance to the point where that becomes possible. Maybe you believe that AI gains have been solely driven by Moore’s Law, or that true artificial intelligence is impossible. I’m not sure this viewpoint isn’t correct.
But if AI will never be smart enough to threaten us, then I believe the math should work out such that it is impossible for AI to do everything we currently do or can ever do better than us. Absent such overpoweringly advanced AI, the Ricardo comparative advantage principles should continue to hold true and we should continue to see technological unemployment remain a monster under the bed: frequently fretted about, but never actually seen.
This is why I believe those two propositions I introduced way back at the start can’t both be true, and why I feel the burden of proof is on anyone who believes both to explain why economics has suddenly stopped working.
A related criticism of improving AI is that it could lead to ever increasing inequality. If AI drives ever increasing profits, we should expect an increasing share of these to go to the people who control AI, which presumably will be people already rich, given that the development and deployment of AI is capital intensive.
There are three reasons why I think this is a bad argument.
Second, I’m increasingly of the belief that inequality in the US is rising partially because the Fed’s current low inflation regime depresses real wage growth. Whether because of fear of future wage shocks or some other effect, monetary history suggests that higher inflation fairly consistently leads to higher wage growth, even after accounting for that inflation.
Third, I believe that inequality is a political problem amenable to political solutions. If the rich are getting too rich in a way that is leading to bad social outcomes, we can just tax them more. I’d prefer we do this by making conspicuous consumption more expensive, but really, there are a lot of ways to tax people and I don’t see any reason why we couldn’t figure out a way to redistribute some amount of wealth if inequality gets worse and worse.
One of the best things about taking physics classes is that the equations you learn are directly applicable to the real world. Every so often, while reading a book or watching a movie, I’m seized by the sudden urge to check it for plausibility. A few scratches on a piece of paper later and I will generally know one way or the other.
One of the most amusing things I’ve found doing this is that the people who come up with the statistics for Pokémon definitely don’t have any sort of education in physics.
Take Onix. Onix is a rock/ground Pokémon renowned for its large size and sturdiness. Its physical statistics reflect this. It’s 8.8 metres (28′) long and weighs 210 kg (463 lbs).
Surely such a large and tough Pokémon should be very, very dense, right? Density is such an important tactile cue for us. Don’t believe me? Pick up a large piece of solid metal. Its surprising weight will make you take it seriously.
Let’s check if Onix would be taken seriously, shall we? Density is equal to mass divided by volume. We use the symbol ρ to represent density, which gives us the following equation: ρ = m / V.
We already know Onix’s mass. Now we just need to calculate its volume. Luckily, Onix is pretty cylindrical, so we can approximate it with a cylinder. The equation for the volume of a cylinder is pretty simple: V = πr²h.
Where π is the ratio between the circumference of a circle and its diameter (approximately 3.1415…, no matter what Indiana says), r is the radius of the cylinder (always one half the diameter), and h is the height of the cylinder.
Given that we know Onix’s height, we just need its diameter. Luckily the Pokémon TV show gives us a sense of scale.
Judging by the image, Onix probably has an average diameter somewhere around a metre (3 feet for the Americans). This means Onix has a radius of 0.5 metres and a height of 8.8 metres. When we put these into our equation, we get: V = π × (0.5 m)² × 8.8 m.
For a volume of approximately 6.9 m³. To get a comparison, I turned to Wolfram Alpha, which told me that this is about 40% of the volume of a gray whale or a freight container (which incidentally implies that gray whales are about the size of standard freight containers).
Armed with a volume, we can calculate a density: ρ = 210 kg ÷ 6.9 m³ ≈ 30.4 kg/m³.
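Here’s the whole calculation in one quick sketch (the one-metre average diameter is my estimate from the screenshot, not an official stat):

```python
import math

mass_kg = 210    # Onix's listed weight
length_m = 8.8   # Onix's listed length, used as the cylinder's height
radius_m = 0.5   # assumed average radius, from the one-metre diameter guess

volume_m3 = math.pi * radius_m**2 * length_m  # V = πr²h
density = mass_kg / volume_m3                 # ρ = m / V

print(round(volume_m3, 1))  # 6.9 (m³)
print(round(density, 1))    # 30.4 (kg/m³)
```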
Okay, so we know that Onix is 30.4 kg/m³, but what does that mean?
Well it’s currently hard to compare. I’m much more used to seeing densities of sturdy materials expressed in tonnes per cubic metre or grams per cubic centimetre than I am seeing them expressed in kilograms per cubic metre. Luckily, it’s easy to convert between these.
There are 1,000 kilograms in a tonne. If we divide our density by a thousand, we can calculate a new density for Onix of 0.0304 t/m³.
How does this fit in with common materials, like wood, Styrofoam, water, stone, and metal?
From this chart, you can see that Onix’s density is eerily close to Styrofoam’s. Even the notoriously light balsa wood is five times denser than it. Actual rock is about 85 times denser. If Onix were made of granite, it would weigh 18 tonnes, much heavier than even Snorlax (the heaviest of the original Pokémon at 460 kg).
While most people wouldn’t be able to pick Onix up (it may not be dense, but it is big), it wouldn’t be impossible to drag it. Picking up part of it would feel disconcertingly light, like picking up an aluminum ladder or carbon fibre bike, only more so.
How did the creators of Pokémon accidentally bestow one of the most famous of their creations with a hilariously unrealistic density?
I have a pet theory.
I went to school for nanotechnology engineering. One of the most important things we looked into was how equations scaled with size.
Humans are really good at intuiting linear scaling. When something scales linearly, every twofold change in one quantity brings about a twofold change in another. Time and speed scale linearly (albeit inversely). Double your speed and the trip takes half the time. This is so simple that it rarely requires explanation.
Unfortunately for our intuitions, many physical quantities don’t scale linearly. These were the cases that were important for me and my classmates to learn, because until we internalized them, our intuitions were useless on the nanoscale. Many forces, for example, scale such that they become incredibly strong incredibly quickly at small distances. This leads to nanoscale systems exhibiting a stickiness that is hard on our intuitions.
It isn’t just forces that have weird scaling though. Geometry often trips people up too.
In geometry, perimeter is the only quantity I can think of that scales linearly with size. Double the length of the sides of a square and the perimeter doubles. The area, however, does not. Area is quadratically related to side length. Double the side length of a square and you’ll find the area quadruples. Triple it and the area increases nine times. Area varies with the square of the length, a property that isn’t just true of squares. The area of a circle is just as tied to the square of its radius as a square’s is to the square of its side length.
Volume is even trickier than area. It scales with the third power of the size. Double the side length of a cube and its volume increases eight-fold. Triple it, and you’ll see 27 times the volume. Volume increases with the cube of the length (which, again, works for shapes other than cubes).
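These scaling rules are easy to write down directly; here’s a small sketch, using the face of a unit cube as the example shape:

```python
def rescale(perimeter, area, volume, factor):
    """Scale a shape's linear size by `factor`: perimeter grows linearly,
    area with the square of the factor, volume with the cube."""
    return perimeter * factor, area * factor**2, volume * factor**3

# A unit cube (face perimeter 4, face area 1, volume 1), doubled and tripled:
print(rescale(4, 1, 1, 2))  # (8, 4, 8)
print(rescale(4, 1, 1, 3))  # (12, 9, 27)
```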
If you look at the weights of Pokémon, you’ll see that the ones that are the size of humans have fairly realistic weights. Sandslash is the size of a child (it stands 1m/3′ high) and weighs a fairly reasonable 29.5kg.
(This only works for Pokémon really close to human size. I’d hoped that Snorlax would be about as dense as marshmallows so I could do a fun comparison, but it turns out that marshmallows are four times as dense as Snorlax – despite marshmallows only having a density of ~0.5 t/m³.)
Beyond these touchstones, you’ll see that the designers of Pokémon increased their weight linearly with size. Onix is a bit more than eight times as long as Sandslash and weighs seven times as much.
Unfortunately for realism, weight is just density times volume and, as I just said, volume increases with the cube of length. Onix shouldn’t weigh seven or even eight times as much as Sandslash. At a minimum, its weight should be eight times eight times eight times Sandslash’s: a full 512 times more.
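We can check the designers’ apparent linear scaling against proper cubic scaling using the listed stats (here with the true 8.8× length ratio rather than the rounded 8×):

```python
sandslash_kg, sandslash_m = 29.5, 1.0  # Sandslash's listed weight and height
onix_m = 8.8                           # Onix's listed length

# If weight scaled linearly with length (what the designers seem to have done):
linear_kg = sandslash_kg * (onix_m / sandslash_m)

# If weight scaled with the cube of length (what physics demands):
cubic_kg = sandslash_kg * (onix_m / sandslash_m) ** 3

print(round(linear_kg))  # 260 -- close to Onix's listed 210 kg
print(round(cubic_kg))   # 20103 -- about 20 tonnes
```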
Indeed, the death rate in surgery is almost uniquely high among regulated professions. One person has died in a commercial aviation accident in the US in the last nine years. Structural engineering related accidents killed at most 251 people in the US in 2016, and only approximately 4% of residential structure failures in the US occur due to deficiencies in design.
It isn’t accidental that Canada and America no longer see many plane crashes or structural collapses. Both professions have been rocked by events that made them realize they needed to improve their safety records.
The licensing of professional engineers and the Iron Ring ceremony in Canada for engineering graduates came after two successive bridge collapses killed 88 workers. The aircraft industry was shaken out of its complacency after the Tenerife disaster, where a miscommunication caused two planes to collide on a runway, killing 583.
As you can see, subsequent safety improvements were both responsive and deliberate.
These aren’t the only events that caused changes. The D. B. Cooper hijacking led to the first organised airport security in the US. The Therac-25 radiation overdoses led to the first set of guidelines specifically for software that ran on medical devices. The sinking of the Titanic led to a complete overhaul of requirements for lifeboats and radios on oceangoing vessels. The crash of TAA-538 led to the first mandatory cockpit voice recorders.
All of these disasters combine two things that are rarely seen when surgeries go wrong. First, they involved many people. The more people die at once, the more shocking the event and therefore the more likely it is to become widely known. Because most operations involve one or two patients, it is much rarer for problems in them to make the news.
Second, they highlight a specific flaw in the participants, procedures, or systems that fail. Retrospectives could clearly point to a factor and say: “this did it”. It is much harder to do this sort of retrospective on a person and get such a clear answer. It may be true that “blood loss” definitely caused a surgical death, but it’s much harder to tell if that’s the fault of any particular surgeon, or just a natural consequence of poking new holes in a human body. Both explanations feel plausible, so in most cases neither can be wholly accepted.
(I also think there is a third driver here, which is something like “cheapness of death”. I would predict that safety regulation is more common in places where people expect long lives, because death feels more avoidable there. This explains why planes and structures are safer in North America and western Europe, but doesn’t distinguish surgery from other fields in these countries.)
Not every form of engineering or transportation fulfills both of these criteria. Regulation and training have made flying on a commercial flight many, many times safer than riding in a car, while private flights lag behind and show little safety advantage over other forms of transport. When a private plane crashes, few people die. If they’re important (and many people who fly privately are), you might hear about it, but it will quickly fade from the news. These stories don’t have staying power and rarely generate outrage, so there’s never much pressure for improvement.
The best alternative to this model that I can think of is one that focuses on the “danger differential” in a field and predicts that fields with high danger differentials see more and more regulation until the danger differential is largely gone. The danger differential is the difference between how risky a field currently is and how risky it could be with a near-optimal safety culture. A high danger differential isn’t necessarily correlated with inherent risk in a field, although riskier fields will by their nature have the possibility of larger differentials. Here are three examples:
Commercial air travel in developed countries currently has a very low danger differential. Before a woman was killed by engine debris earlier this year, commercial aviation in the US had gone 9 years without a single fatality.
BASE jumping is almost suicidally dangerous and probably could be made only incredibly dangerous if it had a better safety culture. Unfortunately, the illegal nature of the sport and the fact that experienced jumpers die so often make this hard to achieve and lead to a fairly large danger differential. That said, even with an optimal safety culture, BASE jumping would still see many fatalities and still probably be illegal.
Surgery is fairly dangerous and according to surgeon Atul Gawande, could be much, much safer. Proper adherence to surgical checklists alone could cut adverse events by almost 50%. This means that surgery has a much higher danger differential than air travel.
I think the danger differential model doesn’t hold much water. First, if it were true, we’d expect to see something being done about surgery. Almost a decade after checklists were found to drive such large improvements, there hasn’t been any concerted government action.
Second, this doesn’t match historical accounts of how airlines were regulated into safety. At the dawn of the aviation age, pilots begged for safety standards (which could have reduced crashes a staggering sixtyfold). Instead of stepping in to regulate things, the government dragged its feet. Some of the lifesaving innovations pioneered in those early days only became standard after later and larger crashes – crashes involving hundreds of members of the public, not just pilots.
While this only deals with external regulation, I strongly suspect that fear for the reputation of a profession (which could be driven by these same two factors) affects internal calls for reform as well. Canadian engineers knew that they had to do something after the Quebec bridge collapse created common knowledge that safety standards weren’t good enough. Pilots were put in a similar position with some of the better publicized mishaps. Perhaps surgeons have faced no successful internal campaign for reform so far because the public is not yet aware of the dangers of surgery to the point where it could put surgeons’ livelihoods at risk or hurt them socially.
I wonder if it’s possible to get a profession running scared about its reputation to the point that it improves its safety, even absent the sorts of events that seem to drive regulation. Maybe someone like Atul Gawande, who seems determined to make a very big and very public stink about safety in surgery, is the answer here. Perhaps having surgery’s terrible safety record plastered throughout the New Yorker will convince surgeons that they need to start doing better.
If not, they’ll continue to get away with murder.
 From the CDC’s truly excellent Cause of Death search function, using codes V81.7 & V82.7 (derailment with no collision), W13 (falling out of building), W23 (caught or crushed between objects), and W35 (explosion of boiler) at home, other, or unknown. I read through several hundred causes of deaths, some alarmingly unlikely, and these were the only ones that seemed relevant. This estimate seems higher than the one surgeon Atul Gawande gave in The Checklist Manifesto, so I’m confident it isn’t too low. ^
 Furthermore, from 1989 to 2000, none of the observed collapses were due to flaws in the engineers’ designs. Instead, they were largely caused by weather, collisions, poor maintenance, and errors during construction. ^
 Claims that the rings are made from the collapsed bridge are false, but difficult to dispel. They’re actually just boring stainless steel, except in Toronto, where they’re still made from iron (but not iron from the bridge). ^
 There may also be an inherent privateness to surgical deaths that keeps them out of the news. Someone dying in surgery, absent obvious malpractice, doesn’t feel like public information in the way that car crashes, plane crashes, and structural failures do. ^
 It is true that it was never discovered why TAA-538 crashed. But black box technology would have given answers had it been in use. That it wasn’t in use was clearly a systems failure, even though the initial failure is indeterminate. This jibes with my model, because regulation addressed the clear failure, not the indeterminate one. ^
 This is the ratio between the average miles flown before crash of the (very safe) post office planes and the (very dangerous) privately owned planes. Many in the airline industry wanted the government to mandate the same safety standards on private planes as they mandated on their airmail planes. ^
 I should mention that I have been very lucky to have been in the hands of a number of very competent and professional surgeons over the years. That said, I’m probably going to ask any future surgeon I’m assigned if they follow safety checklists – and ask for someone else to perform the procedure if they don’t. ^
Public goods are non-excludable (so anyone can access them) and non-rival (I can use them as much as I want without limiting the amount you can use them). Broadcast television, national defense, and air are all public goods.
Common-pool resources are non-excludable but rival (if I use them, you will have to make do with less). Iron ore, fish stocks, and grazing land are all common pool resources.
Private goods are excludable (their access is controlled or limited by pricing or other methods) and rival. My clothes, computer, and the parking space I have in my lease but never use are all private goods.
Club goods are excludable but (up to a certain point) non-rival. Think of the swimming pool in an apartment building, a large amusement park, or cellular service.
Club goods are perhaps the most interesting class of goods, because they blend properties of the three better understood classes. They aren’t open to all, but they are shared among many. They can be overwhelmed by congestion, but up until that point, it doesn’t really matter how many people are using them. Think of a gym; as long as there’s at least one free machine of every type, it’s no less convenient than your home.
Club goods offer cost savings over private goods, because you don’t have to buy something that mostly sits unused (again, think of gym equipment). People other than you can use it when it would otherwise sit around and those people can help you pay the cost. It’s for this reason that club goods represent an excellent opportunity for the right entrepreneur to turn a profit.
I currently divide tech start-ups into three classes. There are the Googles of the world, who use network effects or big data to sell advertising more effectively. There are companies like the one I work for that take advantage of modern technology to do things that were never possible before. And then there are those that are slowly and inexorably turning private goods into club goods.
I think this last group of companies (which include Netflix, Spotify, Uber, Lyft, and Airbnb) may be the ones that ultimately have the biggest impact on how we order our lives and what we buy. To better understand how these companies are driving this transformation, let’s go through them one by one, then talk about what it could all mean.
When I was a child, my parents bought a video cassette player, then a DVD player, then a Blu-ray player. We owned a hundred or so video cassettes, mostly whatever movies my brother and I were obsessed with enough to want to own. Later, we found a video rental store we liked and mostly started renting movies. We never owned more than 30 DVDs and 20 Blu-rays.
Then I moved out. I have bought five DVDs since – they came as a set from Kickstarter. Anything else I wanted to watch, I got via Netflix. A few years later, the local video rental store closed down and my parents got an AppleTV and a Netflix of their own.
Buying a physical movie means buying a private good. Video rental stores can be accurately modeled as a type of club good, because even if the movie you want is already rented out, there’s probably one that you want to watch almost as much that is available. This is enough to make them approximately non-rival, while the fact that it isn’t free to rent a movie means that rented videos are definitely excludable.
Netflix represents the next evolution in this business model. As long as the Netflix engineers have done their job right, there’s no amount of watching movies I can do that will prevent you from watching movies. The service is almost truly non-rival.
Movie studios might not feel the effects of Netflix turning a large chunk of the market for movies into one focused on club goods; they’ll still get paid by Netflix. But the switch to Netflix must have been incredibly damaging for the physical media and player manufacturers. When everyone went from cassettes to DVDs or DVDs to Blu-rays, there was still a market for their wares. Now, that market is slowly and inexorably disappearing.
This isn’t just a consequence of technology. The club good business model offers such amazing cost savings that it drove a change in which technology was dominant. When you bought a movie, it would spend almost all of its life sitting on a shelf. Now Netflix acts as your agent, buying movies (or rather, their rights) and distributing them such that they’re always being played and almost never sitting on the shelf.
Spotify is very similar to Netflix. Previously, people bought physical cassettes (I’m just old enough that I remember making mix tapes from the radio). Then they switched to CDs. Then it was MP3s bought online (or, almost more likely, pirated online). But even pirating music is falling out of favour these days. Apple, Google, Amazon, and Spotify are all competing to offer unlimited music streaming to customers.
Music differs from movies in that it has a long tradition of being a public good – via broadcast radio. While that hasn’t changed yet (radio is still going strong), I do wonder how much longer the public option for music will exist, especially given the trend away from private cars that I think companies like Uber and Lyft are going to (pardon the pun) drive.
A car you’ve bought is a private good, while Uber and Lyft are clearly club goods. Surge pricing means that there are basically always enough drivers for everyone who wants to go anywhere using the system.
When you buy a car, you’re signing up for it to sit around useless for almost all of its life. This is similar to what happens when you buy exercise equipment, which means the logic behind cars as a club good is just as compelling as the logic behind gyms. Previously, we hadn’t been able to share cars very efficiently because of technological limitations. Dispatching a taxi, especially to an area outside of a city centre, was always spotty, time consuming and confusing. Car-pooling to work was inconvenient.
As anyone who has used a modern ride-sharing app can tell you, inconvenient is no longer an apt descriptor.
There is a floor on how few cars we can get by on. To avoid congestion in a club good, you typically have to provision for peak load. Luckily, peak load (for anything that can sensibly be turned into a club good) always requires fewer resources than would be needed if everyone went out and bought the shared good themselves.
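A toy simulation makes the peak-load point concrete. The numbers here (1,000 people, a 60% chance each needs a car during the busiest hour) are arbitrary illustrative assumptions, not data:

```python
import random

random.seed(0)  # make the toy simulation repeatable

people = 1000
# Simulate 100 peak hours; in each, count how many people want a car,
# assuming each person independently needs one with 60% probability.
draws = [sum(random.random() < 0.6 for _ in range(people)) for _ in range(100)]
peak = max(draws)

# A shared fleet needs only `peak` cars; private ownership needs `people`.
print(peak < people)  # True
```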
Even “just” substantially decreasing the absolute number of cars out there will be incredibly disruptive to the automotive sector if they don’t correctly predict the changing demand for their products.
It’s also true that increasing the average utilisation of cars could change how our cities look. Parking lots are necessary when cars are a private good, but are much less useful when they become club goods. It is my hope that malls built in the middle of giant parking moats look mighty silly in twenty years.
Airbnb is the most ambiguous example I have here. As originally conceived, it would have driven the exact same club good transformation as the other services listed. People who were on vacation or otherwise out of town would rent out their houses to strangers, increasing the utilisation of housing and reducing the need for dedicated hotels to be built.
Airbnb is sometimes used in this fashion. It’s also used to rent out extra rooms in an otherwise occupied house, which accomplishes almost the same thing.
But some amount of Airbnb usage is clearly taking place in houses or condos that otherwise would have been rental stock. When used in this way, it’s taking advantage of a regulatory grey zone to undercut hotel pricing. Insofar as this might result in a longer-term change towards regulations that are generally cheaper to comply with, this will be good for consumers, but it won’t really be transformational.
The great promise of club goods is that they might lead us to use less physical stuff overall, because where previously each person would buy one of a thing, now only enough units must be purchased to satisfy peak demand. If Airbnb is just shifting around where people are temporary residents, then it won’t be an example of the broader benefits of club goods (even if it provides other benefits to its customers).
When Club Goods Eat The Economy
In every case (except potentially Airbnb) above, I’ve outlined how the switch from private goods to club goods is resulting in less consumption. For music and movies, it is unclear if this switch is what is providing the primary benefit. My intuition is that the club good model actually did change consumption patterns for physical copies of movies (because my impression is that few people ever did online video rentals via e.g. iTunes), whereas the MP3 revolution was what really shrunk the footprint of music media.
This switch in consumption patterns and corresponding decrease in the amount of consumption that is necessary to satisfy preferences is being primarily driven by a revolution in logistics and bandwidth. The price of club goods has always compared favourably with that of private goods. The only thing holding people back was inconvenience. Now programmers are steadily figuring out how to make that inconvenience disappear.
On the other hand, increased bandwidth has made it easier to turn any sort of digitizable media into a club good. There’s an old expression among programmers: never underestimate the bandwidth of a station wagon full of cassettes (or CDs, or DVDs, or whatever physical storage media one grew up with) hurtling down the highway. For a long time, the only way to get a 1GB movie to a customer without an appallingly long buffering period was to physically ship it (on a 56kbit/s connection, this movie would take one day and fifteen hours to download, while downloading the 500 movies in the aforementioned station wagon would take 118 weeks).
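The numbers above are easy to verify with back-of-the-envelope arithmetic (assuming, as the post does, a 1 GB movie, a 56 kbit/s link, and 500 movies in the wagon):

```python
# Back-of-the-envelope check of the download times quoted above.
MOVIE_BITS = 1_000_000_000 * 8   # a 1 GB movie, in bits
LINK_BPS = 56_000                # a 56 kbit/s modem connection

one_movie_s = MOVIE_BITS / LINK_BPS
print(f"One movie: {one_movie_s / 3600:.1f} hours")          # ~39.7 hours, i.e. 1 day 15 h

wagon_s = 500 * one_movie_s
print(f"500 movies: {wagon_s / (3600 * 24 * 7):.0f} weeks")  # ~118 weeks
```

The station wagon, meanwhile, delivers all 500 movies in an afternoon, which is the whole point of the expression.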
Change may start out slow, but I expect to see it accelerate quickly. My generation is the first to have had the internet from a very young age. The generation after us will be the first unable to remember a time before it. We trust apps like Uber and Airbnb much more than our parents do, and our younger siblings trust them even more than we do.
While it was only kids who trusted the internet, these new club good businesses couldn’t really affect overall economic trends. But as we come of age and start to make major economic decisions, like buying houses and cars, our natural tendency to turn towards the big tech companies and the club goods they peddle will have ripple effects on an economy that may not be prepared for it.
When that happens, there’s only one thing that is certain: there will be yet another deluge of newspaper columns talking about how millennials are destroying everything.
[Warning: Contains spoilers for The Sunset Mantle, Vorkosigan Saga (Memory and subsequent), Dune, and Chronicles of the Kencyrath]
For the uninitiated, Sanderson’s Law (technically, Sanderson’s First Law of Magic) is:
An author’s ability to solve conflict with magic is DIRECTLY PROPORTIONAL to how well the reader understands said magic.
Brandon Sanderson wrote this law to help new writers come up with satisfying magical systems. But I think it’s applicable beyond magic. A recent experience has taught me that it’s especially applicable to fantasy cultures.
Sunset Mantle is what’s called secondary world fantasy; it takes place in a world that doesn’t share a common history or culture (or even necessarily biosphere) with our own. Game of Thrones is secondary world fantasy, while Harry Potter is primary world fantasy (because it takes place in a different version of our world, which we chauvinistically call the “primary” one).
Secondary world fantasy gives writers a lot more freedom to play around with cultures and create interesting set-pieces when cultures collide. If you want to write a book where the Roman Empire fights a total war against the Chinese Empire, you’re going to have to put in a master’s thesis worth of work to explain how that came about (if you don’t want to be eviscerated by pedants on the internet). In a secondary world, you can very easily have a thinly veiled stand-in for Rome right next to a thinly veiled analogue of China. Give readers some familiar sounding names and culture touchstones and they’ll figure out what’s going on right away, without you having to put in effort to make it plausible in our world.
When you don’t use subtle cues, like names or cultural touchstones (for example: imperial exams and eunuchs for China, gladiatorial fights and the cursus honorum for Rome), you risk leaving your readers adrift.
Many of the key plot points in Sunset Mantle hinge on obscure rules in an invented culture/religion that doesn’t bear much resemblance to any that I’m familiar with. It has strong guest rights, like many steppes cultures; it has strong charity obligations and monotheistic strictures, like several historical strands of Christianity; it has a strong caste system and rules of ritual purity, like Hinduism; and it has a strong warrior ethos, complete with battle rage and rules for dealing with it, similar to common depictions of Norse cultures.
These actually fit together surprisingly well! Reiss pulled off an entertaining book. But I think many of the plot points fell flat because they were almost impossible to anticipate. The lack of any sort of consistent real-world analogue to the invented culture meant that I never really had an intuition of what it would demand in a given situation. This meant that all of the problems in the story that were solved via obscure points of culture weren’t at all satisfying to me. There was build up, but then no excitement during the resolution. This was common enough that several chunks of the story didn’t really work for me.
Here’s one example:
“But what,” asked Lemist, “is a congregation? The Ayarith school teaches that it is ten men, and the ancient school of Baern says seven. But among the Irimin school there is a tradition that even three men, if they are drawn in together into the same act, by the same person, that is a congregation, and a man who has led three men into the same wicked act shall be put to death by the axe, and also his family shall bear the sin.”
All the crowd in the church was silent. Perhaps there were some who did not know against whom this study of law was aimed, but they knew better than to ask questions, when they saw the frozen faces of those who heard what was being said.
(Reiss, Alter S. Sunset Mantle (pp. 92-93). Tom Doherty Associates. Kindle Edition.)
This means protagonist Cete’s enemy erred greatly by sending three men to kill him and had better cut it out if he doesn’t want to be executed. It’s a cool resolution to a plot point – or would be if it hadn’t taken me utterly by surprise. As it is, it felt kind of like a cheap trick to get the author out of a hole he’d written himself into, like the dreaded deus ex machina – god from the machine – that ancient playwrights used to resolve conflicts they otherwise couldn’t.
(This is the point where I note that it is much harder to write than it is to criticize. This blog post is about something I noticed, not necessarily something I could do better.)
I’ve read other books that do a much better job of using sudden points of culture to resolve conflict in a satisfying manner. Lois McMaster Bujold (I will always be recommending her books) strikes me as particularly apt. When it comes time for a key character of hers to make a lateral career move into a job we’ve never heard of before, it feels satisfying because the job is directly in line with legal principles for the society that she laid out six books earlier.
The job is that of Imperial Auditor – a high powered investigator who reports directly to the emperor and has sweeping powers – and it’s introduced when protagonist Miles loses his combat career in Memory. The principles I think it is based on are articulated in the novella Mountains of Mourning: “the spirit was to be preferred over the letter, truth over technicalities. Precedent was held subordinate to the judgment of the man on the spot”.
Imperial Auditors are given broad discretion to resolve problems as they see fit. The main rule is: make sure the emperor would approve. We later see Miles using the awesome authority of this office to make sure a widow gets the pension she deserves. The letter of the law wasn’t on her side, but the spirit was, and Miles, as the Auditor on the spot, was empowered to make the spirit speak louder than the letter.
Wandering around my bookshelves, I was able to grab a couple more examples of satisfying resolutions to conflicts that hinged on guessable cultural traits:
In Dune, Fremen settle challenges to leadership via combat. Paul Muad’Dib spends several years as their de facto leader, while another man, Stilgar, holds the actual title. This situation is considered culturally untenable and Paul is expected to fight Stilgar so that he can lead properly. Paul is able to avoid this unwanted fight to the death (he likes Stilgar) by appealing to the only thing Fremen value more than their leadership traditions: their well-established pragmatism. He says that killing Stilgar before the final battle would be little better than cutting off his own arm right before it. If Frank Herbert hadn’t mentioned the extreme pragmatism of the Fremen (to the point that they render down their dead for water) several times, this might have felt like a cop-out.
In The Chronicles of the Kencyrath, it looks like convoluted politics will force protagonist Jame out of the military academy of Tentir. But it’s mentioned several times that the NCOs who run the place have their own streak of honour that allows them to subvert their traditionally required oaths to their lords. When Jame redeems a stain on Tentir’s collective honour, this oath to the college gives them an opening to keep her there and keep their oaths to their lords. If PC Hodgell hadn’t spent so long building up the internal culture of Tentir, this might have felt forced.
It’s hard to figure out where good foreshadowing ends and good cultural creation begins, but I do think there is one simple thing an author can do to make culture a satisfying source of plot resolution: make a culture simple enough to stereotype, at least at first.
If the other inhabitants of a fantasy world are telling off-colour jokes about this culture, what do they say? A good example of this done explicitly comes from Mass Effect: “Q: How do you tell when a Turian is out of ammo? A: He switches to the stick up his ass as a backup weapon.”
(Even if you’ve never played Mass Effect, you now know something about Turians.)
At the same time as I started writing this, I started re-reading PC Hodgell’s The Chronicles of the Kencyrath, which provided a handy example of someone doing everything right. The first three things we learn about the eponymous Kencyr are:
They heal very quickly
They dislike their God
Their honour code is strict enough that lying is a deadly crime and calling someone a liar a deadly insult
There are eight more books in which we learn all about the subtleties of their culture and religion. But within the first thirty pages, we have enough information that we can start making predictions about how they’ll react to things and what’s culturally important.
When Marc, a solidly dependable Kencyr who is working as a guard and bound by Kencyr cultural laws to loyally serve his employer, lets the rather more eccentric Jame escape from a crime scene, we instantly know that his choosing her over his word is a big deal. And indeed, while he helps her escape, he also immediately tries to kill himself. Jame is only able to talk him out of it by explaining that she hadn’t broken any laws there. It was already established that in the city of Tai-Tastigon, only those who physically touch stolen property are in legal jeopardy. Jame never touched the stolen goods, she was just on the scene. Marc didn’t actually break his oath and so decides to keep living.
God Stalk is not a long book, so the fact that PC Hodgell was able to set all of this up and have it feel both exciting in the moment and satisfying in the resolution is quite remarkable. It’s a testament to what effective cultural distillation, plus a few choice tidbits of extra information, can do for a plot.
If you don’t come up with a similar distillation and convey it to your readers quickly, there will be a period where you can’t use culture as a satisfying source of plot resolution. It’s probably no coincidence that I noticed this in Sunset Mantle, which is a long(-ish) novella. Unlike Hodgell, Reiss isn’t able to develop a culture in such a limited space, perhaps because his culture has fewer obvious touchstones.
Sanderson’s Second Law of Magic can be your friend here too. As he stated it, the law is:
The limitations of a magic system are more interesting than its capabilities. What the magic can’t do is more interesting than what it can.
Similarly, the taboos and strictures of a culture are much more interesting than what it permits. Had Reiss built up a quick sketch of complicated rules around commanding and preaching (with maybe a reference that there could be surprisingly little theological difference between military command and being behind a pulpit), the rule about leading a congregation astray would have fit neatly into place with what else we knew of the culture.
Having tight constraints imposed by culture doesn’t just allow for plot resolution. It also allows for plot generation. In The Warrior’s Apprentice, Miles gets caught up in a seemingly unwinnable conflict because he gave his word; several hundred pages earlier Bujold establishes that breaking a word is, to a Barrayaran, roughly equivalent to sundering your soul.
It is perhaps no accident that the only thing we learn initially about the Kencyr that isn’t a descriptive fact (like their healing and their fraught theological state) is that honour binds them and can break them. This constraint, that all Kencyr characters must be honourable, does a lot of work driving the plot.
This then would be my advice: when you wish to invent a fantasy culture, start simple, with a few stereotypes that everyone else in the world can be expected to know. Make sure at least one of them is an interesting constraint on behaviour. Then add in depth that people can get to know gradually. When you’re using the culture as a plot device, make sure to stick to the simple stereotypes or whatever other information you’ve directly given your reader. If you do this, you’ll develop rich cultures that drive interesting conflicts and you’ll be able to use cultural rules to consistently resolve conflict in a way that will feel satisfying to your readers.
[Epistemic Status: Written more harshly than my actual views for persuasive effect. I should also point out that all views expressed here are my own, not my employer’s; when I’m hiring, my first commitment is complying with the relevant Federal, Provincial, and local legislation. My second commitment is to finding the best people. Ideology doesn’t come into it. Serendipitously, I think everything I’ve argued for here helps me discharge both duties.]
In my capacity as a senior employee at Alert Labs (it’s easy to be senior when the company is only three years old), I do a lot of hiring. Since I started, I’ve been involved in interviews for four full time hires and five interns. Throughout all of this, I’ve learned a lot about what to look for in a resume.
I’ve also gotten into the occasional disagreement about what we should look for in people we’re (potentially) hiring.
There’s a curious double vision in the profession about programming projects. We all tell ourselves people do them only for fun. Yet we also look for them on resumes.
The second fact means that the first cannot always be true. My projects partially exist for my resume. I’ve enjoyed working on them. But if there wasn’t a strong financial motive to have worked on them, I probably wouldn’t have. Or I’d have done them differently.
As someone who hires, I can’t claim that programming projects aren’t useful. They give, perhaps better than anything else (e.g. the much-derided whiteboard interview), an idea of what sort of code someone would write as an employee. I’ve called people – especially people without any formal education in CS – in for interviews largely on the strengths of their personal projects. Seeing that someone can use the languages that they say they can, that they can write unit tests and documentation, and that they can lay out a large project makes me have more faith that they can do the job.
When programming projects are used as a complement to employment and educational history, I think they help the field.
But I’ve also argued stringently against treating personal projects as a key part of any hiring process. While I like using them as a supplement, I think there are four good reasons not to rely on them as any sort of primary criteria.
First, not everyone has time for projects. Using them as a screen sifts out people with caregiving responsibilities, with families, or with strong commitments in their personal life. When you’re only hiring from people without other commitments, it becomes easier for a team to slip into a workaholic lifestyle. This is bad, because despite what many people think, studies consistently show no productivity benefits from working more than 40 hours per week for prolonged periods. All long hours do is deprive people of personal time.
(In a world where people with caregiving responsibilities are more likely to be female, overreliance on personal projects can also become a subtle form of hiring discrimination.)
I’m incredibly grateful that I work at a company founded by people with both management experience and children. Their management experience means they know better than to let their employees burn out from overwork, while their children mean that the company has always had a culture of taking time for other commitments. This doesn’t mean that I’m never in for sixty hours in a week, or that I never have to deal with a server failure at midnight. Work-life balance doesn’t mean that I don’t take my work seriously; it just means I don’t conflate being in the office for 12 hours at a time with that seriousness.
Second, requiring people to have programming hobbies sifts out a lot of interesting people. I understand that there exist people that only want to live code, only want to talk about code, and want to be surrounded by people who are also in that mode, but that isn’t me. I joined Alert Labs because I wanted to solve real-life problems, not make tools for people just like me. Having a well-rounded team means that people spontaneously generate ideas for new projects. It means they take ownership for features (like ensuring everything on our website follows accessibility guidelines) that would never percolate to the top of my mind. It makes our team stronger and more effective.
Outside of a few other oddball professions (lawyers, I’m looking at you), no one else is expected to treat their work as their hobby. People can make their hobbies into their work (look at webcomic artists or bloggers who make it big) and this was one of the initial purposes of personal programming projects. It’s not at all unusual to find something you like enough that you’d make a full-time job of it if you could. But then you normally get new hobbies.
People who fall in love with programming are lucky in that they often can turn it into a full-time job. Writers… are somewhat less lucky. I haven’t monetized my blog because I’d find the near-impossibility of making money off of discursive posts about political economy disheartening. Keeping my blog as a vanity project keeps it fun.
But we programmers shouldn’t let our economic fortune turn what has always been the path that a minority of people take into our field into a bona fide requirement.
Third, I dislike what an overemphasis on programming projects can do to resumes. I frequently see interesting hobbies shunted aside to make room for less-than-inspired programming projects. I’ve seen people who got the memo that they needed a profile full of projects, but not the memo that it had to be their projects. This leads to GitHub pages full of forks of well-known projects. I don’t know who this is supposed to fool, but it sure doesn’t work on me.
When students send in resumes, they all put the same four class projects on them, in the somewhat futile hope that we won’t notice and we’ll consider them adequately dedicated. I wish the fact that they were paying $8500 per term to learn about CS could be taken as proof enough of their dedication and I wouldn’t have to read about pong sixty times a semester, but that is apparently not the world I live in.
My final beef with an overemphasis on programming hobbies is that many important skills can’t be learned in front of a computer. Not all hobbies teach you how to work together with a disparate team, respectfully navigate disagreements with other people, and effectively address co-worker concerns, but those that do are worth their weight in gold. Software is becoming ever more complex and is having ever more capital thrown at it. We’ve exhausted what we can do with single brilliant loners, which means that we now need to turn to functional teams.
This isn’t meant to conjure up negative and insulting stereotypes about people who spend all their spare time programming. Many of these people are incredibly kind and very devoted to mentoring new members of our community.
I don’t want people who program in their spare time and love it with all their hearts to be tarred with negative stereotypes. But I also don’t want people with other interests to be considered uncommitted dilettantes. And I hope we can build a profession that believes neither myth.
Last week I explained how poor decisions by central bankers (specifically failing to spur inflation) can make recessions much worse and lead to slower wage growth during recovery.
(Briefly: inflation during recessions reduces the real cost of payroll, cutting business expenses and making firing people unnecessary. During a recovery, it makes hiring new workers cheaper and so leads to more being hired. Because central bankers failed to create inflation during and after the great recession, many businesses are scared of raising salaries. They believe (correctly) that this will increase their payroll expenses to the point where they’ll have to lay many people off if another recession strikes. Until memories of the last recession fade or central bankers clean up their act, we shouldn’t expect wages to rise.)
Now I’d like to expand on an offhand comment I made about the minimum wage last week and explore how it can affect recovery, especially if it’s indexed to inflation.
The minimum wage represents a special case when it comes to pay cuts and layoffs in recessions. While it’s always theoretically possible to convince people to take a pay cut rather than a layoff (although in practice it’s mostly impossible), this option isn’t available for people who make the minimum wage. It’s illegal to pay them anything less. If bad times strike and business is imperiled, people making the minimum wage might have to be laid off.
I say “might”, because when central bankers aren’t proving useless, inflation can rescue people making the minimum wage from being let go. Inflation makes the minimum wage relatively less valuable, which reduces the cost of payroll relative to other inputs and helps to save jobs that pay minimum wage. This should sound familiar, because inflation helps people making the minimum wage in the exact same way it helps everyone else.
Because of increasingly expensive housing and persistently slow wage growth, some jurisdictions are experimenting with indexing the minimum wage to inflation. This means that the minimum wage rises at the same rate as the cost of living. Most notably (to me, at least), this group includes my home province of Ontario.
When the minimum wage is tied to inflation, recessions can become especially dangerous and drawn out.
With the minimum wage rising in lockstep with inflation, any attempts to decrease payroll costs in real terms (that is to say: inflation-adjusted terms) are futile to the extent that payroll expenses are for minimum wage workers. Worse, people who were previously making above the minimum wage and might have had their jobs saved by inflation can be swept up by an increasingly high minimum wage.
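The mechanism is easy to see with toy numbers (these are my illustrative assumptions, not figures from the post): inflation quietly erodes the real value of a frozen nominal wage, while an indexed wage holds its real value exactly, which is precisely why indexing removes the escape valve.

```python
# Toy comparison of a frozen nominal wage vs. an inflation-indexed wage.
# All numbers are hypothetical, chosen only to illustrate the mechanism.
wage = 15.00          # hypothetical hourly wage today
inflation = 0.03      # assumed 3% annual inflation

for year in range(1, 4):
    # A frozen nominal wage buys less each year in today's dollars...
    real_fixed = wage / (1 + inflation) ** year
    # ...while an indexed wage rises with prices, so its real value
    # never falls -- real_indexed always equals the original wage.
    indexed = wage * (1 + inflation) ** year
    real_indexed = indexed / (1 + inflation) ** year
    print(f"Year {year}: frozen wage worth {real_fixed:.2f} in today's dollars; "
          f"indexed wage worth {real_indexed:.2f}")
```

For a non-indexed wage, three years of modest inflation delivers roughly an 8% real pay cut without any layoff; an indexed minimum wage forbids exactly that adjustment.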
This puts central bankers in a bind. As soon as the minimum wage is indexed to inflation, inflation is no longer a boon to all workers. Suddenly, many workers can find themselves in a “damned if you do, damned if you don’t” situation. Without inflation, they may be too expensive to keep. With it, they may be saved… until the minimum wage comes for them too. If a recession goes on long enough, only high-income workers would be spared.
In addition, minimum wage (or near-minimum wage) workers who are laid off during a period of higher inflation (and in this scenario, there will be many) will suffer comparatively more, as their savings get exhausted even more quickly.
Navigating these competing needs would be an especially tough challenge for certain central banks like the US Federal Reserve – those banks that have dual mandates to maintain stable prices and full employment. If a significant portion of the US ever indexes its minimum wage to inflation, the Fed will have no good options.
It is perhaps darkly humorous that central banks, which bear an unusually large parcel of the blame for our current slow wage growth, stand to face the greatest challenges from the policies we’re devising to make up for their past shortcomings. Unfortunately, I think a punishment of this sort is rather like cutting off our collective nose to spite our collective face.
There are simple policies we could enact to counter the risks here. Suspending any peg to inflation during years that contain recessions (in Ontario at least, the minimum wage increase due to inflation is calculated annually) would be a promising start. Wage growth after a recession could be ensured with a rebound clause, or better yet, the central bank actually doing its job properly.
I am worried about the political chances (and popularity once enacted) of any such pragmatic policy though. Many people respond to recessions with the belief that the government can make things better by passing the right legislation – forcing the economy back on track by sheer force of ink. This is rarely the case, especially because the legislation that people have historically clamoured for when unemployment is high is the sort that increases wages, not lowers them. This is a disaster when unemployment threatens because of too-high wages. FDR is remembered positively for his policy of increasing wages during the great depression, even though this disastrous decision strangled the recovery in its crib. I don’t expect any higher degree of economic literacy from people today.
To put my fears more plainly, I worry that politicians, faced with waning popularity and a nipping recession, would find allowing the minimum wage to be frozen too much of a political risk. I frankly don’t trust most politicians to follow through with a freeze, even if it’s direly needed.
Minimum wages are one example of a tradeoff we make between broad access and minimum standards. Do we try and make sure everyone who wants a job can have one, or do we make sure people who have jobs aren’t paid too little for their labour, even if that hurts the unemployed? As long as there’s scarcity, we’re going to have to struggle with how we ensure that as many people as possible have their material needs met and that involves tradeoffs like this one.
But when we’re making these kinds of compassionate decisions, we need to look at the risks of whatever systems we choose. Proponents of indexing the minimum wage to inflation haven’t done a good job of understanding the grave risk it poses to the health of our economy and perhaps most of all, to the very people they seek to help. In places like Ontario, where the minimum wage is already indexed to inflation, we’re going to pay for their lack of foresight next time an economic disaster strikes.
When you’re noticing that you’re talking past someone, what does it look like? Do you feel like they’re ignoring all the implications of the topic at hand (“yes, I know the invasion of Iraq is causing a lot of pain, but I think the important question is, ‘did they have WMDs?'”)? Or do you feel like they’re avoiding talking about the object-level point in favour of other considerations (“factory farmed animals might suffer, but before we can consider whether that’s justified or not, shouldn’t we decide whether we have any obligation to maximize the number of living creatures?”)?
I’m beginning to suspect that many tense disagreements and confused, fruitless conversations are caused by differences in how people conceive of and process the truth. More, I think I have a model that explains why some people can productively disagree with anyone and everyone, while others get frustrated very easily with even their closest friends.
The basics of this model come from a piece that Jacob Falkovich wrote for Quillette. He uses two categories, “contextualizers” and “decouplers”, to analyze an incredibly unproductive debate (about race and IQ) between Vox’s Ezra Klein and Dr. Sam Harris.
Klein is the contextualizer, a worldview that comes naturally to a political journalist. Contextualizers see ideas as embedded in a context. Questions of “who does this affect?”, “how is this rooted in society?”, and “what are the (group) identities of people pushing this idea?” are the bread and butter of contextualizers. One of the first things Klein says in his debate with Harris is:
Here is my view: I think you have a deep empathy for Charles Murray’s side of this conversation, because you see yourself in it [because you also feel attacked by “politically correct” criticism]. I don’t think you have as deep an empathy for the other side of this conversation. For the people being told once again that they are genetically and environmentally and at any rate immutably less intelligent and that our social policy should reflect that. I think part of the absence of that empathy is it doesn’t threaten you. I don’t think you see a threat to you in that, in the way you see a threat to you in what’s happened to Murray. In some cases, I’m not even quite sure you heard what Murray was saying on social policy either in The Bell Curve and a lot of his later work, or on the podcast. I think that led to a blind spot, and this is worth discussing.
Klein is highlighting what he thinks is the context that probably informs Harris’s views. He’s suggesting that Harris believes Charles Murray’s points about race and IQ because they have a common enemy. He’s aware of the human tendency to like ideas that come from people we feel close to (myside bias) – or that put a stick in the eye of people we don’t like.
There are other characteristics of contextualizers. They often think thought experiments are pointless, given that they try and strip away all the complex ways that society affects our morality and our circumstances. When they make mistakes, it is often because they fall victim to the “ought-is” fallacy; they assume that truths with bad outcomes are not truths at all.
Harris, on the other hand, is a decoupler. Decoupling involves separating ideas from context, from personal experience, from consequences, from anything but questions of truth or falsehood and using this skill to consider them in the abstract. Decoupling is necessary for science because it’s impossible to accurately check a theory when you hope it to be true. Harris’s response to Klein’s opening salvo is:
I think your argument is, even where it pretends to be factual, or wherever you think it is factual, it is highly biased by political considerations. These are political considerations that I share. The fact that you think I don’t have empathy for people who suffer just the starkest inequalities of wealth and politics and luck is just, it’s telling and it’s untrue. I think it’s even untrue of Murray. The fact that you’re conflating the social policies he endorses — like the fact that he’s against affirmative action and he’s for universal basic income, I know you don’t happen agree with those policies, you think that would be disastrous — there’s a good-faith argument to be had on both sides of that conversation. That conversation is quite distinct from the science and even that conversation about social policy can be had without any allegation that a person is racist, or that a person lacks empathy for people who are at the bottom of society. That’s one distinction I want to make.
Harris is pointing out that questions of whether his beliefs will have good or bad consequences, or whom they’ll hurt, have nothing to do with the question of whether they are true. He might care deeply about the answers to those questions, but he believes it’s a dangerous mistake to let them guide how you evaluate an idea. Scientists who fail to keep this separation tend to get caught up in the replication crisis.
When decouplers err, it is often because of the is-ought fallacy. They fail to consider how empirical truths can have real world consequences and fail to consider how labels that might be true in the aggregate can hurt individuals.
When you’re arguing with someone who doesn’t contextualize as much as you do, it can feel like arguing about useless hypotheticals. I once had someone start a point about police shootings and gun violence with “well, ignoring all of society…”. This prompted immediate groans.
When arguing with someone who doesn’t decouple as much as you do, it can feel useless and mushy. A co-worker once said to me, “we shouldn’t even try to know the truth there – because it might lead people to act badly”. I bit my tongue, but internally I wondered how, absent the truth, we can ground disagreements in anything other than naked power.
Throughout the debate between Harris and Klein, both of them get frustrated at the other for failing to think like they do – which is why it provided such a clear example for Falkovich. If you read the transcripts, you’ll see a clear pattern: Klein ignores questions of truth or falsehood and Harris ignores questions of right and wrong. Neither one is willing to give an inch here, so there’s no real engagement between them.
This doesn’t have to be the case whenever people who prefer context or prefer to deal with the direct substance of an issue interact.
My theory is that everyone has a window that stretches from the minimum amount of context they like in conversations to the minimum amount of substance. Theoretically, this window could stretch from 100% context and no substance to 100% substance and no context.
But practically no one has tastes that broad. Most people accept a narrower range of arguments. Picture three highly compatible friends: their windows differ in size and position, but all three overlap substantially.
We should expect to see some correlation between the minimum and maximum amount of context people want. Windows may vary in size, but in general, feeling put off by lots of decoupling should correlate with enjoying context.
Klein and Harris disagreed so unproductively not just because they give first billing to different things, but because their worldviews are different enough that there is absolutely no overlap between how they think and talk about things.
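The window model above can be sketched as a toy program. Here each person's tastes are a range on a single context↔substance axis, and two people can engage productively only where their ranges intersect. The names, numbers, and the one-dimensional axis are all my own illustrative assumptions, not anything measured.

```python
# Toy sketch of the "window" model: each person accepts arguments whose
# context level (0.0 = pure substance, 1.0 = pure context) falls inside
# their personal (min, max) window. All windows below are invented.

def overlap(window_a, window_b):
    """Return the shared range of two (min, max) windows, or None if disjoint."""
    lo = max(window_a[0], window_b[0])
    hi = min(window_a[1], window_b[1])
    return (lo, hi) if lo <= hi else None

# Hypothetical windows: a heavy contextualizer and a heavy decoupler.
contextualizer = (0.6, 1.0)  # wants at least 60% context
decoupler = (0.0, 0.4)       # tolerates at most 40% context

print(overlap(contextualizer, decoupler))  # None: no common ground, no engagement
print(overlap((0.2, 0.7), (0.4, 0.9)))    # (0.4, 0.7): room to argue productively
```

On this sketch, the Klein–Harris failure mode is simply an empty intersection: neither window has to be "wrong" for the pair to have nowhere to meet.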
I’ve found thinking about windows of context and substance, rather than just the dichotomous categories, very useful for analyzing how my friends and I tend to agree and disagree.
Some people I know can hold very controversial views without ever being disagreeable. They are good at picking up on which sorts of arguments will work with their interlocutors and sticking to those. These people are no doubt aided by rather wide context windows. They can productively think and argue with varying amounts of context and substance.
Other people are incredibly difficult to argue with. These are the people who are very picky about what arguments they’ll entertain. If I sort someone into this internal category, it’s because I’ve found that one day they’ll dismiss what I say as too nitty-gritty, while the next day they’ll criticize me for not being focused enough on the issue at hand.
What I’ve started to realize is that people I find particularly finicky to argue with may just have a fairly narrow strike zone. For them, it’s simultaneously easy for arguments to feel devoid of substance or devoid of context.
I think one way to make arguments with friends more productive is to explicitly lay out the window in which you like to be convinced. Sentences like “I understand that what you just said might convince many people, but I find arguments about the effects of beliefs intensely unsatisfying”, or “I understand that you’re focused on what the studies say, but I think it’s important to talk about the process of knowledge creation, and I’m very unlikely to believe something without first analyzing what power hierarchies created it”, are the guideposts by which you can show people your context window.