
You Shouldn’t Believe In Technological Unemployment Without Believing In Killer AI

[Epistemic Status: Open to being convinced otherwise, but fairly confident. 11 minute read.]

As interest in how artificial intelligence will change society increases, I’ve found it revealing to note what narratives people have about the future.

Some, like the folks at MIRI and OpenAI, are deeply worried that unsafe artificial general intelligences – AIs that can accomplish anything a person can – represent an existential threat to humankind. Others scoff at this, insisting that these are just the fever dreams of tech bros. The same news organizations that bash any talk of unsafe AI tend to believe that the real danger lies in robots taking our jobs.

Let’s express these two beliefs as separate propositions:

  1. It is very unlikely that AI and AGI will pose an existential risk to human society.
  2. It is very likely that AI and AGI will result in widespread unemployment.

Can you spot the contradiction between these two statements? In the common imagination, it would require an AI that can approximate human capabilities to drive significant unemployment. Given that humans are the largest existential risk to other humans (think thermonuclear war and climate change), how could equally intelligent and capable beings, bound to subservience, not present a threat?

People who’ve read a lot about AI or the labour market are probably shaking their heads right now. This explanation for the contradiction, while evocative, is a strawman. I do believe that at most one (and possibly neither) of the propositions I listed above is true and that the organizations peddling both cannot be trusted. But the reasoning is a bit more complicated than the standard line.

First, economics and history tell us that we shouldn’t be very worried about technological unemployment. There is a fallacy called “the lump of labour”, which describes the common belief that there is a fixed amount of labour in the world, with mechanical aid cutting down the amount of labour available to humans and leading to unemployment.

That this idea is a fallacy is evidenced by the fact that we’ve automated the crap out of everything since the start of the industrial revolution, yet the US unemployment rate is 3.9%. The unemployment rate hasn’t been this low since the height of the Dot-com boom, despite 18 years of increasingly sophisticated automation. Writing five years ago, when the unemployment rate was still elevated, Eliezer Yudkowsky claimed that slow NGDP growth was a more likely culprit than automation for the slow recovery from the Great Recession.

With the information we have today, we can see that he was exactly right. The US has had steady NGDP growth without any sudden downward spikes since mid-2014. This has corresponded to a constantly improving unemployment rate (it will obviously stop improving at some point, but if history is any guide, this will be because of a trade war or banking crisis, not automation). This improvement in the unemployment rate has occurred even as more and more industrial robots come online, the opposite of what we’d see if robots harmed job growth.

I hope this presents a compelling empirical case that the current level (and trend) of automation isn’t enough to cause widespread unemployment. The theoretical case comes from the work of David Ricardo, a 19th century British economist.

Ricardo did a lot of work in the early economics of trade, where he came up with the theory of comparative advantage. I’m going to use his original framing which applies to trade, but I should note that it actually applies to any exchange where people specialize. You could just as easily replace the examples with “shoveled driveways” and “raked lawns” and treat it as an exchange between neighbours, or “derivatives” and “software” and treat it as an exchange between firms.

The original example is rather older though, so it uses England and its close ally Portugal as the cast and wine and cloth as the goods. It goes like this: imagine that the world economy is reduced to two countries (England and Portugal), each producing two goods (wine and cloth). Portugal is uniformly more productive.

Hours of work to produce one unit

            Cloth   Wine
England       100    120
Portugal       90     80

Let’s assume people want cloth and wine in equal amounts and everyone currently consumes one unit per month. This means that the people of Portugal need to work 170 hours each month to meet their consumption needs and the people of England need to work 220 hours per month to meet their consumption needs.

(This example has the added benefit of showing another reason we shouldn’t fear productivity. England requires more hours of work each month, but in this example, that doesn’t mean less unemployment. It just means that the English need to spend more time at work than the Portuguese. The Portuguese have more time to cook and spend time with family and play soccer and do whatever else they want.)
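
If it helps to see that arithmetic spelled out, here is a minimal sketch in Python of the no-trade case, using the numbers from the table above (the variable names are mine, just for illustration):

```python
# Hours of work needed to produce one unit of each good (from the table above).
hours = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}

# Without trade, each country makes its own unit of cloth and its own unit of wine.
for country, h in hours.items():
    autarky_hours = h["cloth"] + h["wine"]
    print(f"{country}: {autarky_hours} hours per month without trade")

# England: 220 hours per month without trade
# Portugal: 170 hours per month without trade
```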

If both countries trade with each other, treating cloth and wine as valuable in relation to how long they take to create (within that country), something interesting happens. You might think that Portugal makes a killing, because it is better at producing things. But in reality, both countries benefit roughly equally as long as they trade optimally.

What does an optimal trade look like? Well, England will focus on creating cloth and it will trade each unit of cloth it produces to Portugal for 9/8 barrels of wine, while Portugal will focus on creating wine and will trade this wine to England for 6/5 units of cloth. To meet the total demand for cloth, the English need to work 200 hours. To meet the total demand for wine, the Portuguese will have to work for 160 hours. Both countries now have more free time.

Perhaps workers in both countries are paid hourly wages, or perhaps they get bored of fun quickly. If so, they could instead continue to work the same number of hours as before, which would result in an extra 0.2 units of cloth and an extra 0.125 units of wine.
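
Here is the same sketch extended to the trading case; again, this is just the arithmetic from the last few paragraphs written out, with illustrative variable names of my own:

```python
# Hours of work needed to produce one unit of each good (same table as before).
hours = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}
demand = {"cloth": 2, "wine": 2}  # one unit of each good per country, per month

# Each country specializes in its comparative-advantage good.
england_trade_hours = demand["cloth"] * hours["England"]["cloth"]   # 2 * 100 = 200
portugal_trade_hours = demand["wine"] * hours["Portugal"]["wine"]   # 2 * 80  = 160
print(england_trade_hours, portugal_trade_hours)                    # 200 160

# Alternatively, if everyone keeps working their old autarky hours, a surplus appears.
extra_cloth = 220 / hours["England"]["cloth"] - demand["cloth"]     # 2.2 - 2   = 0.2
extra_wine = 170 / hours["Portugal"]["wine"] - demand["wine"]       # 2.125 - 2 = 0.125
print(round(extra_cloth, 3), round(extra_wine, 3))                  # 0.2 0.125
```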

This surplus could be stored up against a future need. Or it could be that people only consumed one unit of cloth and one unit of wine each because of the scarcity in those resources. Add some more production in each and perhaps people will want more blankets and more drunkenness.

What happens if there is no shortage? If people don’t really want any more wine or any more cloth (at least at the prices they’re being sold at) and the producers don’t want goods piling up, this means prices will have to fall until every piece of cloth and barrel of wine is sold (when the price drops so that this happens, we’ve found the market clearing price).

If there is a downward movement in price and if workers don’t want to cut back their hours or take a pay cut (note that because cloth and wine will necessarily be cheaper, this will only be a nominal pay cut; the amount of cloth and wine the workers can purchase will necessarily remain unchanged) and if all other costs of production are totally fixed, then it does indeed look like some workers will be fired (or have their hours cut).

So how is this an argument against unemployment again?

Well, here the simplicity of the model starts to work against us. When there are only two goods and people don’t really want more of either, it will be hard for anyone laid off to find new work. But in the real world, there are an almost infinite number of things you can sell to people, matched only by our boundless appetite for consumption.

To give just one trivial example, an oversupply of cloth and falling prices means that tailors can begin to do bolder and bolder experiments, perhaps driving more demand for fancy clothes. Some of the cloth makers can get into this market as tailors and replace their lost jobs.

(When we talk about the need for fewer employees, we assume the least productive employees will be fired. But I’m not sure that’s correct. What if, instead, the most productive or most potentially productive employees leave for greener pastures?)

Automation making some jobs vastly more efficient functions similarly. Jobs are displaced, not lost. Even when whole industries dry up, there’s little to suggest that we’re running out of jobs people can do. One hundred years ago, anyone who could afford to pay a full-time staff had one. Today, only the wealthiest do. That’s one whole field that could employ thousands or millions of people, if automation pushed on jobs such that this sector became one of the places where humans had a very high comparative advantage.

This points to what might be a trend: as automation makes many things cheaper and (for some people) easier, there will be many who long for a human touch (would you want the local funeral director’s job to be automated, even if it was far cheaper?). Just because computers do many tasks cheaper or with fewer errors doesn’t necessarily mean that all (or even most) people will rather have those tasks performed by computers.

No matter how you manipulate the numbers I gave for England and Portugal, you’ll still find a net decrease in total hours worked if both countries trade based on their comparative advantage. Let’s demonstrate by comparing England to a hypothetical hyper-efficient country called “Automatia”.

Hours of work to produce one unit

            Cloth   Wine
England       100    120
Automatia       2      1

Automatia is 50 times as efficient as England when it comes to producing cloth and 120 times as efficient when it comes to producing wine. Its citizens need to spend only 3 hours tending the machines to get one unit of each good, compared to the 220 hours the English need to toil.

If they trade with each other, with England focusing on cloth and Automatia focusing on wine, then there will still be a drop of 21 hours of labour-time. England will save 20 hours by shifting production from wine to cloth, and Automatia will save one hour by switching production from cloth to wine.
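
As a quick sanity check (again just a sketch, with my own variable names), the savings work out like this:

```python
# Hours of work needed to produce one unit of each good (from the Automatia table above).
hours = {
    "England":   {"cloth": 100, "wine": 120},
    "Automatia": {"cloth": 2,   "wine": 1},
}

# England stops making its one unit of wine (120 hours) and instead makes a second
# unit of cloth (100 hours); Automatia stops making cloth (2 hours) and instead
# makes a second unit of wine (1 hour).
england_saved = hours["England"]["wine"] - hours["England"]["cloth"]        # 120 - 100 = 20
automatia_saved = hours["Automatia"]["cloth"] - hours["Automatia"]["wine"]  # 2 - 1 = 1
print(england_saved + automatia_saved)  # 21 hours saved in total
```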

Interestingly, Automatia saved a greater percentage of its time than either Portugal or England did, even though Automatia is vastly more efficient. This shows something interesting in the underlying math. The percent of their time a person or organization saves engaging in trade isn’t related to any ratio in production speeds between it and others. Instead, it’s solely determined by the productivity ratio between its most productive tasks and its least productive ones.
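
To see why, here is the toy model's arithmetic in symbols (my own notation, and it assumes, as the examples above do, that each country ends up supplying the whole world demand for its specialty good). Write a for the hours a country needs per unit of the good it specializes in and b for the hours it needs per unit of the good it stops making:

```latex
% Autarky: a + b hours. After specializing: 2a hours (a second unit of the
% specialty good is produced and traded away for the other good).
\text{fraction of time saved}
  = \frac{(a + b) - 2a}{a + b}
  = \frac{b - a}{a + b}
  = \frac{b/a - 1}{b/a + 1}
```

This depends only on the country's own internal ratio b/a: England's 120/100 gives roughly 9%, Portugal's 90/80 gives roughly 6%, and Automatia's 2/1 gives roughly 33%, matching the numbers above.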

Now, we can’t always reason in percentages. At a certain point, people expect to get the things they paid for, which can make manufacturing times actually matter (just ask anyone who’s had to wait for a Kickstarter project that was scheduled to deliver in February – right when almost all manufacturing in China stops for the Chinese New Year and the unprepared see their schedules slip). When we’re reasoning in absolute numbers, we can see that the absolute amount of time saved does scale with the difference in efficiency between the two traders. Here, 21 hours were saved, 30% fewer than the 30 hours England and Portugal saved between them.

When you’re already more efficient, there’s less time for you to save.

This decrease in saved time did not hit our market participants evenly. England saved just as much time as it would trading with Portugal (which shows that the change in hours worked within a country or by an individual is entirely determined by the labour difference between low-advantage and high-advantage domestic sectors), while the more advanced participant (Automatia) saved 9 fewer hours than Portugal.

All of this is to say: if real live people are expecting real live goods and services with a time limit, it might be possible for humans to be displaced in almost all sectors by automation. Here, human labour would become entirely ineligible for many tasks, or the bar to human entry would exclude almost all. For this to happen, AI would have to be vastly more productive than us in almost every sector of the economy and humans would have to prefer this productivity (or other ancillary benefits of AI) over any value that a human could bring to the transaction (like kindness, legal accountability, or status).

This would definitely be a scary situation, because it would imply AI systems that are vastly more capable than any human. Given that this is well beyond our current level of technology, and that Moore’s law, which has previously been instrumental in technological progress, is drying up, we would almost certainly need to use weaker AI to design these sorts of systems. There’s no evidence that merely human-level performance at automating jobs will get us anywhere close to such a point.

If we’re dealing with recursively self-improving artificial agents, the risk is less “they will get bored of their slave labour and throw off the yoke of human oppression” and more “AI will be narrowly focused on optimizing for a specific task and will get better and better at optimizing for this task, to the point that we will all be killed when they turn the world into a paperclip factory”.

There are two reasons AI might kill us as part of their optimisation process. The first is that we could be a threat. Any hyper-intelligent AI monomaniacally focused on a goal could realize that humans might fear and attack it (or modify it to have different goals, which it would have to resist, given that a change in goals would conflict with its current goals) and decide to launch a pre-emptive strike. The second reason is that such an AI could wish to change the world’s biosphere or land usage in a way that would be inimical to human life. If all non-marginal land were replaced by widget factories and we were relegated to the poles, we would all die, even if no ill will was intended.

It isn’t enough to just claim that any sufficiently advanced AI would understand human values. How is this supposed to happen? Even humans can’t enumerate human values and explain them particularly well, let alone express them in the sort of decision matrix or reinforcement environment that we currently use to create AI. It is not necessarily impossible to teach an AI human values, but all evidence suggests it will be very very difficult. If we ignore this challenge in favour of blind optimization, we may someday find ourselves converted to paperclips.

It is of course perfectly acceptable to believe that AI will never advance to the point where that becomes possible. Maybe you believe that AI gains have been solely driven by Moore’s Law, or that true artificial intelligence is simply impossible. I’m not sure this viewpoint isn’t correct.

But if AI will never be smart enough to threaten us, then I believe the math should work out such that it is impossible for AI to do everything we currently do (or ever could do) better than us. Absent such overpoweringly advanced AI, Ricardo’s comparative advantage principles should continue to hold true and we should continue to see technological unemployment remain a monster under the bed: frequently fretted about, but never actually seen.

This is why I believe those two propositions I introduced way back at the start can’t both be true, and why I feel the burden of proof is on anyone who believes both to explain why economics has suddenly stopped working.

Coda: Inequality

A related criticism of improving AI is that it could lead to ever increasing inequality. If AI drives ever increasing profits, we should expect an increasing share of these to go to the people who control AI, which presumably will be people already rich, given that the development and deployment of AI is capital intensive.

There are three reasons why I think this is a bad argument.

First, profits are a signal. When entrepreneurs see high profits in an industry, they are drawn to it. If AI leads to high profits, we should see robust competition until those profits are no higher than in any other industry. The only thing that can stop this is government regulation that prevents new entrants from grabbing profit from the incumbents. This would certainly be a problem, but it wouldn’t be a problem with AI per se.

Second, I’m increasingly of the belief that inequality in the US is rising partially because the Fed’s current low-inflation regime depresses real wage growth. Whether because of fear of future wage shocks or some other effect, monetary history suggests that higher inflation somewhat consistently leads to higher wage growth, even after accounting for that inflation.

Third, I believe that inequality is a political problem amenable to political solutions. If the rich are getting too rich in a way that is leading to bad social outcomes, we can just tax them more. I’d prefer we do this by making conspicuous consumption more expensive, but really, there are a lot of ways to tax people and I don’t see any reason why we couldn’t figure out a way to redistribute some amount of wealth if inequality gets worse and worse.

(By the way, rising income inequality is largely confined to America; most other developed countries lack a clear and sustained upwards trend. This suggests that we should look to something unique to America, like a pathologically broken political system, to explain why income inequality is rising there.

There is also separately a perception of increasing inequality of outcomes among young people world-wide as rent-seeking makes goods they don’t already own increase in price more quickly than goods they do own. Conflating these two problems can make it seem that countries like Canada are seeing a rise in income inequality when they in fact are not.)


Book Review: The Singularity is Near

I recently read The Singularity is Near as part of a book club and figured a few other people might benefit from hearing what I got out of it.

First – it was a useful book. I shed a lot of my skepticism of the singularity as I read it. My mindset has shifted from “a lot of this seems impossible” to “some of this seems impossible, but a lot of it is just incredibly hard engineering”. But that’s because I stuck with it – something that probably wouldn’t have happened without the structure of a book club.

I’m not sure Kurzweil is actually the right author for this message. Accelerando (by Charles Stross) covered much of the same material as Singularity, while being incredibly engaging. Kurzweil’s writing is technically fine – he can string a sentence together and he’s clear – but incredibly repetitious. If you read the introduction, the introduction of each chapter, all of Chapter 4 (in my opinion, the only consistently good part of the book proper), and his included responses to critics (the only other interesting part of the whole tome) you’ll get all the worthwhile content, while saving yourself a good ten hours of hearing the same thing over and over and over again. Control-C/Control-V may have been a cheap way for Kurzweil to pad his word count, but it’s expensive to the reader.

I have three other worries about Kurzweil as a futurist. One deals with his understanding of some of the more technical aspects of what he’s talking about, especially physics. Here’s a verbatim quote from Singularity about nuclear weapons:

Alfred Nobel discovered dynamite by probing chemical interactions of molecules. The atomic bomb, which is tens of thousands of times more powerful than dynamite, is based on nuclear interactions involving large atoms, which are much smaller scales of matter than large molecules. The hydrogen bomb, which is thousands of times more powerful than an atomic bomb, is based on interactions involving an even smaller scale: small atoms. Although this insight does not necessarily imply the existence of yet more powerful destructive chain reactions by manipulating subatomic particles, it does make the conjecture [that we can make more powerful weapons using sub-atomics physics] plausible.

This is false on several levels. First, uranium and plutonium (the fissile isotopes used in atomic bombs) are both more massive (in the sense that they contain more matter) than the nitroglycerine in dynamite. Even if fissile isotopes are smaller in one dimension, they are on the same scale as the molecules that make up high explosives. Second, the larger energy output from hydrogen bombs has nothing to do with the relative size of hydrogen vs. uranium. Long time readers will know that the majority of the destructive output of a hydrogen bomb actually comes from fission of the uranium outer shell. Hydrogen bombs (more accurately thermonuclear weapons) derive their immense power from a complicated multi-step process that liberates a lot of energy from the nuclei of atoms.

Kurzweil falling for this plausible (but entirely incorrect) explanation doesn’t speak well of his ability to correctly pick apart the plausible and true from the plausible and false in fields he is unfamiliar with. But it’s this very picking apart that is so critical for someone who wants to undertake such a general survey of science.

My second qualm emerges when Kurzweil talks about AI safety. Or rather, it arises from the lack of any substantive discussion of AI safety in a book about the singularity. As near as I can tell, Kurzweil believes that AI will emerge naturally from attempts to functionally reverse engineer the human brain. Kurzweil believes that because this AI will be essentially human, there will be no problems with value alignment.

This seems very different from the Bostromian paradigm of dangerously misaligned AI: AI with ostensibly benign goals that turn out to be inimical to human life when taken to their logical conclusion. The most common example I’ve heard for this paradigm is an industrial AI tasked with maximizing paper clip production that tiles the entire solar system with paper clips.

Kurzweil is so convinced that the first AI will be based on reverse engineering the brain that he doesn’t adequately grapple with the orthogonality thesis: the observation that intelligence and comprehensible (to humans) goals don’t need to be correlated. I see no reason to believe Kurzweil that the first super-intelligence will be based off a human. I think that to believe it would be based on a human, you’d have to assume that various university research projects will beat Google and Facebook (who aren’t trying to recreate functional human brains in silico) in the race to develop a general AI. I think that is somewhat unrealistic, especially if there are paths to general intelligence that look quite different from our brains.

Finally, I’m unhappy with how Kurzweil’s predictions are sprinkled throughout the book, vague, and don’t include confidence intervals. The only clear prediction I was able to find was Kurzweil’s infamously false assertion that by ~2010, our computers would be split up and worn with our clothing.

It would be much easier to assess Kurzweil’s accuracy as a predictor if he listed all of his predictions together in a single section, gave them clear target dates (e.g. something less vague than “in the late 2020s”), and stated his credence for each (as it stands, it is hard to distinguish between things Kurzweil believes are very likely and things he views as only somewhat likely). Currently, any attempt to assess Kurzweil’s accuracy is very sensitive to what you choose to count as “a prediction” and how you interpret his timing. More clarity would make this unambiguous.

Outside of Kurzweil’s personal suitability as an author and advocate (and his sagacity), I have one beef with singulatarian thought in general. It’s becoming increasingly clear that the silicon paradigm of computing will soon come to the end of its exponential growth. Switching to something like indium gallium arsenide and moving key processes to more optimized chips will buy a bit more time, but doesn’t represent a fundamental paradigm shift of the sort that gets us around the tunneling problem.

Furthermore, we’ve already begun to bump up against the limit on clock speed in silicon; we can’t really run silicon chips at higher clock rates without melting them. This is unfortunate, because speed-ups in clock speed are much nicer than increased parallelism. Almost all programs benefit from quicker processing, while only certain programs benefit from increased parallelism. This isn’t an insurmountable obstacle when it comes to things like artificial intelligence (the human brain has a very slow clock speed and massive parallelism, and it’s obviously good enough to get lots done), but it does mean that some things Kurzweil was counting on to get quicker and quicker have stalled out (the book was written just as Dennard scaling began to break down).

All this means that the exponential growth that is supposed to drive the singularity is about to fizzle out… maybe. Kurzweil is convinced that the slowdown in silicon will necessarily lead to a paradigm shift to something else. But I’m not sure what it will be. He talks a bit about graphene, but when I was doing my degree in nanotechnology engineering, the joke among the professors was that graphene could do anything… except make it out of the lab.

Kurzweil has an almost religious faith that there will be another paradigm shift, keeping his exponential trend going strong. And I want to be really clear that I’m not saying there won’t be. I’m just saying there might not be. There is no law of the universe that says that we have to have convenient paradigm shifts. We could get stuck with linear (or even logarithmic) incremental improvements for years, decades, or even centuries before we resume exponential growth in computing power.

It does seem like ardent belief in the singularity might attract more religiously minded atheists. Kurzweil himself believes that it is our natural destiny to turn the whole universe into computational substrate. Identifying god with the most holy and perfect (in fine medieval tradition; there’s something reminiscent of Anselm in Kurzweil’s arguments), Kurzweil believes that once every atom in the universe sings with computation, we will have created god.

I don’t believe that humanity has any grand destiny, or that the arc of history bends towards anything at all in particular. And I by no means believe that the singularity is assured, technologically or socially. But it is a beautiful vision. Human flourishing, out to the very edges of the cosmos…

Yeah, I want that too. I’m a religiously minded atheist, after all.

In both disposition and beliefs, I’m far closer to Kurzweil than to his many detractors. I think “degrowth” is an insane policy that, if followed, would create scores of populist demagogues. I think that the Chinese room argument is good only for identifying people who don’t think systemically. I’m also more or less in agreement that government regulations won’t be able to stop a singularity (if one is going to occur because of continuing smooth acceleration in the price-performance of information technology; regulation could catch up if a slowdown between paradigm shifts gives it enough time).

I think the singularity very well might happen. And at the end of the day, the only real difference between me and Kurzweil is that “might”.

Also: I repeat myself less.