
Aug 26, 2018 in Economics, Model

You Shouldn’t Believe In Technological Unemployment Without Believing In Killer AI

As interest in how artificial intelligence will change society increases, I’ve found it revealing to note what narratives people have about the future.

Some, like the folks at MIRI and OpenAI, are deeply worried that unsafe artificial general intelligence – an AI that can accomplish anything a person can – represents an existential threat to humankind. Others scoff at this, insisting that these are just the fever dreams of tech bros. Yet the same news organizations that bash any talk of unsafe AI tend to believe that the real danger lies in robots taking our jobs.

Let’s express these two beliefs as separate propositions:

  1. It is very unlikely that AI and AGI will pose an existential risk to human society.
  2. It is very likely that AI and AGI will result in widespread unemployment.

Can you spot the contradiction between these two statements? In the common imagination, driving significant unemployment would require an AI that can approximate human capabilities. But humans are already the largest existential risk to other humans (think thermonuclear war and climate change). How, then, could machines that are equally intelligent and capable, even if nominally bound to subservience, fail to present a comparable threat?