Aug 26, 2018 in Economics, Model
As interest in how artificial intelligence will change society increases, I’ve found it revealing to note what narratives people have about the future.
Some, like the folks at MIRI and OpenAI, are deeply worried that unsafe artificial general intelligence – an AI that can accomplish anything a person can – represents an existential threat to humankind. Others scoff at this, insisting that such fears are just the fever dreams of tech bros. The same news organizations that bash any talk of unsafe AI tend to believe that the real danger lies in robots taking our jobs.
Let’s express these two beliefs as separate propositions:
1. Artificial general intelligence poses no real existential threat to humankind.
2. Artificial intelligence will automate enough jobs to drive significant unemployment.
Can you spot the contradiction between these two propositions? In the common imagination, driving significant unemployment would require an AI that can approximate human capabilities. And given that humans are the largest existential risk to other humans (think thermonuclear war and climate change), how could equally intelligent and capable beings, bound to subservience, not present a threat?
Sep 12, 2017 in Literature, Science
I recently read The Singularity is Near as part of a book club and figured a few other people might benefit from hearing what I got out of it.
First – it was a useful book. I shed a lot of my skepticism of the singularity as I read it. My mindset shifted from “a lot of this seems impossible” to “some of this seems impossible, but a lot of it is just incredibly hard engineering”. But that’s only because I stuck with it – something that probably wouldn’t have happened without the structure of a book club.
I’m not sure Kurzweil is actually the right author for this message. Accelerando (by Charles Stross) covered much of the same material as Singularity, while being incredibly engaging. Kurzweil’s writing is technically fine – he can string a sentence together and he’s clear – but incredibly repetitious. If you read the introduction,...