SCIENCE • 12 JANUARY 2018 • VOL 359 ISSUE 6372 • sciencemag.org
| FEATURES | FRANKENSTEIN
Philosopher Nick Bostrom believes it's entirely possible that artificial intelligence (AI) could lead to the extinction of Homo sapiens. In his 2014 bestseller Superintelligence: Paths, Dangers, Strategies, Bostrom paints a dark scenario in which researchers create a machine capable of steadily improving itself. At some point, it learns to make money from online transactions and begins purchasing goods and services in the real world. Using mail-ordered DNA, it builds simple nanosystems that in turn create more complex systems, giving it ever more power to shape the world.
Now suppose the AI suspects that humans might interfere with its plans, Bostrom writes.
For Bostrom and a number of other scientists and philosophers, such scenarios are more than science fiction. They're studying which technological advances pose "existential risks" that could wipe out humanity or at least end civilization as we know it—and what could be done to stop them.
“Think of what we’re trying to do as providing a scientific red team for the things that could threaten our species,” says philosopher Huw Price, who heads the Centre for the Study of Existential Risk (CSER) here at the University of Cambridge.