Should We Fear Artificial Superintelligence? A Response


On February 23, 2019, John Loeffler published an interesting article on interestingengineering.com titled “Should We Fear Artificial Superintelligence?” In it, Loeffler argues that while Artificial Superintelligence (ASI) can be dangerous, we should mostly be optimistic about it. As a futurist, I am concerned about the possibility of (near-term) human extinction, and I consider ASI one of the greatest dangers we face. I therefore appreciate it when people think about this topic, as much of humanity’s attention goes to relatively unimportant things. But while Loeffler and I are both optimistic, we are so for different reasons.

My name is Hein de Haan. As an Artificial Intelligence expert, I am concerned with the future of humanity, so I try to study as many different topics as possible in order to have a positive impact on society.
