Two days after the publication of my new book, The Artificial Intelligence Revolution (April 17, 2014), a short article authored by Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek was published in the Huffington Post (April 19, 2014) under the title “Transcending Complacency on Superintelligent Machines.” Essentially, the article warned, “Success in creating AI would be the biggest event in human history…Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Shortly afterward, on May 1, 2014, the Independent ran an article entitled “Stephen Hawking: Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?”

Recently, another notable artificial intelligence expert, Nick Bostrom, Professor in the Faculty of Philosophy at Oxford University, published his new book, Superintelligence (September 3, 2014), and offered a similar warning, addressing the questions: “What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?”

It is unlikely that my book, which offers a similar warning and predates theirs, was their impetus. I say unlikely because the interval between my book’s publication, its rise to number one on Amazon, and their publications is too short. Still, it is entirely possible that they read my book before going public with their warnings. The important point is that highly credible physicists and philosophers came to the same conclusion that forms the premise of my book: artificial intelligence poses a potential threat to the long-term existence of humankind.

Unfortunately, the artificial intelligence field is divided on the issue. Some believe that strong artificial intelligence, also referred to as artificial general intelligence (i.e., intelligence equal to or exceeding human intelligence), will align itself with human goals and ethics. Ray Kurzweil, a well-known authority on artificial intelligence, acknowledges that the potential threat exists, but suggests strong artificially intelligent entities will be grateful to humans for giving them existence. My view is that this faction is not looking at the facts. Humans have proven to be a dangerous species. In particular, we have set the world on a path of catastrophic climate change and engaged in wars, even world wars. We now have enough nuclear weapons to wipe out all life on Earth twice over. If you were a strong artificially intelligent entity, would you align with human goals and ethics? That is the crux of the issue. We pose a threat to strong artificially intelligent entities. Why wouldn’t they recognize this and seek to eliminate the threat?

Far-fetched? Impossible? Consider this 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine? Also, recognize that today’s robots would be roughly eight times more intelligent than those of 2009, based on Moore’s law (i.e., computer technology doubles in capability every eighteen months).
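The arithmetic behind that eight-fold figure is simple enough to check. The short Python sketch below is illustrative only; it assumes the eighteen-month doubling period stated above and treats the roughly four and a half years since the 2009 experiment as three doubling periods.

```python
# Minimal sketch of the Moore's-law arithmetic behind the "eight times" figure.
# Assumptions (illustrative only): capability doubles every 18 months, and about
# 4.5 years (three doubling periods) separate the 2009 experiment from today.

DOUBLING_PERIOD_YEARS = 1.5  # eighteen months per doubling

def capability_multiplier(years_elapsed: float) -> float:
    """Fold-increase in capability after the given number of years."""
    return 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)

print(capability_multiplier(4.5))            # 8.0 -> the "eight times" figure
print(round(capability_multiplier(5.0), 1))  # ~10.1 if a full five years have passed
```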

The concern is real. We already have evidence (i.e., the Lausanne experiment) suggesting that artificially intelligent entities will act in accordance with their own best interests, and that this behavior does not have to be explicitly programmed. The evidence suggests, to my mind, that increased artificial intelligence gives rise to human-like mind-sets such as greed and self-preservation.

The time has come for the US to form an oversight committee to address the issue and suggest legislation. However, this is not just a US issue; it is a worldwide issue. For example, China currently has the most capable supercomputer (Tianhe-2). We must address this as a worldwide problem, similar to the way biological weapons and above-ground nuclear testing were addressed. This means it must become one of the United Nations’ top issues.

The urgency is high. Extrapolating today’s artificial intelligence technology using Moore’s law suggests that computers with artificial general intelligence will be built during the 2020–2030 time frame. Further extrapolation suggests computers that exceed the combined cognitive intelligence of all humans on Earth will be built in the 2040–2050 time frame. The time to act is now, while humans are still the dominant species on the planet.
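As a rough back-of-the-envelope illustration of that extrapolation (the eighteen-month doubling period and the 2014 baseline of 1.0 are my illustrative assumptions, not measurements), the raw growth in computing capability looks like this:

```python
# Rough back-of-the-envelope sketch of the Moore's-law extrapolation above.
# Assumptions (illustrative only): computing capability doubles every 18 months,
# measured against a 2014 baseline of 1.0. Raw capability growth is not the same
# thing as general intelligence; this only shows the scale of the extrapolation.

DOUBLING_PERIOD_YEARS = 1.5  # eighteen months per doubling

def fold_increase(from_year: int, to_year: int) -> float:
    """Fold-increase in raw computing capability between two years."""
    return 2 ** ((to_year - from_year) / DOUBLING_PERIOD_YEARS)

for target_year in (2030, 2050):
    print(f"{target_year}: ~{fold_increase(2014, target_year):,.0f}x the 2014 baseline")
# 2030: on the order of 1,600 times the 2014 baseline (about 10.7 doublings)
# 2050: on the order of 17 million times the 2014 baseline (24 doublings)
```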