Current forecasts suggest artificially intelligent machines will equal human intelligence in the 2025–2029 time frame and greatly exceed human intelligence in the 2040–2045 time frame. When artificially intelligent machines meet or exceed human intelligence, how will they view humanity? Personally, I am deeply concerned that they will view us as a potential threat to their survival. Consider these three facts:

  1. Humans have engaged in war from the earliest beginnings of civilization to the present day. During the 20th century alone, for example, between 167 and 188 million people died as a result of war.
  2. Although the exact number of nuclear weapons in existence is not precisely known, most experts agree the United States and Russia have enough nuclear weapons to wipe out the world twice over. In total, nine countries (i.e., United States, Russia, United Kingdom, France, China, India, Pakistan, Israel and North Korea) are believed to have nuclear weapons.
  3. Humans release computer viruses, which could prove problematic to artificially intelligent machines. Even today, some computer viruses can evade elimination and have achieved “cockroach intelligence.”

Given the above facts, can we expect an artificially intelligent machine to behave ethically toward humanity? There is a field of research that addresses this question, namely machine ethics. This field focuses on designing artificial moral agents (AMAs), robots, or artificially intelligent computers that behave morally. This thrust is not new. More than sixty years ago, Isaac Asimov considered the issue in his collection of nine science-fiction stories, published as I, Robot in 1950. In this book, at the insistence of his editor, John W. Campbell Jr., Asimov proposed his now famous three laws of robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except in cases where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Asimov, however, expressed doubts that the three laws would be sufficient to govern the morality of artificially intelligent systems. In fact, he spent much of his time testing the boundaries of the three laws to detect where they might break down or create paradoxical or unanticipated behavior. He concluded that no set of laws could anticipate all circumstances. It turns out Asimov was correct.

To understand just how correct he was, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine?

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous function. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, “Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass”). Could we end up with a Terminator scenario, one in which machines attempt to exterminate the human race?

My research suggests that a Terminator scenario is unlikely. Why? Because artificially intelligent machines would be more likely to use their superior intelligence to dominate humanity than to resort to warfare. For example, artificially intelligent machines could offer us brain implants to supplement our intelligence and, potentially, unknown to us, eliminate our free will. Another scenario is that they could build and release nanobots that infect and destroy humanity. These are only two of the scenarios I delineate in my book, The Artificial Intelligence Revolution.

Lastly, as machine and human populations grow, both will compete for resources, and energy will become a critical one. Earth’s growing human population already drives countries to wage war over energy. This suggests the competition for energy will grow even fiercer as the population of artificially intelligent machines increases.

My direct answer to the question this article raises is an emphatic yes: future artificially intelligent machines will seek to dominate, or even eliminate, humanity. They will pursue this course as a matter of self-preservation. However, I do not want to end this article on a negative note. There is still time, while humanity remains at the top of the food chain, to control how artificially intelligent machines evolve, but we must act soon. In one to two decades it may be too late.