
Will Humanity Survive the 21st Century?

In my last post, I stated, “In making the above predictions [about the singularity], I made one critical assumption. I assumed that humankind would continue the ‘status quo.’ I am ruling out world-altering events, such as large asteroids striking Earth, leading to human extinction, or a nuclear exchange that renders civilization impossible. Is assuming the ‘status quo’ reasonable? We’ll discuss that in the next post.”

Let’s now discuss whether humanity will survive the 21st century.

The events that most people consider likely to cause humanity’s extinction, such as a large asteroid impact or a volcanic eruption of sufficient magnitude to cause catastrophic climate change, actually have a relatively low probability of occurring, on the order of 1 in 50,000 or less, according to numerous estimates found via a simple Google search. In 2008, experts surveyed at the Global Catastrophic Risk Conference at the University of Oxford suggested a 19% chance of human extinction over the next century, ranking the five risks most likely to cause human extinction by 2100 as:

  1. Molecular nanotechnology weapons – 5% probability
  2. Super-intelligent AI – 5% probability
  3. Wars – 4% probability
  4. Engineered pandemic – 2% probability
  5. Nuclear war – 1% probability

All other existential risks were rated below 1%. Again, a simple Google search may turn up different results from different “experts.” If we take the above survey at face value, it suggests that the risk of an existential event accumulates with time: a 19% chance of extinction over a full century implies a much smaller chance over any shorter interval. This leads me to conclude that human survival over the next 30 years is highly probable.
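To make that reasoning concrete, here is a minimal sketch, in Python, of converting the surveyed 19%-per-century figure into an implied 30-year risk. The constant-hazard-rate assumption is mine, added for illustration; the survey itself says nothing about how the risk is distributed within the century.

```python
# Convert a per-century extinction probability into the implied risk over
# a shorter horizon, assuming a constant hazard rate (my assumption, for
# illustration only; the 0.19 figure is the 2008 Oxford survey estimate).

century_risk = 0.19                      # P(extinction by 2100)
survival_per_century = 1 - century_risk  # P(surviving the century)

# Under a constant hazard rate, survival compounds:
# P(survive t years) = P(survive 100 years) ** (t / 100)
years = 30
risk_30 = 1 - survival_per_century ** (years / 100)

print(f"Implied 30-year extinction risk: {risk_30:.1%}")  # about 6%
```

On this simple model, the 30-year risk comes out to roughly 6%, which is consistent with calling survival over that horizon highly probable.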

It is interesting to note that in the 2008 Global Catastrophic Risk Conference survey, super-intelligent AI ties with molecular nanotechnology weapons for the number-one spot. In my view, molecular nanotechnology weapons and super-intelligent AI are two sides of the same coin. In fact, I judge that super-intelligent AI will be instrumental in developing molecular nanotechnology weapons. I also predict that humanity, in some form, will survive until the year 2100. However, I predict that humanity will include both humans with strong artificially intelligent brain implants and organic humans (i.e., those with no brain implants to enhance their intelligence), though members of either group may have some artificially intelligent body parts.

Let me summarize: based on the above information, it is reasonable to judge that humanity will survive through the 21st century.

[Image: a metallic robotic skull with glowing red eyes and cables attached, set against a black background.]

Stephen Hawking Agrees with Me – Artificial Intelligence Poses a Threat!

Two days after the publication of my new book, The Artificial Intelligence Revolution (April 17, 2014), a short article authored by Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek was published in the Huffington Post (April 19, 2014) under the title Transcending Complacency on Superintelligent Machines. Essentially, the article warned, “Success in creating AI would be the biggest event in human history…Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Shortly after that publication, the Independent ran an article on May 1, 2014, entitled Stephen Hawking: Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?

Recently, another notable artificial intelligence expert, Nick Bostrom, Professor in the Faculty of Philosophy at Oxford University, published his new book, Superintelligence (September 3, 2014), and offered a similar warning, addressing the questions: “What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?”

It is unlikely that my book, which provides a similar warning and predates theirs, was their impetus; the interval between my book’s publication (and its rise to number one on Amazon) and their publications is simply too short. It is entirely possible, though, that they read my book before going public with their warnings. The important point is that highly credible physicists and philosophers came to the same conclusion that formed the premise of my book: artificial intelligence poses a potential threat to the long-term existence of humankind.

Unfortunately, the artificial intelligence field is divided on the issue. Some believe that strong artificial intelligence, also referred to as artificial general intelligence (i.e., intelligence equal to or exceeding human intelligence), will align itself with human goals and ethics. Ray Kurzweil, a well-known authority on artificial intelligence, acknowledges the potential threat exists but suggests strong artificially intelligent entities will be grateful to humans for giving them existence. My view is that this faction is not looking at the facts. Humans have proven to be a dangerous species. In particular, we have set the world on a path of catastrophic climate change and engaged in wars, even world wars. We now have enough nuclear weapons to wipe out all life on Earth twice over. If you were a strong artificially intelligent entity, would you align with human goals and ethics? That is the crux of the issue. We pose a threat to strong artificially intelligent entities. Why wouldn’t they recognize this and seek to eliminate the threat?

Far-fetched? Impossible? Consider a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest that the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine? Also, recognize that today’s robots would be roughly eight times more intelligent than those of 2009, based on Moore’s law (i.e., computer technology doubles in capability every eighteen months).
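As a back-of-the-envelope check on that eightfold figure, here is a minimal sketch of the compounding arithmetic. The exact dates and the clean 18-month doubling period are assumptions for illustration; real hardware trends are less tidy.

```python
# Moore's-law compounding: capability doubles every 18 months.
# The dates and doubling period are illustrative assumptions.

DOUBLING_PERIOD_YEARS = 1.5  # one doubling every eighteen months

def capability_factor(start_year: float, end_year: float) -> float:
    """Multiple by which capability grows between two years."""
    doublings = (end_year - start_year) / DOUBLING_PERIOD_YEARS
    return 2.0 ** doublings

# Lausanne experiment (2009) to roughly the time of writing:
print(f"{capability_factor(2009, 2013.5):.0f}x")  # 3 doublings -> 8x
```

Three doublings (four and a half years) yield the eightfold increase cited above; each additional eighteen months doubles it again.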

The concern is real. We already have evidence (i.e., the Lausanne experiment) suggesting that artificially intelligent entities will act in accordance with their own best interests, and that this behavior does not have to be explicitly programmed. The evidence suggests, to my mind, that increased artificial intelligence gives rise to human-like mind-sets such as greed and self-preservation.

The time has come for the US to form an oversight committee to address the issue and propose legislation. However, this is not just a US issue; it is a worldwide issue. For example, China currently has the most capable supercomputer (Tianhe-2). We must address this as a worldwide problem, similar to the way biological weapons and above-ground nuclear testing were addressed. This means it must become one of the United Nations’ top issues.

There is high urgency. Extrapolating today’s artificial intelligence technology using Moore’s law suggests that computers with artificial general intelligence will be built during the 2020-2030 time frame. Further extrapolation suggests that computers exceeding the combined cognitive intelligence of all humans on Earth will be built in the 2040-2050 time frame. The time to act is now, while humans are still the dominant species on the planet.
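For readers who want to reproduce this kind of extrapolation, here is a minimal sketch of projecting a milestone year under Moore’s law. The base year, the doubling period, and the capability “shortfall” multiples below are placeholders of my own choosing, not measurements; the point is how exponential compounding turns enormous multiples into modest spans of years.

```python
import math

# Illustrative Moore's-law extrapolation: given how many times more
# capable machines would need to become (the "shortfall"), estimate the
# year that multiple is reached under a fixed doubling period. All
# inputs below are placeholder assumptions, not measured quantities.

DOUBLING_PERIOD_YEARS = 1.5

def milestone_year(base_year: float, shortfall: float) -> float:
    """Year when capability has grown by `shortfall` times."""
    doublings_needed = math.log2(shortfall)
    return base_year + doublings_needed * DOUBLING_PERIOD_YEARS

# Example: a 1,000x shortfall in 2014 closes around 2029; a 1,000,000x
# shortfall closes around 2044, roughly the windows cited in the text.
print(round(milestone_year(2014, 1_000)))      # ~2029
print(round(milestone_year(2014, 1_000_000)))  # ~2044
```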