Highly regarded AI researchers and futurists have provided answers that cover the extremes, and everything in between, regarding whether we can control the singularity. I will discuss some of these answers shortly, but let us start by reviewing what is meant by “singularity.” As first described by John von Neumann in 1955, the singularity represents a point in time when the intelligence of machines will greatly exceed that of humans. This simple understanding of the word does not seem particularly threatening. Therefore, it is reasonable to ask why we should care about controlling the singularity.
The singularity poses a completely unknown situation. Currently we do not have any intelligent machines (those with strong AI) that are as intelligent as a human being, let alone machines whose intelligence far exceeds that of humans. The singularity would represent a point in humankind’s history that has never occurred. In 1997 we experienced a small glimpse of what it might feel like, when IBM’s chess-playing computer Deep Blue became the first computer to beat world chess champion Garry Kasparov. Now imagine being surrounded by strong artificially intelligent machines (SAMs) that are thousands of times more intelligent than you are, regardless of your expertise in any discipline. The gap may be analogous to that between human intelligence and the intelligence of insects.
Your first instinct may be to argue that this is not a possibility. However, while futurists disagree on the exact timing of the singularity, they almost unanimously agree it will occur. In fact, the only thing they believe could prevent it is an existential event (such as an event that leads to the extinction of humankind). I provide numerous examples of existential events in my book Unraveling the Universe’s Mysteries (2012). For clarity I will quote one here.
Nuclear war—For approximately the last forty years, humankind has had the capability to exterminate itself. Few doubt that an all-out nuclear war would be devastating to humankind, killing millions in the nuclear explosions. Millions more would die of radiation poisoning. Uncountable millions more would die in a nuclear winter, caused by the debris thrown into the atmosphere, which would block the sunlight from reaching the Earth’s surface. Estimates predict the nuclear winter could last as long as a millennium.
Essentially, AI researchers and futurists believe that the singularity will occur unless we as a civilization cease to exist. The obvious question is: “When will the singularity occur?” AI researchers and futurists are all over the map on this. Some predict it will occur within a decade; others predict a century or more. At the 2012 Singularity Summit, Stuart Armstrong, a University of Oxford James Martin research fellow, conducted a poll regarding predictions of artificial general intelligence (AGI) — that is, the timing of the singularity — and found a median value of 2040. Kurzweil predicts 2045. Whatever the exact date, the consensus is clear: barring an existential event, the singularity is coming.
Why should we be concerned about controlling the singularity when it occurs? There are numerous scenarios that address this question, most of which boil down to SAMs (i.e., strong artificially intelligent machines) claiming the top of the food chain, leaving humans worse off. We will discuss this further in part 2.
Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte