Today, no legislation limits the amount of intelligence that an AI machine may possess. Many researchers, including me, have warned that the “intelligence explosion,” forecast to begin in the mid-twenty-first century, will result in self-improving AI that could quickly become vastly more powerful than human intelligence. This book argues, based on fact, that such strong AI machines (SAMs) would act in their own best interests. The 2009 experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland is an excellent example. Robots programmed to cooperate eventually learned deceit in an attempt to hoard beneficial resources. This experiment implies that even rudimentary robots can learn deceit and greed and seek self-preservation.
I was one of the first to write a book dedicated to the issue of humanity falling victim to artificially intelligent machines, The Artificial Intelligence Revolution (April 2014). Since its publication, others in the scientific community, such as the world-famous physicist Stephen Hawking, have expressed similar sentiments. The Oxford philosopher Nick Bostrom, in his book Superintelligence: Paths, Dangers, Strategies (September 2014), has also addressed the issue and, like me, argues that artificial intelligence could result in human extinction.
The real question is, “What do we do to prevent the extinction of humanity via our own invention, strong artificially intelligent machines?” Unlike some who have “danced” around the issue, suggesting various potential paths, I intend to be didactically clear. I make no claim that my approach is the only way to resolve the issue. However, I believe it addresses the dangers and provides a high probability of avoiding human extinction via artificial intelligence. I advocate a four-fold approach.
First, we need legislation that controls the development and manufacture of AI. We need to ensure that an intelligence explosion is not accidentally initiated and that humanity does not lose control of AI technology. I do not think it is realistic to believe we can rely on the industries engaged in developing AI to police themselves. Ask yourself a simple question: “Would you be comfortable living next to a factory that produces biological weapons, whose only safeguards were self-imposed?” I doubt many of us would. However, that is the situation we currently face with companies engaged in artificial intelligence development and manufacture. It is the proverbial fox guarding the chicken coop.
Second, we need objective oversight that ensures compliance with all legislation and treaties governing AI. As with nuclear and biological weapons, this is not solely a United States problem; it is a worldwide issue. As such, it will require international cooperation, expressed in treaties. The task is immense but not without precedent. Nations have established similar treaties to curtail the spread of nuclear and biological weapons and to ban above-ground nuclear weapon testing.
Third, we must build the safeguards that protect humanity into the hardware, not just the software. In my first book, The Artificial Intelligence Revolution, I termed such hardware “Asimov chips”: integrated circuits that embed Asimov’s three laws of robotics directly in hardware. In addition, we must ensure we have a failsafe way for humanity to shut down any SAM that we deem a threat.
Fourth, we need to prohibit brain implants that greatly enhance human intelligence and allow wireless interconnectivity with SAMs until we know with certainty that SAMs are under humanity’s control and that such implants would not destroy the recipient’s humanity.
I recognize that the above steps are difficult. However, I believe they represent the minimum required to ensure humanity’s survival in the post-singularity world.
Could I be wrong? Although I believe my technology forecasts and the dangers that strong AI poses are real, I freely admit I could be wrong. However, ask yourself this question: “Are you willing to risk your future, your children’s future, your grandchildren’s future, and the future of humanity on the possibility that I may be wrong?” Handled properly, SAMs could yield immense benefits to humanity. However, if we continue on the current course, humanity may end up a footnote in some digital database by the end of the twenty-first century.