How do we assure that we do not fall victim to our own invention, artificial intelligence? What strategies should we employ? What actions should we take?

What is required is worldwide recognition of the danger that strong AI poses and a worldwide coalition to address it. This is not a U.S. problem; it is a worldwide problem, no different from any other threat that could result in the extinction of humanity.

Let us consider the example President Reagan provided during his speech before the United Nations in 1987. He stated, “Perhaps we need some outside universal threat to make us recognize this common bond. I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world.”

I offer the above example to illustrate that we need humanity, all the nations of the world, to recognize the real and present danger that strong AI poses. We need world leaders to take a proactive stance. That could, for example, require assembling the best scientists and military and civilian leaders to determine the type of legislation needed to govern the development of advanced artificially intelligent computers and weapon systems. It could also involve multinational oversight to assure compliance with that legislation. Is the task monumental? Yes, but do we really have any alternative? If we allow the singularity to occur without control, our extinction is inevitable. In time, the Earth will become home to machines alone, and the only record of humanity's existence will be digital bits of information in some electronic memory repository.

I harbor hope that humanity, as a species, can unite to prevent our extinction. There are historical precedents. Let me provide two examples.

Example 1. The Limited Test Ban Treaty (LTBT) – The treaty banned nuclear weapon tests in the atmosphere, in outer space, and underwater. It was signed and ratified by the former Soviet Union, the United Kingdom, and the United States in 1963. It had two objectives:

    1. Slow the expensive arms race between the Soviet Union and the United States
    2. Stop the excessive release of nuclear fallout into Earth’s atmosphere

Most countries have now signed the treaty. However, China, France, and North Korea have not signed it, and all three are known to have tested nuclear weapons underground.

In general, the LTBT has held up well, even among countries that have not signed it. There were several violations by both the former Soviet Union and the United States, but for almost the last fifty years no nuclear tests have violated the treaty. In other words, the fallout from nuclear tests has not crossed the borders of the countries performing them.

Why has the LTBT been so successful? Nations widely recognized atmospheric nuclear tests as dangerous to humanity due to the uncontrollable nature of the radioactive fallout.

Example 2. The Biological Weapons Convention – In a 1969 press conference, President Richard M. Nixon stated, “Biological weapons have massive, unpredictable, and potentially uncontrollable consequences.” He added, “They may produce global epidemics and impair the health of future generations.” In 1972, President Nixon submitted the Biological Weapons Convention to the U.S. Senate.

The “Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on Their Destruction” proceeded to become an international treaty:

    • Signed in Washington, London, and Moscow on April 10, 1972
    • Ratification advised by the U.S. Senate on December 16, 1974
    • Ratified by the U.S. president on January 22, 1975
    • U.S. ratification deposited in Washington, London, and Moscow on March 26, 1975
    • Proclaimed by the U.S. president on March 26, 1975
    • Entered into force March 26, 1975

To my mind, the above two examples prove one thing: if humanity recognizes a possible existential threat, it will act to mitigate it.

Unfortunately, while several highly regarded scientists and notable public figures have added their voices to mine regarding the existential threat artificial intelligence poses, that threat has yet to become widely recognized.

I have written several books to delineate this threat, including The Artificial Intelligence Revolution, Genius Weapons, Nanoweapons, and War At The Speed Of Light. My goal is to reach the largest possible audience and raise awareness of the existential threat that artificial intelligence poses to humanity.

In the simplest terms, I advocate educating the lay public and those in leadership positions as the path toward a solution. Once the existential threat that artificial intelligence poses becomes widely recognized, I harbor hope that humanity will seek solutions to mitigate it.

In the next post, I will delineate a four-fold approach to mitigating the threat that artificial intelligence poses to humanity. There may be other solutions; I do not claim that this is the only way to address the problem. However, I'm afraid I have to disagree with those who suggest we do not have a problem. In fact, I claim that we not only have a potentially serious problem but also need to address it posthaste. If I come across with a sense of urgency, it is intentional. At best, we have one or two decades after the singularity to assure we do not fall victim to our own invention, artificial intelligence.