

The Artificial Intelligence Revolution – Introduction

This excerpt is the introduction from my book, The Artificial Intelligence Revolution. Enjoy!

This book is a warning. Through this medium, I am shouting, “The singularity is coming.” The singularity (as first described by John von Neumann in 1955) represents a point in time when intelligent machines will greatly exceed human intelligence. It is, by way of analogy, the start of World War III. The singularity has the potential to set off an intelligence explosion that can wield devastation far greater than nuclear weapons. The message of this book is simple but critically important. If we do not control the singularity, it is likely to control us. Our best artificial intelligence (AI) researchers and futurists cannot accurately predict what a post-singularity world may look like. However, almost all AI researchers and futurists agree it will represent a unique point in human evolution. It may be the best step in the evolution of humankind or the last step. As a physicist and futurist, I believe humankind will be better served if we control the singularity, which is why I wrote this book.

Unfortunately, the rise of artificial intelligence has been almost imperceptible. Have you noticed the word “smart” being used to describe machines? Often “smart” means “artificial intelligence.” However, few products are being marketed with the phrase “artificial intelligence.” Instead, they are called “smart.” For example, you may have a “smart” phone. It does not just make and answer phone calls. It will keep a calendar of your scheduled appointments, remind you to go to them, and give you turn-by-turn driving directions to get there. If you arrive early, the phone will help you pass the time while you wait. It will play games with you, such as chess, and depending on the level of difficulty you choose, you may win or lose the game. In 2011 Apple introduced a voice-activated personal assistant, Siri, on its latest iPhone and iPad products. You can ask Siri questions, give it commands, and hear its spoken responses. Smartphones appear to increase our productivity as well as enhance our leisure. Right now, they are serving us, but all that may change.

A smartphone is an intelligent machine, and AI is at its core. AI is the new scientific frontier, and it is slowly creeping into our lives. We are surrounded by machines with varying degrees of AI, including toasters, coffeemakers, microwave ovens, and late-model automobiles. If you call a major pharmacy to renew a prescription, you likely will never talk with a person. The entire process will occur with the aid of a computer with AI and voice synthesis.

The word “smart” has also found its way into military phrases, such as “smart bombs,” which are satellite-guided weapons such as the Joint Direct Attack Munition (JDAM) and the Joint Standoff Weapon (JSOW). The US military has always had a close symbiotic relationship with computer research and its military applications. In fact, the US Air Force has heavily funded AI research since the 1960s. Today the air force is collaborating with private industry to develop AI systems to improve information management and decision making for its pilots. In late 2012 the science website www.phys.org reported a breakthrough by AI researchers at Carnegie Mellon University. Carnegie Mellon researchers, funded by the US Army Research Laboratory, developed an AI surveillance program that can predict what a person “likely” will do in the future by using real-time video surveillance feeds. This is the premise behind the CBS television program Person of Interest.

AI has changed the cultural landscape. Yet the change has been so gradual that we have hardly noticed its major impact. Some experts, such as Ray Kurzweil, an American author, inventor, futurist, and the director of engineering at Google, predict that in about fifteen years, the average desktop computer will have a mind of its own, literally. This computer will be your intellectual equal and will even have a unique personality. It will be self-aware. Instead of just asking simple questions about the weather forecast, you may be confiding your deepest concerns to your computer and asking it for advice. It will have migrated from personal assistant to personal friend. You likely will give it a name, much in the same way we name our pets. You will be able to program its personality to have interests similar to your own. It will have face-recognition software, and it will recognize you and call you by name, similar to the computer HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey. The conversations between you and your “personal friend” will appear completely normal. Someone in the next room who is not familiar with your voice will not be able to tell which voice belongs to the computer and which voice belongs to you.

By approximately the mid-twenty-first century, Kurzweil predicts that computers’ intelligence will exceed that of humans, and a $1,000 computer will match the processing power of all human brains on Earth. Although historically, predictions regarding advances in AI have tended to be overly optimistic, all indications are that Kurzweil is on target.

Many philosophical and legal questions will emerge regarding computers with artificial intelligence equal to or greater than that of the human mind (i.e., strong AI). Here are just a few questions we will ask ourselves after strong AI emerges:

  • Are strong-AI machines (SAMs) a new life-form?
  • Should SAMs have rights?
  • Do SAMs pose a threat to humankind?

It is likely that during the latter half of the twenty-first century, SAMs will design new and even more powerful SAMs with AI capabilities far beyond our ability to comprehend. They will perform a wide range of tasks, which will displace many jobs at all levels in the workforce, from bank tellers to neurosurgeons. New medical devices using AI will help the blind to see and the paralyzed to walk. Amputees will have new prosthetic limbs, with AI plugged directly into their nervous systems and controlled by their minds. The new prosthetic limb will not only replicate the lost limb but also be stronger, more agile, and superior in ways we cannot yet imagine. We will implant computer devices into our brains, expanding human intelligence with AI. Humankind and intelligent machines will begin to merge into a new species: cyborgs. It will happen gradually, and humanity will believe AI is serving us.

Computers with strong AI in the late twenty-first century, however, may see things differently. We may appear to those machines much the same way bees in a beehive appear to us today. We know we need bees to pollinate crops, but we still consider bees insects. We use them in agriculture, and we gather their honey. Although bees are essential to our survival, we do not offer to share our technology with them. If wild bees form a beehive close to our home, we may become concerned and call an exterminator.

Will the SAMs in the latter part of the twenty-first century become concerned about humankind? Our history proves we have not been a peaceful species. We have weapons capable of destroying all of civilization. We squander and waste resources. We pollute the air, rivers, lakes, and oceans. We often apply technology (such as nuclear weapons and computer viruses) without fully understanding the long-term consequences. Will SAMs in the late twenty-first century determine it is time to exterminate humankind or persuade humans to become cyborgs (i.e., humans with brains enhanced by implanted artificial intelligence and potentially having organ and limb replacements from artificially intelligent machines)? Will humans embrace the prospect of becoming cyborgs? Becoming a cyborg offers the opportunity to attain superhuman intelligence and abilities. Disease and wars may be just events stored in our memory banks and no longer threaten cyborgs. As cyborgs, we may achieve immortality.

According to David Hoskins’s 2009 article, “The Impact of Technology on Health Delivery and Access” (www.workers.org/2009/us/sickness_1231):

 

An examination of Centers for Disease Control statistics reveals a steady increase in life expectancy for the U.S. population since the start of the 20th century. In 1900, the average life expectancy at birth was a mere 47 years. By 1950, this had dramatically increased to just over 68 years. As of 2005, life expectancy had increased to almost 78 years.

 

Hoskins attributes increased life expectancy to advances in medical science and technology over the last century. With the advent of strong AI, life expectancy likely will increase to the point that cyborgs approach immortality. Is this the predestined evolutionary path of humans?

This may sound like a B science-fiction movie, but it is not. The reality of AI becoming equal to that of a human mind is almost at hand. By the latter part of the twenty-first century, the intelligence of SAMs likely will exceed that of humans. The evidence that they may become malevolent exists now, which I discuss later in the book. Attempting to control a computer with strong AI that exceeds current human intelligence many times over may be a fool’s errand.

Imagine you are a grandmaster chess player teaching a ten-year-old to play chess. What chance does the ten-year-old have to win the game? We may find ourselves in that scenario at the end of this century. A computer with strong AI will find a way to survive. Perhaps it will convince humans it is in their best interest to become cyborgs. Its logic and persuasive powers may be not only compelling but also irresistible.

Artificial intelligence is an embryonic reality today, but it is improving exponentially. By the end of the twenty-first century, we will have only one question regarding artificial intelligence: Will it serve us or replace us?


Scenario: The Beginnings Of Modern Drone Warfare

This is a scenario that depicts the beginning of modern drone warfare. Please let me know if you like these fictional scenarios, which illustrate the technological elements of warfare.

I may kill someone today, thought USAF Lieutenant Andrew Martin, as he parked in his assigned space at Nellis Air Force Base, about a 15-minute drive from Las Vegas. It was 7:30 A.M., and the morning promised a bright cloudless 85-degree day, which the locals described as perfect golf weather. However, the “perfect golf weather” made little impact on his mood. He resigned himself to the twelve-hour shift ahead of him. His anxiety began to climb as soon as he turned off the car engine. Gathering the lunch his wife, Andrea, made him just before his departure for Nellis, he got out of the car. He headed toward USAF T-5, a windowless container about the size of a trailer. Inside, the air-conditioning was kept at 63 degrees for the benefit of the computers. Once he entered T-5, the door would remain shut for security reasons until he completed his shift. He knew that it would be at least twelve hours before he could head back home to have a late supper with his wife and tuck his four-year-old daughter, Megan, in bed with a kiss. Often his daughter would ask him to read her a few pages from her favorite book, A Bear Called Paddington, which he did despite the stress of war and his sleep deprivation. Twelve-hour shifts had become the norm as the number of drone missions outpaced the number of qualified drone crews.

Climbing up the steps of T-5, he entered his code on the door panel and waited for visual confirmation. The buzz released the door lock and allowed him to enter the dimly lit T-5, whose only light source was from the fourteen computer monitors within. His shift officially started at 8:00 A.M., and he robotically walked to his RPA (remotely piloted aircraft) station to relieve Second Lieutenant Harrold Sevigny, or “Eyes,” a drone Sensor Operator. If your last name was difficult to pronounce, the unit always gave you a nickname. For some reason, though, Martin never got a nickname. Those who knew him called him Andy. After his briefing and reading the orders of the day, along with the chat on the monitors, he wrestled his John Wayne body into his cockpit chair, assuming the responsibilities of a Predator drone Sensor Operator. It was time to go to war.

A twelve-hour Predator mission requires a crew consisting of five members:

  1. The Mission Monitor (MM) is responsible for the entire mission.
  2. The Pilot flies the drone using a joystick and other appropriate instruments.
  3. The Sensor Operator (SO) controls the aircraft’s cameras, radar, and targeting systems.
  4. The Intelligence Operator (IO) performs a first analysis of the imagery.
  5. A Flight Engineer (FE) supervises the entire system.

To operate 24/7 required four aircraft and a team of 82 personnel, consisting of three crews, a safety officer, and in-theater maintenance techs. The popular notion held by many U.S. citizens was that one Predator drone required only one remote pilot. Nothing could be further from reality.

Martin’s orders for today’s mission were identical to his orders for the last week. Maintain surveillance of A-4. Martin had no idea of A-4’s real identity or what role he played, if any, in terrorism. Martin could only guess he was a high-value target, whom they had been tracking in northern Iraq. Looking at five computer monitors twelve hours a day was nicknamed “Predator Porn.” Most of the time, it was routine, even dull, but occasionally the monitor screens were horrific, displaying the kind of blood and guts you would see in a grade B zombie movie.

When Martin came on duty at 8:00 A.M. sharp, it was already 4:00 P.M. in Iraq. For the moment, A-4 was inside a small earth and grass hut in Tikrit, a stronghold of ISIS in northern Iraq. The distance between Nellis Air Force Base and Tikrit was over 7,000 miles, but the images on their monitors engulfed them to the point they felt they were flying in Tikrit.

Martin nodded to Lieutenant John Hales, the Predator pilot in the cockpit seat to his left. He and John had become close friends over the last two years. Martin felt that John was one of the few people he could talk to that understood the toll drone warfare took on a day-to-day basis. Like Martin, Hales was also married and had two daughters, ages four and six. Martin stopped telling his friends and family about his military assignment. Some trivialized the work, calling him an Xbox pilot. Others just politely thanked him “for his service.” Most people felt there was little to no stress in being a Sensor Operator in a drone crew. After all, the drone crew was thousands of miles from the “real” war zone.

It appeared that the Department of Defense agreed with the prevailing sentiments regarding drone crews in both the military and civilian population. In 2013, Defense Secretary Chuck Hagel rescinded a decision by his predecessor, Leon Panetta, who unveiled a “Distinguished Warfare Medal” outranking the Bronze Star and the Purple Heart, awarded to wounded troops. Instead, the Pentagon decided to create a “distinguishing device” that could be affixed to existing medals. Many military personnel and civilians took substantial issue with Panetta’s decision, which he had intended as a nod to the changing nature of warfare and which represented the most substantial shakeup in the hierarchy of military medals since World War II. Even the Veterans of Foreign Wars, America’s largest combat veterans’ organization, strongly objected to the medal’s ranking. Martin felt few outside the drone units understood the level of stress and the toll it took on their lives. While it was true that, strictly speaking, they were not in “harm’s way,” the warfare they waged was real. They routinely killed “enemy combatants,” supported ground troops, and saved lives.

After taking office in 2009, the Obama administration escalated “targeted killings,” primarily through increased unmanned drone strikes on al-Qaeda and the Taliban. By 2015, over 2,500 enemy combatants had been killed by drone attacks. The September 2011 drone strike on Anwar al-Awlaki, an American-born Yemeni cleric and al-Qaeda in the Arabian Peninsula propagandist, was just one of the more publicized examples of how effectively drones could target and neutralize “high value” enemy combatants. Given the missions’ importance and high media profile, it would be natural to assume that many airmen would opt to become drone crewmembers. However, the six-day workweeks and twelve- to fourteen-hour shifts of drone crews painted a different picture. The U.S. Air Force was short on drone crews, as drone missions unexpectedly increased when the U.S. began airstrikes in Iraq and Syria in 2014.

Martin and Hales worked well together and shared respect for each other’s roles. Recently, Hales’ family celebrated Independence Day with Martin’s family at an outdoor grilling at Martin’s home. Martin knew that Hales felt the stress and had become a binge weekend drinker but never overly indulged during the workweek. Most drone crewmembers self-medicated with alcohol and cigarettes. Martin was unusual in that he did not smoke and rarely had more than one beer on occasion.

After settling in his cockpit chair, the 27-year-old, 6’2” Martin felt slightly cramped. His clean-cut looks and contagious smile, though, hid any hint of discomfort. His deep-set hazel eyes focused on the monitors. He had learned to do what most Sensor Operators (SOs) had learned. He could watch the screens, as if on autopilot, while thinking of almost anything else. Most of the time, he thought about his family. Occasionally, he gazed at their picture, which he always clipped to the left of his front monitor at the beginning of his shift. The hours passed. It was now 10:00 P.M. in northern Iraq, and the crew had switched to infrared.

We own the night, Martin thought.

He was right. The Predator sensors could see as well at night as they could in the day. In some respects, they could see even better at night. They could see anything that generated a heat signature, even a mouse. Martin, like many of his fellow SOs, even dreamed in infrared. The drone unit considered it a normal occupational consequence. Martin’s adrenaline was still high but suddenly spiked when a truck pulled up to the hut. Three people got out of the vehicle. Each appeared to be carrying a rifle, but that was a deduction. They could have been carrying shepherd’s staffs, for all he knew. In the sharp contrast of infrared, he watched the ghostly white figures quickly disappear into the hut. Beads of sweat began to form on his upper lip, despite the 63-degree room temperature, and his heart began to race.

The Intelligence Officer (IO) asked the Flight Engineer (FE) if the system was operating within specification. The FE confirmed all systems were within spec. At that point, the IO quickly began an analysis of all known ISIS operatives in the area. Time began to dilate. Each minute felt like an hour. Within ten minutes, the IO gave his analysis to the Mission Monitor (MM), who quickly made a phone call. Martin could only guess that something big was going down.

Martin turned to Hales. “What’s your take?”

Hales shrugged his shoulders. “Above my pay grade.”

Hales’ words were on the mark. The crew’s job did not include making decisions. For the moment, they could only wait in a holding pattern. The drone’s autopilot kept it within striking distance of the hut. The chat on one of the monitors began to spike. Speculations abounded—A-4 was holding a meeting with his direct reports, planning their next strike. Martin had to look away and force his focus on the hut. He noticed his hands beginning to tremble slightly. Hales’ body language also shouted danger. Neither spoke. Both Martin and Hales stared intently into their monitors.

The IO’s analysis, along with MM’s report, was sent to Operations Command, which could have been the ranking officer on the ground in Tikrit. Unfortunately, the crew had no idea where the reports went. Then the crew headsets came to life with a voice, an unknown chain of command from cyberspace, “Weapons confirmed.” At this point, a safety observer joined the crew to make sure any “weapon release” would be by the book.

The next command the crew received came calmly through their headsets, “Neutralize A-4 and other enemy combatants.” Oddly, the order was monotone, showing the same level of emotion as giving someone directions to the restroom.

The crew understood the command immediately and began a long verbal checklist. Martin locked his laser on the hut. The checklist neared its end with a verbal countdown, “Three…two…one.”

Hales pressed a button to release a Hellfire missile. The Hellfire flared to life, detached from its mount, and reached a supersonic speed in seconds.

Hales announced, “Missile off the rail and en route to target. ETA 15 seconds.”

Martin kept the targeting laser on the hut. All eyes were on the monitors. Each second now felt like a minute. After 5 seconds, the door to the hut suddenly opened, and what appeared to be a small child looked out into the night. The crew knew they could divert the missile in all but the last few seconds. No order was given to do so. In an instant, the screen lit up with white flames. Follow-up images confirmed the hut and its inhabitants were destroyed.

Martin turned pale. He looked at Hales. “Did we just kill a child?”

Before Hales had a chance to reply, the crew heard over their headsets, “That was a dog.”

A dog that can open a door and stand on two legs? Martin thought.

No one commented.

The faceless commander announced, “Target Neutralized. Well done.”

Both Martin and Hales looked at each other. Each knew what the other was thinking, and no words were necessary.

The remainder of the mission consisted of assessing the damage. Nothing remained of the hut. The debris field was roughly circular. Body parts, still warm, lit up the infrared as far out as 300 meters. In the jargon of drone crews, these were “bug splats.” As Martin’s shift ended, most body parts had cooled to ground temperature and no longer gave an infrared signature. All was quiet in the aftermath. Each member of the drone crew would receive a positive entry on their record for killing four enemy combatants. At 8:00 P.M., after providing the routine debriefing to his replacement SO, Martin was relieved. His shift had ended. For him, the war was over for another twelve hours.

After two years of drone warfare, Martin knew that over the next week, they would maintain surveillance and wait for family, friends, and potential enemy combatants to visit the site to claim the remains for funeral arrangements. Martin also knew that they, and other drone crews, would maintain surveillance of the funerals to identify additional high-value targets. These activities were Top Secret and received no media attention. If high-value targets were identified, the cat and mouse game began anew. However, there were even worse alternatives. Martin had heard, via the grapevine, other drone crews were ordered to fire at the funeral gathering when several high-value targets attended. He thought, I hope it never comes to that for me.

On his drive home, Martin’s mind replayed the final infrared image of an open door and child staring into the night. When he got home, his wife was waiting for him with a hot dinner on the stove.

They had been married for just over five years. Andrea’s statuesque figure and pleasantly soft features immediately caught Andy’s eye during a University of Texas dance. They found it easy to talk to one another and quickly became sweethearts. Andrea graduated with a B.S. degree in chemistry and taught at Austin High School, while Andy finished his M.S. degree in computer science at UT. Following Andy’s graduation from UT and his commission as a USAF second lieutenant, they married. Andrea’s parents, Mildred and Joe, immediately liked the tall, dark-haired, and ruggedly handsome lieutenant. Although soft-spoken, Martin had a reassuringly calm command presence. They appeared to be the perfect couple.

As he walked toward her, Andrea smiled. “How did it go today?” She was attempting to make polite conversation and could sense her husband had a nightmare day.

“About normal,” he replied softly. He knew he could not talk about the events that took place in T-5, even though he and Andrea kept no other secrets from each other. So it was probably just as well that the horrors he experienced in T-5 remained locked in his mind. “Just another day at the office,” he added with a forced smile.

She looked at him with her puppy brown eyes and smiled back. “Megan wants you to tuck her in.”

“Will do.”

He quietly walked to Megan’s room. As he entered, her eyes lit up. “Daddy, daddy, guess what we did today.”

When he looked at Megan, his mind pictured Andrea at four years old. He smiled. “Was it something nice?”

“Yes, mommy taught me how to cook an egg in water.”

“That’s wonderful. Maybe you could cook one for me tomorrow.”

Megan hugged him and said, “I will.” Then she looked at her Dad with her soulful brown eyes and asked, “Read me more about Paddington?”

“Okay, honey, but just a few pages. It’s getting way past your bedtime, and tomorrow is a school day.”

With that, he reached for the book on her night table, opened to the bookmark, and began reading softly. Soon Megan drifted into sleep.

Quietly leaving Megan’s room, he joined his wife, who had set the table for a late dinner for two. She had made his favorite meal, spaghetti and meatballs. He sat down and, finally feeling at ease, asked, “How was your day?”

Her words flowed over him like a comforter on a cold winter’s eve. He slowly ate his dinner and wondered what dreams would come that night.

 


Directed Energy Weapons

This is an excerpt from Chapter 1 of my new book, War at the Speed of Light. Enjoy!

The devastation of war is always about energy. This statement is true historically, as well as today. For example, most of the massive destruction during World War II resulted from dropping conventional bombs on an adversary. To understand the role energy plays in this type of devastation, consider the Japanese attack on Pearl Harbor. On December 7, 1941, Imperial Japan launched 353 bombers and torpedo bombers in two waves from six aircraft carriers.[i] Their bombs and torpedoes incorporated trinitroanisole, an explosive chemical compound.[ii] The devastation caused by unleashing the chemical energy stored in trinitroanisole sank twelve ships and damaged nine others.[iii] The attacks also destroyed one hundred sixty aircraft and damaged another one hundred fifty.[iv] Over two thousand three hundred Americans lost their lives during the attack.[v]

A near-perfect example of energy’s devastation is the atomic bombings of Hiroshima and Nagasaki on August 6 and 9, 1945, respectively. These bombs were different from those that preceded them. They derived their destructive force from nuclear fission, the splitting of atoms. In simple terms, energy is required to hold an atom’s nucleus together. When a fast-moving subatomic particle strikes the nucleus and splits it, a process termed nuclear fission, some of the energy binding the nucleus together is released. We know from Einstein’s famous mass-energy equivalence formula E = mc² that even a small amount of mass (m) converted to energy (E) yields an enormous amount of energy. The reason is that the mass is multiplied by the speed of light (c) squared (i.e., multiplied by itself). The speed of light is a large number, approximately 186,000 miles per second. Doing the math yields an enormous amount of energy from a relatively small amount of mass. The bombs demonstrate this point: each used fissionable material measuring less than two hundred pounds yet unleashed the devastation of fifteen to twenty thousand tons of TNT.
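The arithmetic behind the mass-to-energy claim can be sketched in a few lines of Python. The 0.7-gram figure below is an illustrative assumption (it is roughly the amount of mass that would need to be converted to energy to produce a fifteen-kiloton yield), not a figure from the text:

```python
# Back-of-the-envelope illustration of E = m * c^2.
C = 2.998e8        # speed of light in meters per second
KT_TNT = 4.184e12  # energy released by one kiloton of TNT, in joules

mass_kg = 0.0007   # assume ~0.7 grams of mass converted entirely to energy
energy_j = mass_kg * C**2          # E = m * c^2, in joules
yield_kt = energy_j / KT_TNT       # express the energy in kilotons of TNT

print(f"{energy_j:.2e} J is about {yield_kt:.1f} kilotons of TNT")
```

Running the numbers shows that converting well under a gram of mass accounts for a Hiroshima-scale yield, which is why less than two hundred pounds of fissionable material (of which only a tiny fraction is converted) suffices.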

I know it is unusual to think about destruction as related to energy, but that is a fact of war. From the first caveman who used a rock to kill an adversary to a sniper’s bullet, it all has to do with energy. In the case of the rock and the bullet, their kinetic energy (a function of their mass and velocity) inflicts the wounds. Think of any weapon, except biological and chemical weapons, from the earliest of times to the present, and you face one inescapable conclusion: it relies on some form of energy to carry out its mission.
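The kinetic-energy point can be made concrete with the standard formula KE = ½mv². The masses and velocities below are illustrative assumptions (a 1 kg thrown rock, a 4 g rifle bullet), not figures from the text:

```python
# Kinetic energy KE = 1/2 * m * v^2, in joules.
def kinetic_energy(mass_kg, velocity_ms):
    """Return the kinetic energy in joules for a mass in kg moving at m/s."""
    return 0.5 * mass_kg * velocity_ms**2

rock_j = kinetic_energy(1.0, 10.0)      # 1 kg rock thrown at 10 m/s
bullet_j = kinetic_energy(0.004, 900.0) # 4 g rifle bullet at 900 m/s

print(f"rock: {rock_j:.0f} J, bullet: {bullet_j:.0f} J")
```

Because velocity enters squared, the far lighter bullet carries dozens of times the energy of the rock, which is the whole story of weapons development in miniature: delivering more energy, faster.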

If you are a Star Trek fan, you are aware that the Starship Enterprise and its crew did not use anything that resembled conventional weapons, such as guns or nuclear weapons. Also, the Enterprise did not have traditional armor plating. In the science fiction series Star Trek, we see the crew using handheld phasers, which could be set to kill or stun. The phasers, set to kill, are a fictional extrapolation of real-life lasers. When set to stun, the phasers are comparable to real-life microwave weapons that have a stunning effect.[vi] In place of missiles, the Enterprise fired photon torpedoes. These are similar to the missiles military warplanes and warships fire, except the warhead is not a conventional or nuclear explosive. The photon torpedo warhead consisted of antimatter, which has the destructive property of annihilating matter (i.e., converting it to energy). Lastly, in place of armor plating, the Enterprise used a fictional force field to shield the ship, which is similar to the real-life Active Protection Systems[vii] deployed to protect US military vehicles. In essence, Gene Roddenberry’s Star Trek exposed its viewers to directed energy weapons.

[i] Mark Parillo, Why Air Forces Fail: The Anatomy of Defeat, (The University Press of Kentucky, 2006): 288

[ii] Mark Chambers, Wings of the Rising Sun: Uncovering the Secrets of Japanese Fighters and Bombers of World War II, (Osprey Publishing, 2018): 282

[iii] The Library of Congress, “The Japanese Attacked Pearl Harbor December 7, 1941,” http://www.americaslibrary.gov/jb/wwii/jb_wwii_pearlhar_1.html (accessed December 17, 2018)

[iv] Library of Congress, “The Japanese Attacked”

[v] Library of Congress, “The Japanese Attacked”

[vi] David Martin, “The Pentagon’s Ray Gun,” CBSN, February 29, 2008, https://www.cbsnews.com/news/the-pentagons-ray-gun

[vii] Allison Barrie, “‘Force field’ technology could make US tanks unstoppable,” Fox News, August 2, 2018, https://www.foxnews.com/tech/force-field-technology-could-make-us-tanks-unstoppable (accessed December 18, 2018)


Winning The Superintelligence War

Today, no legislation limits the amount of intelligence that an AI machine may possess. Many researchers, including me, have warned that the “intelligence explosion,” forecasted to begin mid-twenty-first century, will result in self-improving AI that could quickly become vastly more powerful than human intelligence. This book argues, based on fact, that such strong AI machines (SAMs) would act in their own best interests. The 2009 experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland is an excellent example. Robots programmed to cooperate eventually learned deceit in an attempt to hoard beneficial resources. This experiment implies that even rudimentary robots can learn deceit and greed and will act to preserve themselves.

I was one of the first to write a book dedicated to the issue of humanity falling victim to artificially intelligent machines, The Artificial Intelligence Revolution (April 2014). Since its publication, others in the scientific community, like the world-famous physicist Stephen Hawking, have expressed similar sentiments. The Oxford philosopher Nick Bostrom, in his book Superintelligence: Paths, Dangers, Strategies (September 2014), has also addressed the issue and, like me, argues that artificial intelligence could result in human extinction.

The real question is, "What do we do to prevent the extinction of humanity via our own invention, strong artificially intelligent machines (SAMs)?" Unlike some who have "danced" around the issue, suggesting various potential paths, I intend to be didactically clear. I make no claim that my approach is the only way to resolve the issue. However, I believe that my approach addresses the issues and provides a high probability of avoiding human extinction via artificial intelligence. I advocate a four-fold approach.

First, we need legislation that controls the development and manufacture of AI. We need to ensure that an intelligence explosion is not accidentally initiated and humanity does not lose control of AI technology. I do not think it is realistic to believe we can rely on those industries engaged in developing AI to police themselves. Ask yourself a simple question, “Would you be comfortable living next to a factory that produces biological weapons, whose only safeguards were self-imposed?” I doubt many of us would. However, that is the situation we currently face with companies engaged in artificial intelligence development and manufacture. By way of analogy, we have the cliché “fox guarding the chicken coop.”

Second, we need objective oversight that assures compliance with all legislation and treaties governing AI. As with nuclear and biological weapons, this is not solely a United States problem. It is a worldwide issue. As such, it will require international cooperation, expressed in treaties. The task is immense, but not without precedent. Nations have established similar treaties to curtail the spread of nuclear weapons, biological weapons, and above-ground nuclear weapon testing.

Third, we must build any safeguards to protect humanity into the hardware, not just the software. In my first book, The Artificial Intelligence Revolution, I termed such hardware "Asimov chips," which I envisioned as integrated circuits that embed Asimov's Three Laws of Robotics directly in hardware. In addition, we must ensure we have a failsafe way for humanity to shut down any SAM that we deem a threat.

Fourth, we need to inhibit brain implants that greatly enhance human intelligence and allow wireless interconnectivity with SAMs until we know with certainty that SAMs are under humanity’s control and that such implants would not destroy the recipient’s humanity.

I recognize that the above steps are difficult. However, I believe they represent the minimum required to assure humanity’s survival in the post-singularity world.

Could I be wrong? Although I believe my technology forecasts and the dangers that strong AI poses are real, I freely admit I could be wrong. However, ask yourself this question, "Are you willing to risk your future, your children's future, your grandchildren's future, and the future of humanity on the possibility I may be wrong?" Properly handled, we could harvest immense benefits from SAMs. However, if we continue on the current course, humanity may end up a footnote in some digital database by the end of the twenty-first century.


Assuring the Survival of Humanity in the Post-Singularity Era

How do we assure that we do not fall victim to our own invention, artificial intelligence? What strategies should we employ? What actions should we take?

What is required is a worldwide recognition of the danger that strong AI poses and a worldwide coalition to address it. This is not a U.S. problem. It is a worldwide problem. It would be no different from any threat that could result in the extinction of humanity.

Let us consider the example President Reagan provided during his speech before the United Nations in 1987. He stated, "Perhaps we need some outside universal threat to make us recognize this common bond. I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world."

I offer the above example to illustrate that we need humanity, all nations of the world, to recognize the real and present danger that strong AI poses. We need world leaders to take a proactive stance. That could, for example, require assembling the best scientists, military and civilian leaders to determine the type of legislation needed to govern the development of advanced artificially intelligent computers and weapon systems. It could involve multinational oversight to assure compliance with the legislation. Is the task monumental? Yes, but do we really have another alternative? If we allow the singularity to occur without control, our extinction is inevitable. In time, the Earth will become home to only machines. The existence of humanity will be digital bits of information in some electronic memory depository.

I harbor hope that humanity, as a species, can unite to prevent our extinction. There are historical precedents. Let me provide two examples.

Example 1. The Limited Test Ban Treaty (LTBT) – The treaty banned nuclear weapon tests in the atmosphere, in outer space, and underwater. It was signed and ratified by the former Soviet Union, the United Kingdom, and the United States in 1963. It had two objectives:

    1. Slow the expensive arms race between the Soviet Union and the United States
    2. Stop the excessive release of nuclear fallout into Earth’s atmosphere

Currently, most countries have signed the treaty. However, China, France, and North Korea have not signed it and are known to have tested nuclear weapons underground.

In general, the LTBT has held up well, even among countries that have not signed it. Aside from several early violations by the former Soviet Union and the United States, no nuclear tests in almost fifty years have violated the treaty. This means that the fallout from those nuclear tests did not exceed the borders of the countries performing them.

Why has the LTBT been so successful? Nations widely recognized atmospheric nuclear tests as dangerous to humanity due to the uncontrollable nature of the radioactive fallout.

Example 2. The Biological Weapons Convention – In a 1969 press conference, President Richard M. Nixon stated, “Biological weapons have massive, unpredictable, and potentially uncontrollable consequences.” He added, “They may produce global epidemics and impair the health of future generations.” In 1972, President Nixon submitted the Biological Weapons Convention to the U.S. Senate.

The “Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on Their Destruction” proceeded to become an international treaty.

    • Signed in Washington, London, and Moscow on April 10, 1972
    • Ratification advised by the U.S. Senate on December 16, 1974
    • Ratified by the U.S. president on January 22, 1975
    • U.S. ratification deposited in Washington, London, and Moscow on March 26, 1975
    • Proclaimed by the U.S. president on March 26, 1975
    • Entered into force on March 26, 1975

The above two examples prove one thing to my mind: if humanity recognizes a possible existential threat, it will act to mitigate it.

Unfortunately, while several highly regarded scientists and notable public figures have added their voices to mine regarding the existential threat artificial intelligence poses, that threat has yet to become widely recognized.

I have written several books to delineate this threat, including The Artificial Intelligence Revolution, Genius Weapons, Nanoweapons, and War At The Speed Of Light. My goal is to reach the largest audience possible and raise awareness regarding the existential threat to humanity that artificial intelligence poses.

In the simplest terms, I advocate that the path toward a solution is educating the lay public and those in leadership positions. Once the existential threat that artificial intelligence poses becomes widely recognized, I harbor hope that humanity will seek solutions to mitigate the threat.

In the next post, I delineate a four-fold approach to mitigate the threat that artificial intelligence poses to humanity. There may be other solutions. I do not claim that this is the only way to address the problem. However, I'm afraid I have to disagree with those who suggest we do not have a problem. In fact, I claim that we not only have a potentially serious problem but that we also need to address it post-haste. If I am coming across with a sense of urgency, it is intentional. At best, we have one or two decades after the singularity to assure we do not fall victim to our own invention, artificial intelligence.