What is driving the development of autonomous weapons? Two forces are at work:

  1. Technology: AI technology, which provides the intelligence of autonomous weapon systems (AWS), is advancing exponentially. AI experts predict that autonomous weapons, which would select and engage targets without human intervention, will debut within years, not decades. Indeed, a limited number of autonomous weapons already exist. For now, they are the exception; in the future, they will dominate conflict.
  2. Humanity: In 2016, attendees of the World Economic Forum were asked, “If your country was suddenly at war, would you rather be defended by the sons and daughters of your community, or an autonomous A.I. weapons system?” A majority, 55%, responded that they would prefer the autonomous AI weapons system. This result suggests a widespread desire to have robots, sometimes referred to as “killer robots,” fight wars rather than risk human lives.

The use of AI technology in warfare is not new. The first large-scale use of “smart bombs” by the United States during Operation Desert Storm in 1991 made it apparent that AI had the potential to change the nature of war. The word “smart” in this context means “artificially intelligent.” The world watched in awe as the United States demonstrated the surgical precision of smart bombs, which neutralized military targets and minimized collateral damage. In general, using autonomous weapon systems in conflict offers highly attractive advantages:

  • Economic: Reducing costs and personnel
  • Operational: Increasing the speed of decision-making, reducing dependence on communications, reducing human errors
  • Security: Replacing or assisting humans in harm’s way
  • Humanitarian: Programming killer robots to comply with the international humanitarian laws of war more consistently than human soldiers

Even with these advantages, there are significant downsides. For example, when warfare becomes just a matter of technology, will engaging in war become more attractive? No commanding officer has to write a letter to mothers and fathers, wives and husbands, when a drone is lost in battle. Politically, it is more palatable to report equipment losses than human casualties. In addition, a country with superior killer robots holds both a military advantage and a psychological advantage. To understand this, consider the second question posed to attendees of the 2016 World Economic Forum: “If your country was suddenly at war, would you rather be invaded by the sons and daughters of your enemy, or an autonomous A.I. weapon system?” A significant majority, 66%, said they would prefer to face human soldiers.

In May 2014, a Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS) was held at the United Nations in Geneva to discuss the ethical dilemmas these weapons pose, such as:

  • Can sophisticated computers replicate the human intuitive moral decision-making capacity?
  • Is human intuitive moral perceptiveness ethically desirable? If the answer is yes, then the legitimate exercise of deadly force should always require human control.
  • Who is responsible for the actions of a lethal autonomous weapons system? If the machine follows a programmed algorithm, is the programmer responsible? If the machine can learn and adapt, is the machine itself responsible? Or is the operator or country that deploys LAWS responsible?

In general, there is growing worldwide concern about taking humans “out of the loop” in the exercise of legitimate lethal force.

This is an excerpt from my new book, Genius Weapons, now on sale on Amazon. Give yourself and others the gift of knowledge.