

Is Strong Artificial Intelligence a New Life-Form? – Part 3/4

Can we expect an artificially intelligent machine to behave ethically? There is a field of research that addresses this question, namely machine ethics. This field focuses on designing artificial moral agents (AMAs): robots or artificially intelligent computers that behave morally. The concern is not new. More than sixty years ago, Isaac Asimov considered the issue in his collection of nine science-fiction stories, published as I, Robot in 1950. In this book, at the insistence of his editor, John W. Campbell Jr., Asimov proposed his now-famous three laws of robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except in cases where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Asimov, however, expressed doubts that the three laws would be sufficient to govern the morality of artificially intelligent systems. In fact, he spent much of his time testing the boundaries of the three laws to detect where they might break down or create paradoxical or unanticipated behavior. He concluded that no set of laws could anticipate all circumstances. It turns out Asimov was correct.

To understand just how correct he was, consider a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest that the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine?

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomy. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, “Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass,” www.engadget.com). Could we end up with a Terminator scenario, one in which machines attempt to exterminate the human race? This issue is real, and researchers are addressing it to a limited extent. Some examples include:

  • In 2008 the president of the Association for the Advancement of Artificial Intelligence commissioned a study titled “AAAI Presidential Panel on Long-Term AI Futures.” Its main purpose was to address the aforementioned issue. AAAI’s interim report can be accessed at http://research.microsoft.com/en-us/um/people/horvitz/AAAI_Presidential_Panel_2008-2009.htm.
  • Popular science-fiction author Vernor Vinge suggests in his writings that the scenario of some computers becoming smarter than humans could be anywhere from somewhat to extremely dangerous for humans (Vernor Vinge, “The Coming Technological Singularity: How to Survive in the Post-Human Era,” Department of Mathematical Sciences, San Diego State University, 1993).
  • In 2009 academics and technical experts held a conference to discuss the hypothetical possibility that intelligent machines could become self-sufficient and able to make their own decisions (John Markoff, “Scientists Worry Machines May Outsmart Man,” The New York Times, July 26, 2009). They noted that (1) some machines have acquired various forms of semiautonomy, including the ability to find power sources and independently choose targets to attack with weapons, and (2) some computer viruses can evade elimination and have achieved “cockroach intelligence.”
  • The Singularity Institute for Artificial Intelligence stresses the need to build “friendly AI” (i.e., AI that is intrinsically friendly and humane). In this regard Nick Bostrom, a Swedish philosopher at St. Cross College at the University of Oxford, and Eliezer Yudkowsky, an American blogger, writer, and advocate for friendly artificial intelligence, have argued for decision trees over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (a brief illustration of this point follows this list). Bostrom has also published a paper, “Existential Risks,” in the Journal of Evolution and Technology, which states that artificial intelligence has the capability to bring about human extinction.
  • In 2009 authors Wendell Wallach and Colin Allen addressed the question of machine ethics in Moral Machines: Teaching Robots Right from Wrong (New York: Oxford University Press). In this book they brought greater attention to the controversial issue of which specific learning algorithms to use in machines.
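To make the transparency argument mentioned above concrete, here is a minimal sketch of why a decision tree is considered more inspectable than a neural network. It assumes the scikit-learn library; the toy data, feature names, and "engage/hold" labels are invented purely for illustration and do not come from Bostrom's or Yudkowsky's work.

```python
# Minimal sketch (assumes scikit-learn is installed): a decision tree's learned
# policy can be printed as human-readable if/then rules, whereas a neural
# network trained on the same data exposes only numeric weight matrices.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical toy data: [proximity_to_human, target_is_armed] -> action
# (0 = hold fire, 1 = engage). Values are illustrative only.
X = [
    [0.9, 0],  # close to a human, target unarmed
    [0.8, 1],  # close to a human, target armed
    [0.1, 1],  # far from humans, target armed
    [0.2, 0],  # far from humans, target unarmed
]
y = [0, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the learned rules in plain language; this readability is the
# "transparency and predictability" property the friendly-AI argument cites.
print(export_text(tree, feature_names=["proximity_to_human", "target_is_armed"]))
```

Running the sketch prints a small set of threshold rules that a human reviewer can audit line by line, which is the crux of the argument for decision trees over opaque learning methods.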

While the above discussion indicates there is an awareness that strong-AI machines (SAMs) may become hostile toward humans, no legislation or regulation has resulted. AI remains an unregulated branch of engineering, and the computer you buy eighteen months from now will be twice as capable as the one you can buy today.

Where does this leave us? We will address the key questions in the next post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Is Strong Artificial Intelligence a New Life-Form? – Part 2/4

In our last post we raised questions regarding the ethics of technology, which is typically divided into two categories.

  1. Roboethics: This category focuses on the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings.
  2. Machine ethics: This category focuses on the moral behavior of artificial moral agents (AMAs).

Let us start by discussing roboethics. In 2002 Italian engineer Gianmarco Veruggio coined the term “roboethics,” which refers to the morality of how humans design, construct, use, and treat robots and other artificially intelligent beings. Specifically, it considers how AI may be used to benefit and/or harm humans. This raises the question of robot rights: What moral obligations does society have toward its artificially intelligent machines? In many ways this question parallels the question of society’s moral obligations toward animals. For computers with strong AI, it may even parallel the concept of human rights, such as the right to life, liberty, freedom of thought and expression, and even equality before the law.

How seriously should we take roboethics? At this point no intelligent machine completely emulates a human brain. Kurzweil, however, predicts that such a machine will exist by 2029. By some accounts he is even a pessimist: bioethicist Glenn McGee predicts that humanoid robots may appear by 2020. Although predictions regarding AI are often optimistic, Kurzweil, as mentioned, has been on target about 94 percent of the time. Therefore it is reasonable to believe that within a decade or two we will have machines that fully emulate a human brain. Given this, it is necessary to take seriously the concept of robot rights and the implications of granting them. In fact this is beginning to occur: the issue of robot rights has been under consideration by the Institute for the Future and by the UK Department of Trade and Industry (“Robots Could Demand Legal Rights,” BBC News, December 21, 2006).

At first the entire concept of robot rights may seem absurd. Since we do not have machines that emulate a human brain exactly, this possibility does not appear to be in our national consciousness. Let us fast-forward to 2029, however, and assume Kurzweil’s prediction is correct. Suddenly we have artificial intelligence that is on equal footing with human intelligence and appears to exhibit human emotion. Do we, as a nation, concede that we have created a new life-form? Do we grant robots rights under the law? The implications are serious; if we grant strong-AI robots rights equal to human rights, we may be giving up our right to control the singularity. Indeed, robot rights may eventually override human rights.

Consider this scenario. As humans, we have inalienable rights, namely the right to life, liberty, and the pursuit of happiness (not all political systems agree with this). In the United States, the Bill of Rights protects our rights. If we give machines with strong AI the same rights, will we be able to control the intelligence explosion once each generation of strong-AI machines (SAMs) designs another generation with even greater intelligence? Will we have the right to control the machines? Will we be able to decide how the singularity unfolds?

We adopted animal rights to protect animals in circumstances in which they are unable to protect themselves. We saw this as humane and necessary. However, animal rights do not parallel human rights. In addition, humankind reserves the right to exterminate any life-form (such as the smallpox virus) that threatens humankind’s existence. Intelligent machines pose a threat similar to, and perhaps even greater than, that posed by extremely harmful pathogens (viruses and bacteria), which makes the entire issue of robot rights all the more important. If machines gain rights equal to those of humans, there is little doubt that the intelligence of SAMs will eventually eclipse that of humans. No law would prevent this from happening. At that point will machines demand greater rights than humans? Will machines pose a threat to human rights? This brings us to another critical question: What moral obligations (machine ethics) do intelligent machines have toward humankind?

We will address the above question in our next post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte