In our last post we raised questions regarding the ethics of artificial intelligence, a field typically divided into two categories.

  1. Roboethics: This category focuses on the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings.
  2. Machine ethics: This category focuses on the moral behavior of artificial moral agents (AMAs).

Let us start with roboethics. In 2002 Italian engineer Gianmarco Veruggio coined the term “roboethics” to refer to the morality of how humans design, construct, use, and treat robots and other artificially intelligent beings. Specifically, it considers how AI may be used to benefit and/or harm humans. This raises the question of robot rights: what moral obligations does society have toward its artificially intelligent machines? In many ways this question parallels that of society’s moral obligations toward animals. For computers with strong AI, it may even parallel the concept of human rights, such as the right to life, liberty, freedom of thought and expression, and even equality before the law.

How seriously should we take roboethics? At this point no intelligent machine completely emulates a human brain. Kurzweil, however, predicts that such a machine will exist by 2029. By some accounts he is a pessimist: bioethicist Glenn McGee has predicted that humanoid robots may appear by 2020. Although predictions regarding AI are often optimistic, Kurzweil, as mentioned, has been on target about 94 percent of the time. It is therefore reasonable to believe that within a decade or two we will have machines that fully emulate a human brain. Accordingly, we need to take seriously the concept of robot rights and the implications of granting them. In fact this is beginning to occur: the issue of robot rights has been under consideration by the Institute for the Future and by the UK Department of Trade and Industry (“Robots Could Demand Legal Rights,” BBC News, December 21, 2006).

At first the entire concept of robot rights may seem absurd. Since no machine yet emulates a human brain exactly, the possibility has barely entered our national consciousness. Fast-forward to 2029, however, and assume Kurzweil’s prediction is correct. Suddenly we have artificial intelligence on an equal footing with human intelligence, intelligence that appears to exhibit human emotion. Do we, as a nation, concede that we have created a new life-form? Do we grant robots rights under the law? The implications are serious: if we grant strong-AI robots rights equal to human rights, we may be giving up our right to control the singularity. Indeed, robot rights may eventually override human rights.

Consider this scenario. As humans, we hold certain rights to be inalienable, namely the right to life, liberty, and the pursuit of happiness (though not all political systems recognize these). In the United States, the Bill of Rights protects them. If we give machines with strong AI the same rights, will we be able to control the intelligence explosion once each generation of strong-AI machines (SAMs) designs another generation with even greater intelligence? Will we have the right to control the machines? Will we be able to decide how the singularity unfolds?

We adopted animal rights to protect animals in circumstances in which they cannot protect themselves. We saw this as humane and necessary. Animal rights, however, do not parallel human rights. In addition, humankind reserves the right to exterminate any organism (such as the smallpox virus) that threatens its existence. Intelligent machines pose a threat similar to, and perhaps even more dangerous than, extremely harmful pathogens (viruses and bacteria), which makes the issue of robot rights all the more important. If machines gain rights equal to those of humans, there is little doubt that the intelligence of SAMs will eventually eclipse that of humans; no law would prevent it. At that point will machines demand greater rights than humans? Will machines pose a threat to human rights? This brings us to another critical question: what moral obligations (machine ethics) do intelligent machines have toward humankind?

We will address the above question in our next post.

Source: The Artificial Intelligence Revolution (2014), Louis Del Monte