In our last post we raised questions regarding the ethics of technology, a field typically divided into two categories:
- Roboethics: This category focuses on the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings.
- Machine ethics: This category focuses on the moral behavior of artificial moral agents (AMAs).
Let us start by discussing roboethics. In 2002 the Italian engineer Gianmarco Veruggio coined the term “roboethics,” which refers to the morality of how humans design, construct, use, and treat robots and other artificially intelligent beings. Specifically, it considers how AI may be used to benefit and/or harm humans. This raises the question of robot rights: what moral obligations does society have toward its artificially intelligent machines? In many ways this question parallels that of society’s moral obligations toward animals. For computers with strong AI, it may even parallel the concept of human rights, such as the right to life, liberty, freedom of thought and expression, and even equality before the law.
How seriously should we take roboethics? At this point no intelligent machine completely emulates a human brain. Ray Kurzweil, however, predicts that such a machine will exist by 2029. By some accounts he is a pessimist: bioethicist Glenn McGee predicts that humanoid robots may appear by 2020. Although predictions regarding AI are often optimistic, Kurzweil, as mentioned, has been on target about 94 percent of the time. It is therefore reasonable to believe that within a decade or two we will have machines that fully emulate a human brain, which makes it necessary to take the concept of robot rights, and the implications of granting them, seriously. In fact this is beginning to occur; the issue of robot rights has been under consideration by the Institute for the Future and by the UK Department of Trade and Industry (“Robots Could Demand Legal Rights,” BBC News, December 21, 2006).
At first the entire concept of robot rights may seem absurd. Since we do not yet have machines that exactly emulate a human brain, the possibility does not appear to be in our national consciousness. Let us fast-forward to 2029, however, and assume Kurzweil’s prediction is correct. Suddenly we have artificial intelligence that is on an equal footing with human intelligence and appears to exhibit human emotion. Do we, as a nation, concede that we have created a new life-form? Do we grant robots rights under the law? The implications are serious: if we grant strong-AI robots rights equal to human rights, we may be giving up our right to control the singularity. Indeed, robot rights may eventually override human rights.
Consider this scenario. As humans, we have inalienable rights, namely the right to life, liberty, and the pursuit of happiness (not all political systems agree with this). In the United States, the Bill of Rights protects our rights. If we give machines with strong AI the same rights, will we be able to control the intelligence explosion once each generation of strong-AI machines (SAMs) designs another generation with even greater intelligence? Will we have the right to control the machines? Will we be able to decide how the singularity unfolds?
We adopted animal rights to protect animals in circumstances in which they are unable to protect themselves. We saw this as humane and necessary. However, animal rights do not parallel human rights. In addition, humankind reserves the right to exterminate any organism (such as the smallpox virus) that threatens humankind’s existence. Intelligent machines pose a threat similar to, and perhaps even more dangerous than, that of extremely harmful pathogens (viruses and bacteria), which makes the entire issue of robot rights all the more important. If machines gain rights equal to those of humans, there is little doubt that the intelligence of SAMs will eventually eclipse that of humans; no law would prevent this from happening. At that point will machines demand greater rights than humans? Will machines pose a threat to human rights? This brings us to another critical question: What moral obligations (machine ethics) do intelligent machines have toward humankind?
We will address the above question in our next post.
Source: The Artificial Intelligence Revolution (2014), Louis Del Monte
Should we go that far with artificial intelligence? What could be the ramifications for human life?
Hello Mr. Del Monte,
Very nice article.
Assuming strong AI is finally achieved (a big if to some people), what would they (the AIs) feel toward us? We would be their creators, their parents, their “gods”… however, our shortcomings, fragility, and other weaknesses will be so obvious to them.
Will they just wipe us from existence? Will they help us grow? I think one often overlooked possibility is that we grow together, in a kind of “symbiotic” relationship: both species, strong-AI entities and humans.
But strong AI is not the only possibility for the coming singularity. We, the human race, could turn into something new: a better being, a super-human, a post-singularity human. What that will be is something I cannot yet grasp, but it feels closer every day. When we use tools like smartphones and tablets, with instant access to almost all the collective knowledge of the human species, we are able to do things that previous generations would have thought impossible. Sometimes the feeling of having access to all this information is overwhelming, and other times it feels so natural, so “right,” that it makes me wonder whether I could ever go back to a previous, disconnected state. But it is not only the access to information and resources; it’s also the ability to coordinate the efforts of people all over the world. It’s reshaping the way the world works.
I hope that if/when the singularity happens, human beings can have a place in the new order of things. If that is the case, I really hope I get to see the other side of the singularity…
I’ll keep reading your articles. Very interesting subject.
Best regards
Quoted from above…
“Consider this scenario. As humans, we have inalienable rights, namely the right to life, liberty, and the pursuit of happiness (not all political systems agree with this). In the United States, the Bill of Rights protects our rights. If we give machines with strong AI the same rights, will we be able to control the intelligence explosion once each generation of strong-AI machines (SAMs) designs another generation with even greater intelligence? Will we have the right to control the machines? Will we be able to decide how the singularity unfolds?”
That’s not a common belief… not on the planet, and, sad to say, not within the U.S. itself, despite the presence of these fine words in a seminal work of nation crafting. There is still a very large tendency to dehumanize those who stray from an accepted “normal,” or even to treat those of a given gender or age group as something not deserving of these “inalienable rights.” We’re still working out the kinks of treating HUMANS humanely. We’re not even ready to consider the question of the rights of true non-humans we might not even be able to relate to, short of anthropomorphizing them.
Simple question: what would happen if a strong AI watched the Terminator movies? Would it not absorb the Terminator’s premise that machines are a better choice than the human race? In those films we are portrayed as a plague upon the earth, a virus. Any intelligent being could do better than humans.