Tag Archives: The Artificial Intelligence Revolution


“The Artificial Intelligence Revolution” Interview Featured On Blog Talk Radio

My interview on Johnny Tan’s program (From My Mama’s Kitchen®) is featured as one of “Today’s Best” on Blog Talk Radio’s home page. This is a great honor. Below is the player from our interview. It displays a slide show of my picture as well as the book cover while it plays the interview.


Will Strong Artificially Intelligent Machines Become Self-Conscious? Part 1/2

A generally accepted definition is that a person is conscious if that person is aware of his or her surroundings. If you are self-aware, it means you are self-conscious. In other words you are aware of yourself as an individual or of your own being, actions, and thoughts. To understand this concept, let us start by exploring how the human brain processes consciousness. To the best of our current understanding, no one part of the brain is responsible for consciousness. In fact neuroscience (the scientific study of the nervous system) hypothesizes that consciousness is the result of the interoperation of various parts of the brain called “neural correlates of consciousness” (NCC). This idea suggests that at this time we do not completely understand how the human brain processes consciousness or becomes self-aware.

Is it possible for a machine to be self-conscious? Obviously, since we do not completely understand how the human brain processes consciousness to become self-aware, it is difficult to definitively argue that a machine can become self-conscious or obtain what is termed “artificial consciousness” (AC). This is why AI experts differ on this subject. Some AI experts (proponents) argue it is possible to build a machine with AC that emulates the interoperation (i.e., it works like the human brain) of the NCC. Opponents argue that it is not possible because we do not fully understand the NCC. To my mind they are both correct. It is not possible today to build a machine with a level of AC that emulates the self-consciousness of the human brain. However, I believe that in the future we will understand the human brain’s NCC interoperation and build a machine that emulates it. Nevertheless this topic is hotly debated.

Opponents argue that many physical differences exist between natural, organic systems and artificially constructed (e.g., computer) systems that preclude AC. The most vocal critic who holds this view is American philosopher Ned Block (1942– ), who argues that a system with the same functional states as a human is not necessarily conscious.

The most vocal proponent who argues that AC is plausible is Australian philosopher David Chalmers (1966– ). In his unpublished 1993 manuscript “A Computational Foundation for the Study of Cognition,” Chalmers argues that it is possible for computers to perform the right kinds of computations that would result in a conscious mind. He reasons that computers perform computations that can capture other systems’ abstract causal organization. Mental properties are abstract causal organization. Therefore computers that run the right kind of computations will become conscious.

This is a good place for us to ask an important question: “How can we determine whether an intelligent machine has become conscious (self-aware)?” We will address this question in the next post (Part 2).

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


How Do Computers Learn (Self-Learning Machines)?

How is it possible to wire together microprocessors, hard drives, memory chips, and numerous other electronic hardware components and create a machine that will teach itself to learn?

Let us start by defining machine learning. The most widely accepted definition comes from Tom M. Mitchell, an American computer scientist and E. Fredkin University Professor at Carnegie Mellon University. Here is his formal definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” In simple terms, machine learning requires a machine to learn much as humans do, namely from experience, and to continue improving its performance as it gains more experience.

Machine learning is a branch of AI; it utilizes algorithms that improve automatically through experience. Machine learning has also been a focus of AI research since the field’s inception. There are numerous computer software programs, known as machine-learning algorithms, that use various computational techniques to predict outcomes of new, unseen experiences. The study of these algorithms’ performance is a branch of theoretical computer science known as “computational learning theory.” What this means in simple terms is that an intelligent machine holds in its memory data relating to a finite set of experiences. The machine-learning algorithms (i.e., software) assess this data for its similarity to a new experience and use a specific algorithm (or combination of algorithms) to guide the machine in predicting an outcome of that new experience. Since the experience data in the machine’s memory is limited, the algorithms are unable to predict outcomes with certainty. Instead they assign a probability to each possible outcome and act in accordance with the highest probability. Optical character recognition is an example of machine learning. In this case the computer recognizes printed characters based on previous examples. As anyone who has ever used an optical character-recognition program knows, however, the programs are far from 100 percent accurate. In my experience the best case is a little more than 95 percent accurate when the text is clear and uses a common font.
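To make this concrete, here is a minimal sketch of my own (not from the book) of probability-based prediction, written in Python and assuming scikit-learn is installed. Its bundled handwritten-digit dataset stands in for optical character recognition: the model is trained on labeled examples (its “experience”), assigns a probability to each possible outcome for a new image, and acts on the most probable one.

```python
# A minimal, illustrative sketch of probability-based prediction.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                      # ~1,800 small images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=5000)   # a simple supervised learner
model.fit(X_train, y_train)                 # "experience" = labeled examples

# For a new, unseen image the model cannot be certain; it assigns a
# probability to each possible digit and acts on the highest one.
probabilities = model.predict_proba(X_test[:1])[0]
best_guess = probabilities.argmax()
print(f"Predicted digit: {best_guess} "
      f"(confidence {probabilities[best_guess]:.2%})")
print(f"Accuracy on unseen examples: {model.score(X_test, y_test):.2%}")
```

As the text notes, accuracy improves with more (and cleaner) experience data but never reaches certainty; the model only ever acts on its highest-probability guess.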

There are eleven major machine-learning algorithms and numerous variations of these algorithms. To study and understand each would be a formidable task. Fortunately, though, machine-learning algorithms fall into three major classifications. By understanding these classifications, along with representative examples of algorithms, we can gain significant insight into the science of machine learning. Therefore let us review the three major classifications and some representative examples.

  1. Supervised learning: This class of algorithms infers a function (a way of mapping or relating an input to an output) from training data, which consists of training examples. Each example consists of an input object and a desired output value. Ideally the inferred function (generalized from the training data) allows the algorithm to analyze new data (unseen instances/inputs) and map it to (i.e., predict) a high-probability output.
  2. Unsupervised learning: This class of algorithms seeks to find hidden structures (patterns in data) in a stream of input (unlabeled data). Unlike in supervised learning, the examples presented to the learner are unlabeled, which makes it impossible to assign an error or reward to a potential solution.
  3. Reinforcement learning: Reinforcement learning was inspired by behaviorist psychology. It focuses on which actions an agent (an intelligent machine) should take to maximize a reward (for example, a numerical value associated with utility). In effect the agent receives rewards for good responses and punishment for bad ones. The algorithms for reinforcement learning require the agent to take discrete time steps and calculate the reward as a function of having taken that step. At this point the agent takes another time step and again calculates the reward, which provides feedback to guide the agent’s next action. The agent’s goal is to collect as much reward as possible (a minimal sketch follows this list).
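As an illustration of the third category, here is a small, hypothetical Q-learning sketch (the toy corridor task and all names are mine, not from the book). The agent takes discrete time steps, receives a reward only for reaching the goal, and uses that feedback to improve its choice of action over repeated episodes.

```python
import random

# Toy corridor: states 0..4. Stepping right from state 3 into state 4 earns
# the only reward, so the agent must learn that "right" beats "left".
N_STATES = 5
ACTIONS = (-1, +1)                        # -1 = step left, +1 = step right
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy: usually exploit the best estimate, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    # Break ties randomly so untrained states are not biased toward one action.
    return max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))

for episode in range(200):
    state = 0
    while state != N_STATES - 1:                      # one discrete time step per loop
        action = choose_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Feedback from this step guides future choices (the Q-learning update).
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)   # after training: {0: 1, 1: 1, 2: 1, 3: 1} -> always step right
```

After a couple of hundred episodes the learned policy prefers stepping right in every state, which is the reward-maximizing behavior in this toy world.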

Machine learning is similar to human learning in many respects. The most difficult issue in machine learning is generalization or what is often referred to as abstraction. This is simply the ability to determine the features and structures of an object (i.e., data) relevant to solving the problem. Humans are excellent when it comes to abstracting the essence of an object. For example, regardless of the breed or type of dog, whether we see a small, large, multicolor, long-hair, short-hair, large-nose, or short-nose animal, we immediately recognize that the animal is a dog. Most four-year-old children immediately recognize dogs. However, most intelligent agents have a difficult time with generalization and require sophisticated computer programs to enable them to generalize.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


When Will A Computer Equal a Human Brain?

If we want to view the human brain in terms of a computer, one approach would be to take the number of calculations per second that an average human brain is able to process and compare that with today’s best computers. This is not an exact science. No one really knows how many calculations per second an average human brain is able to process, but some estimates (www.rawstory.com/rs/2012/06/18/earths-supercomputing-power-surpasses-human-brain-three-times-over) suggest it is on the order of 36.8 petaflops (a petaflop is equal to one quadrillion calculations per second). Let us compare the human brain’s processing power with the best current computers on record, listed below by year and processing-power achievement.

  • June 18, 2012: IBM’s Sequoia supercomputer system, based at the US Lawrence Livermore National Laboratory (LLNL), reached sixteen petaflops, setting the world record and claiming first place in the latest TOP500 list (a ranking of the world’s five hundred fastest computers, based on the LINPACK benchmark, which measures their ability to solve a set of linear equations).
  • November 12, 2012: The TOP500 list certified Titan as the world’s fastest supercomputer per the LINPACK benchmark, at 17.59 petaflops. Developed by Cray Incorporated, Titan is housed at the Oak Ridge National Laboratory.
  • June 10, 2013: China’s Tianhe-2 was ranked the world’s fastest supercomputer, with a record of 33.86 petaflops.

Using Moore’s law (i.e., computer processing power doubles every eighteen months), we can extrapolate that raw computer processing power (in petaflops) will meet or exceed that of the human mind by about 2015 to 2017. This does not mean that by 2017 we will have a computer that is equal to the human mind. Software plays a key role in both processing power (MIPS) and AI.
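For readers who want to see the arithmetic behind such an extrapolation, here is a small illustrative sketch of my own (not from the book). The crossover date it produces depends entirely on which baseline machine and which brain estimate you plug in, which is one reason published projections vary by a few years.

```python
import math

def years_to_reach(target_pflops, current_pflops, doubling_time_years=1.5):
    """Moore's-law extrapolation: years until current performance,
    doubling every eighteen months, reaches the target."""
    doublings_needed = math.log2(target_pflops / current_pflops)
    return doublings_needed * doubling_time_years

BRAIN_ESTIMATE = 36.8   # petaflops, the rough human-brain figure cited above
SEQUOIA_2012   = 16.0   # petaflops, IBM Sequoia, June 2012
TIANHE2_2013   = 33.86  # petaflops, Tianhe-2, June 2013

print(f"From Sequoia (June 2012): ~{years_to_reach(BRAIN_ESTIMATE, SEQUOIA_2012):.1f} years")
print(f"From Tianhe-2 (June 2013): ~{years_to_reach(BRAIN_ESTIMATE, TIANHE2_2013):.1f} years")
# Raw peak petaflops ignore software and sustained (vs. peak) performance,
# so projections of the actual crossover date vary by a few years.
```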

To understand the critical role that software plays, we must understand what we are asking AI to accomplish in emulating human intelligence. Here is a thumbnail sketch of the capabilities that researchers consider necessary.

  • Reasoning: step-by-step reasoning that humans use to solve problems or make logical decisions
  • Knowledge: extensive knowledge, similar to what an educated human would possess
  • Planning: the ability to set goals and achieve them
  • Learning: the ability to acquire knowledge through experience and use that knowledge to improve
  • Language: the ability to understand the languages humans speak and write
  • Moving: the ability to move and navigate, including knowing where it is relative to other objects and obstacles
  • Manipulation: the ability to secure and handle an object
  • Vision: the ability to analyze visual input, including facial and object recognition
  • Social intelligence: the ability to recognize, interpret, and process human psychology and emotions and respond appropriately
  • Creativity: the ability to generate outputs that can be considered creative or the ability to identify and assess creativity

This list makes clear that raw computer processing and sensing are only two elements in emulating the human mind. Obviously software is also a critical element. Each of the capabilities delineated above requires a computer program. To emulate a human mind, the computer programs would need to act both independently and interactively, depending on the specific circumstance.

In terms of raw computer processing, with the development of China’s Tianhe-2 computer, we are on the threshold of having a computer with the raw processing power of a human mind. The development of a computer that will emulate a human mind, however, may still be one, two, or even more decades away, due to software and sensing requirements.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Can an Artificially Intelligent Machine Have Human-like Emotions?

Affective computing is a relatively new science. It is the science of programming computers to recognize, interpret, process, and simulate human affects. The word “affects” refers to the experience or display of feelings or emotions.

While AI has achieved superhuman status in playing chess and quiz-show games, it does not have the emotional equivalence of a four-year-old child. For example a four-year-old may love to play with toys. The child laughs with delight as the toy performs some function, such as a toy cat meowing when it is squeezed. If you take the toy away from the child, the child may become sad and cry. Computers are unable to achieve any emotional response similar to that of a four-year-old child. Computers do not exhibit joy or sadness. Some researchers believe this is actually a good thing. The intelligent machine processes and acts on information without coloring it with emotions. When you go to an ATM, you will not have to argue with the ATM regarding whether you can afford to make a withdrawal, and a robotic assistant will not lose its temper if you do not thank it after it performs a service. Highly meaningful human interactions with intelligent machines, however, will require that machines simulate human affects, such as empathy. In fact some researchers argue that machines should be able to interpret the emotional state of humans and adapt their behavior accordingly, giving appropriate responses for those emotions. For example if you are in a state of panic because your spouse is apparently having a heart attack, when you ask the machine to call for medical assistance, it should understand the urgency. In addition it will be impossible for an intelligent machine to be truly equal to a human brain without the machine possessing human affects. For example how could an artificial human brain write a romance novel without understanding love, hate, and jealousy?

Progress concerning the development of computers with human affects has been slow. In fact this particular computer science originated with Rosalind Picard’s 1995 paper on affective computing (“Affective Computing,” MIT Technical Report #321, abstract, 1995). The single greatest problem involved in developing and programming computers to emulate the emotions of the human brain is that we do not fully understand how emotions are processed in the human brain. We are unable to pinpoint a specific area of the brain and scientifically argue that it is responsible for specific human emotions, which has raised questions. Are human emotions byproducts of human intelligence? Are they the result of distributed functions within the human brain? Are they learned, or are we born with them? There is no universal agreement regarding the answers to these questions. Nonetheless work on studying human affects and developing affective computing is continuing.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Is Strong Artificial Intelligence a New Life-Form? – Part 4/4 (Conclusion)

In our previous posts, we discussed the growing awareness that SAMs (i.e., strong artificially intelligent machines) may become hostile toward humans and noted that AI remains an unregulated branch of engineering. The computer you buy eighteen months from now will be twice as capable as the one you can buy today.

Where does this leave us regarding the following questions?

  • Is strong AI a new life-form?
  • Should we afford these machines “robot” rights?

In his 1990 book The Age of Intelligent Machines, Kurzweil predicted that in 2099 organic humans will be protected from extermination and respected by strong AI, regardless of their shortcomings and frailties, because they gave rise to the machines. To my mind the possibility of this scenario eventually playing out is questionable. Although I believe a case can be made that strong AI is a new life-form, we need to be extremely careful with regard to granting SAMs rights, especially rights similar to those possessed by humans. Anthony Berglas expresses it best in his 2008 book Artificial Intelligence Will Kill Our Grandchildren, in which he notes:

  • There is no evolutionary motivation for AI to be friendly to humans.
  • AI would have its own evolutionary pressures (i.e., competing with other AIs for computer hardware and energy).
  • Humankind would find it difficult to survive a competition with more intelligent machines.

Based on the above, carefully consider the following question. Should SAMs be granted machine rights? Perhaps in a limited sense, but we must maintain the right to shut down the machine as well as limit its intelligence. If our evolutionary path is to become cyborgs, this is a step we should take only after understanding the full implications. We need to decide when (under which circumstances), how, and how quickly we take this step. We must control the singularity, or it will control us. Time is short because the singularity is approaching with the stealth and agility of a leopard stalking a lamb, and for the singularity, the lamb is humankind.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Is Strong Artificial Intelligence a New Life-Form? – Part 3/4

Can we expect an artificially intelligent machine to behave ethically? There is a field of research that addresses this question, namely machine ethics. This field focuses on designing artificial moral agents (AMAs), robots, or artificially intelligent computers that behave morally. This thrust is not new. More than sixty years ago, Isaac Asimov considered the issue in his collection of nine science-fiction stories, published as I, Robot in 1950. In this book, at the insistence of his editor, John W. Campbell Jr., Asimov proposed his now famous three laws of robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except in cases where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Asimov, however, expressed doubts that the three laws would be sufficient to govern the morality of artificially intelligent systems. In fact he spent much of his time testing the boundaries of the three laws to detect where they might break down or create paradoxical or unanticipated behavior. He concluded that no set of laws could anticipate all circumstances. It turns out Asimov was correct.

To understand just how correct he was, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems in the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine?

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous functions. If this seems farfetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, “Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass,” www.engadget.com). Could we end up with a Terminator scenario (one in which machines attempt to exterminate the human race)? This issue is real, and researchers are addressing it to a limited extent. Some examples include:

  • In 2008 the president of the Association for the Advancement of Artificial Intelligence commissioned a study titled “AAAI Presidential Panel on Long-Term AI Futures.” Its main purpose was to address the aforementioned issue. AAAI’s interim report can be accessed at http://research.microsoft.com/en-us/um/people/horvitz/AAAI_Presidential_Panel_2008-2009.htm.
  • Popular science-fiction author Vernor Vinge suggests in his writings that the scenario of some computers becoming smarter than humans may be somewhat or possibly extremely dangerous for humans (Vernor Vinge, “The Coming Technological Singularity: How to Survive in the Post-Human Era,” Department of Mathematical Sciences, San Diego State University, 1993).
  • In 2009 academics and technical experts held a conference to discuss the hypothetical possibility that intelligent machines could become self-sufficient and able to make their own decisions (John Markoff, “Scientists Worry Machines May Outsmart Man,” The New York Times, July 26, 2009). They noted that (1) some machines have acquired various forms of semiautonomy, including being able to find power sources and independently choose targets to attack with weapons, and (2) some computer viruses can evade elimination and have achieved “cockroach intelligence.”
  • The Singularity Institute for Artificial Intelligence stresses the need to build “friendly AI” (i.e., AI that is intrinsically friendly and humane). In this regard Nick Bostrom, a Swedish philosopher at St. Cross College at the University of Oxford, and Eliezer Yudkowsky, an American blogger, writer, and advocate for friendly artificial intelligence, have argued for decision trees over neural networks and genetic algorithms. They argue that decision trees obey modern social norms of transparency and predictability. Bostrom also published a paper, “Existential Risks,” in the Journal of Evolution and Technology that states artificial intelligence has the capability to bring about human extinction.
  • In 2009 authors Wendell Wallach and Colin Allen addressed the question of machine ethics in Moral Machines: Teaching Robots Right from Wrong (New York: Oxford University Press). In this book they brought greater attention to the controversial issue of which specific learning algorithms to use in machines.

While the above discussion indicates there is an awareness that SAMs may become hostile toward humans, no legislation or regulation has resulted. AI remains an unregulated branch of engineering, and the computer you buy eighteen months from now will be twice as capable as the one you can buy today.

Where does this leave us? We will address the key questions in the next post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Is Strong Artificial Intelligence a New Life-Form? – Part 2/4

In our last post we raised questions regarding the ethics of technology, which is typically divided into two categories.

  1. Roboethics: This category focuses on the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings.
  2. Machine ethics: This category focuses on the moral behavior of artificial moral agents (AMAs).

Let us start by discussing roboethics. In 2002 Italian engineer Gianmarco Veruggio coined the term “roboethics,” which refers to the morality of how humans design, construct, use, and treat robots and other artificially intelligent beings. Specifically it considers how AI may be used to benefit and/or harm humans. This raises the question of robot rights, namely what are the moral obligations of society toward its artificially intelligent machines? In many ways this question parallels the moral obligations of society toward animals. For computers with strong AI, this idea may even parallel the concept of human rights, such as the right to life, liberty, freedom of thought and expression, and even equality before the law.

How seriously should we take roboethics? At this point no intelligent machine completely emulates a human brain. Kurzweil, however, predicts that such a machine will exist by 2029. By some accounts he is a pessimist, as bioethicist Glenn McGee predicts that humanoid robots may appear by 2020. Although predictions regarding AI are often optimistic, Kurzweil, as mentioned, has been on target about 94 percent of the time. Therefore it is reasonable to believe that within a decade or two we will have machines that fully emulate a human brain. Based on this it is necessary to take seriously the concept of robot rights and the implications of granting them. In fact this is beginning to occur, and the issue of robot rights has been under consideration by the Institute for the Future and by the UK Department of Trade and Industry (“Robots Could Demand Legal Rights,” BBC News, December 21, 2006).

At first the entire concept of robot rights may seem absurd. Since we do not have machines that emulate a human brain exactly, this possibility does not appear to be in our national consciousness. Let us fast-forward to 2029, however, and assume Kurzweil’s prediction is correct. Suddenly we have artificial intelligence that is on equal footing with human intelligence and appears to exhibit human emotion. Do we, as a nation, concede that we have created a new life-form? Do we grant robots rights under the law? The implications are serious; if we grant strong-AI robots rights equal to human rights, we may be giving up our right to control the singularity. Indeed robot rights eventually may override human rights.

Consider this scenario. As humans, we have inalienable rights, namely the right to life, liberty, and the pursuit of happiness (not all political systems agree with this). In the United States, the Bill of Rights protects our rights. If we give machines with strong AI the same rights, will we be able to control the intelligence explosion once each generation of strong-AI machines (SAMs) designs another generation with even greater intelligences? Will we have the right to control the machines? Will we be able to decide how the singularity unfolds?

We adopted animal rights to protect animals in circumstances in which they are unable to protect themselves. We saw this as humane and necessary. However, animal rights do not parallel human rights. In addition humankind reserves the right to exterminate any organism (such as the smallpox virus) that threatens humankind’s existence. Intelligent machines pose a threat similar to, and perhaps even more dangerous than, extremely harmful pathogens (viruses and bacteria), which makes the entire issue of robot rights more important. If machines gain rights equal to those of humans, there is little doubt that eventually the intelligence of SAMs will eclipse that of humans. There would be no law that prevents this from happening. At that point will machines demand greater rights than humans? Will machines pose a threat to human rights? This brings us to another critical question: Which moral obligations (machine ethics) do intelligent machines have toward humankind?

We will address the above question in our next post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Is Strong Artificial Intelligence a New Life-Form? – Part 1/4

When an intelligent machine fully emulates the human brain in every regard (i.e., it possesses strong AI), should we consider it a new life-form?

The concept of artificial life (“A-life” for short) dates back to ancient myths and stories. Arguably the best known of these is Mary Shelley’s novel Frankenstein. In 1986, however, American computer scientist Christopher Langton formally established the scientific discipline that studies A-life. The discipline of A-life recognizes three categories of artificial life (i.e., machines that imitate traditional biology by trying to re-create some aspects of biological phenomena).

  • Soft: from software-based simulations
  • Hard: from hardware-based simulations
  • Wet: from biochemistry simulations

For our purposes, I will focus only on the first two, since they apply to artificial intelligence as we commonly discuss it today. The category of “wet,” however, someday also may apply to artificial intelligence—if, for example, science is able to grow biological neural networks in the laboratory. In fact there is an entire scientific field known as synthetic biology, which combines biology and engineering to design and construct biological devices and systems for useful purposes. Synthetic biology currently is not being incorporated into AI simulations and is not likely to play a significant role in AI emulating a human brain. As synthetic biology and AI mature, however, they may eventually form a symbiotic relationship.

No current definition of life considers any A-life simulations to be alive in the traditional sense (i.e., constituting a part of the evolutionary process of any ecosystem). That view of life, however, is beginning to change as artificial intelligence comes closer to emulating a human brain. For example Hungarian-born American mathematician John von Neumann (1903–1957) asserted that “life is a process which can be abstracted away from any particular medium.” In particular this suggests that strong AI (artificial intelligence that completely emulates a human brain) could be considered a life-form, namely A-life.

This is not a new assertion. In the early 1990s, ecologist Thomas S. Ray asserted that his Tierra project (a computer simulation of artificial life) did not simulate life in a computer but synthesized it. This raises the following question: How do we define A-life?

The earliest description of A-life that comes close to a definition emerged from an official conference announcement made in 1987 by Christopher Langton, subsequently published in the 1989 book Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems:

Artificial life is the study of artificial systems that exhibit behavior characteristic of natural living systems. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on earth. This includes biological and chemical experiments, computer simulations, and purely theoretical endeavors. Processes occurring on molecular, social, and evolutionary scales are subject to investigation. The ultimate goal is to extract the logical form of living systems.

Kurzweil predicts that intelligent machines will have equal legal status with humans by 2099. As stated previously, his batting average regarding these types of predictions is about 94 percent. Therefore it is reasonable to believe that intelligent machines that emulate and exceed human intelligence eventually will be considered a life-form. In this and later posts, however, I discuss the potential threats this poses to humankind. For example what will this mean in regard to the relationship between humans and intelligent machines? This question relates to the broader issue of the ethics of technology, which is typically divided into two categories.

  1. Roboethics: This category focuses on the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings.
  2. Machine ethics: This category focuses on the moral behavior of artificial moral agents (AMAs).

We will discuss the above categories in the upcoming posts, as we continue to address the question: “Is Strong AI a New Life-Form?”

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Artificial Intelligence – The Rise of Intelligent Agents – Part 3/3 (Conclusion)

With access to electronic digital programmable computers in the mid-1950s, AI researchers began to focus on symbol manipulation (i.e., the manipulation of mathematical expressions, as is found in algebra) to emulate human intelligence. Three institutions led the charge: Carnegie Mellon University, Stanford University, and the Massachusetts Institute of Technology (MIT). Each university had its own style of research, which the American philosopher John Haugeland (1945–2010) named “good old-fashioned AI” or “GOFAI.”

From the 1960s through the 1970s, symbolic approaches achieved success at simulating high-level thinking in specific application programs. For example, in 1963, Danny Bobrow’s technical report from MIT’s AI group proved that computers could understand natural language well enough to solve algebra word problems correctly. The success of symbolic approaches added credence to the belief that symbolic approaches eventually would succeed in creating a machine with artificial general intelligence, also known as “strong AI,” equivalent to a human mind’s intelligence.

By the 1980s, however, symbolic approaches had run their course and fallen short of the goal of artificial general intelligence. Many AI researchers felt symbolic approaches never would emulate the processes of human cognition, such as perception, learning, and pattern recognition. The next step was a small retreat, and a new era of AI research termed “subsymbolic” emerged. Instead of attempting general AI, researchers turned their attention to solving smaller, specific problems. For example, researchers such as Rodney Brooks, an Australian computer scientist and former MIT Panasonic Professor of Robotics, rejected symbolic AI and focused instead on solving engineering problems related to enabling robots to move.

In the 1990s, concurrent with subsymbolic approaches, AI researchers began to incorporate statistical approaches, again addressing specific problems. Statistical methodologies involve advanced mathematics and are truly scientific in that they are both measurable and verifiable. Statistical approaches proved to be a highly successful AI methodology. The advanced mathematics that underpin statistical AI enabled collaboration with more established fields, including mathematics, economics, and operations research. Computer scientists Stuart Russell and Peter Norvig describe this movement as the victory of the “neats” over the “scruffies,” two major opposing schools of AI research. Neats assert that AI solutions should be elegant, clear, and provable. Scruffies, on the other hand, assert that intelligence is too complicated to adhere to neat methodology.

From the 1990s to the present, despite the arguments between neats, scruffies, and other AI schools, some of AI’s greatest successes have been the result of combining approaches, which has resulted in what is known as the “intelligent agent.” The intelligent agent is a system that interacts with its environment and takes calculated actions (i.e., based on their success probability) to achieve its goal. The intelligent agent can be a simple system, such as a thermostat, or a complex system, similar conceptually to a human being. Intelligent agents also can be combined to form multiagent systems, similar conceptually to a large corporation, with a hierarchical control system to bridge lower-level subsymbolic AI systems to higher-level symbolic AI systems.
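To ground the idea, here is a minimal sketch (the names and numbers are illustrative, not from the book) of an intelligent agent in the sense described above: it perceives candidate actions in its environment and selects the one with the highest probability-weighted payoff.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    success_probability: float   # estimated chance the action achieves the goal
    payoff: float                # value of the goal if achieved

class IntelligentAgent:
    """A minimal goal-seeking agent: it perceives its environment and
    selects the action with the highest expected value."""

    def perceive(self, environment):
        # In a real system this step would involve sensors, vision, language, etc.
        return environment.get("candidate_actions", [])

    def decide(self, actions):
        return max(actions, key=lambda a: a.success_probability * a.payoff)

# A thermostat-like toy example: choose between heating, cooling, or waiting.
environment = {
    "candidate_actions": [
        Action("turn_on_heater", success_probability=0.95, payoff=10.0),
        Action("turn_on_fan",    success_probability=0.40, payoff=10.0),
        Action("do_nothing",     success_probability=1.00, payoff=1.0),
    ]
}

agent = IntelligentAgent()
best = agent.decide(agent.perceive(environment))
print(f"Agent chooses: {best.name}")   # -> turn_on_heater
```

A real agent would replace the hard-coded perception step with sensing and the fixed probabilities with learned estimates, but the perceive-decide-act loop is the same, and it scales from a thermostat up to hierarchical multiagent systems.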

The intelligent-agent approach, including integration of intelligent agents to form a hierarchy of multiagents, places no restriction on the AI methodology employed to achieve the goal. Rather than arguing philosophy, the emphasis is on achieving results. The key to achieving the greatest results has proven to be integrating approaches, much like a symphonic orchestra integrates a variety of instruments to perform a symphony.

In the last seventy years, the approach to achieving AI has been more like that of a machine gun firing broadly in the direction of the target than a well-aimed rifle shot. In fits and starts, numerous schools of AI research have pushed the technology forward. Starting with the loftiest goals of emulating a human mind, retreating to solving specific well-defined problems, and now again aiming toward artificial general intelligence, AI research is a near-perfect example of all human technology development, exemplifying trial-and-error learning interrupted by spurts of genius.

Although AI has come a long way in the last seventy years and has been able to equal and exceed human intelligence in specific areas, such as playing chess, it still falls short of general human intelligence or strong AI. There are two significant problems associated with strong AI. First, we need a machine with processing power equal to that of a human brain. Second, we need programs that allow such a machine to emulate a human brain.