Tag Archives: Louis Del Monte


Is Strong Artificial Intelligence a New Life-Form? – Part 1/4

When an intelligent machine fully emulates the human brain in every regard (i.e., it possesses strong AI), should we consider it a new life-form?

The concept of artificial life (“A-life” for short) dates back to ancient myths and stories. Arguably the best known of these is Mary Shelley’s novel Frankenstein. The scientific discipline that studies A-life, however, was formally established in 1986 by American computer scientist Christopher Langton. The discipline of A-life recognizes three categories of artificial life (i.e., machines that imitate traditional biology by trying to re-create some aspects of biological phenomena).

  • Soft: from software-based simulations
  • Hard: from hardware-based simulations
  • Wet: from biochemistry simulations

For our purposes, I will focus only on the first two, since they apply to artificial intelligence as we commonly discuss it today. The “wet” category, however, may someday also apply to artificial intelligence—if, for example, science is able to grow biological neural networks in the laboratory. In fact, there is an entire scientific field known as synthetic biology, which combines biology and engineering to design and construct biological devices and systems for useful purposes. Synthetic biology currently is not being incorporated into AI simulations and is not likely to play a significant role in AI’s emulation of a human brain. As synthetic biology and AI mature, however, they may eventually form a symbiotic relationship.

No current definition of life considers any A-life simulations to be alive in the traditional sense (i.e., constituting a part of the evolutionary process of any ecosystem). That view of life, however, is beginning to change as artificial intelligence comes closer to emulating a human brain. For example, Hungarian-born American mathematician John von Neumann (1903–1957) asserted that “life is a process which can be abstracted away from any particular medium.” In particular, this suggests that strong AI (artificial intelligence that completely emulates a human brain) could be considered a life-form, namely A-life.

This is not a new assertion. In the early 1990s, ecologist Thomas S. Ray asserted that his Tierra project (a computer simulation of artificial life) did not simulate life in a computer but synthesized it. This raises the following question: How do we define A-life?

The earliest description of A-life that comes close to a definition emerged from an official conference announcement in 1987 by Christopher Langton that was published subsequently in the 1989 book Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems.

Artificial life is the study of artificial systems that exhibit behavior characteristic of natural living systems. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on earth. This includes biological and chemical experiments, computer simulations, and purely theoretical endeavors. Processes occurring on molecular, social, and evolutionary scales are subject to investigation. The ultimate goal is to extract the logical form of living systems.

Ray Kurzweil predicts that intelligent machines will have equal legal status with humans by 2099. As stated previously, his batting average regarding these types of predictions is about 94 percent. Therefore it is reasonable to believe that intelligent machines that emulate and exceed human intelligence eventually will be considered a life-form. In this and later chapters, however, I discuss the potential threats this poses to humankind. For example, what will this mean with regard to the relationship between humans and intelligent machines? This question relates to the broader issue of the ethics of technology, which is typically divided into two categories.

  1. Roboethics: This category focuses on the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings.
  2. Machine ethics: This category focuses on the moral behavior of artificial moral agents (AMAs).

We will discuss the above categories in the upcoming posts, as we continue to address the question: “Is Strong AI a New Life-Form?”

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Artificial Intelligence – The Rise of Intelligent Agents – Part 3/3 (Conclusion)

With access to electronic digital programmable computers in the mid-1950s, AI researchers began to focus on symbol manipulation (i.e., the manipulation of mathematical expressions, as is found in algebra) to emulate human intelligence. Three institutions led the charge: Carnegie Mellon University, Stanford University, and the Massachusetts Institute of Technology (MIT). Each university had its own style of research, which the American philosopher John Haugeland (1945–2010) named “good old-fashioned AI” or “GOFAI.”

From the 1960s through the 1970s, symbolic approaches achieved success at simulating high-level thinking in specific application programs. For example, in 1963, Danny Bobrow’s technical report from MIT’s AI group demonstrated that computers could understand natural language well enough to solve algebra word problems correctly. The success of symbolic approaches added credence to the belief that they eventually would succeed in creating a machine with artificial general intelligence, also known as “strong AI,” equivalent to the intelligence of a human mind.

By the 1980s, however, symbolic approaches had run their course and fallen short of the goal of artificial general intelligence. Many AI researchers felt symbolic approaches never would emulate the processes of human cognition, such as perception, learning, and pattern recognition. The next step was a small retreat, and a new era of AI research termed “subsymbolic” emerged. Instead of attempting general AI, researchers turned their attention to solving smaller, specific problems. For example, researchers such as Australian computer scientist and former MIT Panasonic Professor of Robotics Rodney Brooks rejected symbolic AI, focusing instead on solving engineering problems related to enabling robots to move.

In the 1990s, concurrent with subsymbolic approaches, AI researchers began to incorporate statistical approaches, again addressing specific problems. Statistical methodologies involve advanced mathematics and are truly scientific in that they are both measurable and verifiable. Statistical approaches proved to be a highly successful AI methodology. The advanced mathematics that underpin statistical AI enabled collaboration with more established fields, including mathematics, economics, and operations research. Computer scientists Stuart Russell and Peter Norvig describe this movement as the victory of the “neats” over the “scruffies,” two major opposing schools of AI research. Neats assert that AI solutions should be elegant, clear, and provable. Scruffies, on the other hand, assert that intelligence is too complicated to adhere to neat methodology.

From the 1990s to the present, despite the arguments between neats, scruffies, and other AI schools, some of AI’s greatest successes have been the result of combining approaches, which has resulted in what is known as the “intelligent agent.” The intelligent agent is a system that interacts with its environment and takes calculated actions (i.e., based on their success probability) to achieve its goal. The intelligent agent can be a simple system, such as a thermostat, or a complex system, similar conceptually to a human being. Intelligent agents also can be combined to form multiagent systems, similar conceptually to a large corporation, with a hierarchical control system to bridge lower-level subsymbolic AI systems to higher-level symbolic AI systems.
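The intelligent-agent idea described above can be sketched in a few lines of code: a system observes its environment and selects the action with the highest estimated probability of achieving its goal. This is a minimal illustration only; the action names and probability estimates below are invented, not taken from the book.

```python
def choose_action(success_estimates):
    """Return the action with the highest estimated probability of success."""
    return max(success_estimates, key=success_estimates.get)

# Hypothetical estimates of how likely each action is to achieve the goal.
estimates = {"wait": 0.10, "recharge": 0.30, "deliver_package": 0.85}
print(choose_action(estimates))  # deliver_package
```

A multiagent system, in this sketch, would simply be several such agents whose chosen actions feed into a higher-level agent that selects among them.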

The intelligent-agent approach, including integration of intelligent agents to form a hierarchy of multiagents, places no restriction on the AI methodology employed to achieve the goal. Rather than arguing philosophy, the emphasis is on achieving results. The key to achieving the greatest results has proven to be integrating approaches, much like a symphonic orchestra integrates a variety of instruments to perform a symphony.

In the last seventy years, the approach to achieving AI has been more like that of a machine gun firing broadly in the direction of the target than a well-aimed rifle shot. In fits and starts, numerous schools of AI research have pushed the technology forward. Starting with the loftiest goal of emulating a human mind, retreating to solving specific well-defined problems, and now again aiming toward artificial general intelligence, AI research is a near-perfect example of human technology development in general, exemplifying trial-and-error learning interrupted by spurts of genius.

Although AI has come a long way in the last seventy years and has been able to equal and exceed human intelligence in specific areas, such as playing chess, it still falls short of general human intelligence or strong AI. There are two significant problems associated with strong AI. First, we need a machine with processing power equal to that of a human brain. Second, we need programs that allow such a machine to emulate a human brain.


Louis Del Monte Artificial Intelligence Revolution Radio Interview Recording – WFAS May 13, 2014

This is a recording of my radio interview with Brian Orlando on 1230 WFAS AM (serving the New York City area). The radio show is “Orlando In The Morning,” and the interview aired at 8:03 a.m. EST on May 13, 2014. Click the link below to listen:

Louis Del Monte Radio Interview on WFAS (New York City) 5-13-14

The interview was based on my new book The Artificial Intelligence Revolution: Will Artificial Intelligence Serve Us or Replace Us? (available on Amazon.com in both paperback and Kindle editions).


Artificial Intelligence – The Rise of Intelligent Agents – Part 2/3

In our last post, part 1, we stated that two major questions still haunt AI research.

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

Why do the above questions still haunt AI? Let us consider some examples.

  • Similar types of questions arose in other scientific fields. For example, in the early stages of aeronautics, engineers questioned whether flying machines should incorporate bird biology. Eventually bird biology proved to be a dead end and irrelevant to aeronautics.
  • When it comes to solving problems, humans rely heavily on our experience, and we augment it with reasoning. In business, for example, every problem encountered has numerous solutions, and the solution chosen is biased by the paradigms of those involved. If, for example, the problem is related to increasing the production of a product being manufactured, some managers may add more people to the workforce, some may work at improving efficiency, and some may do both. I have long held the belief that for every problem we face in industry, there are at least ten solutions, and eight of them, although different, yield equivalent results. Looking at the previous example, you may be tempted to believe improving efficiency is a superior (i.e., more elegant) solution than increasing the workforce. Improving efficiency, however, costs time and money; in many cases it is more expedient to increase the workforce. My point is that humans approach a problem by drawing on accumulated life experiences, which may not even relate directly to the specific problem, and augment those experiences with reasoning. Given the way human minds work, it is only natural to ask whether intelligent machines will have to approach problem solving in a similar way, namely by solving numerous unrelated problems as a path to the specific solution required.

Scientific work in AI dates back to the 1940s, long before the AI field had an official name. Early research in the 1940s and 1950s focused on attempting to simulate the human brain by using rudimentary cybernetics (i.e., control systems). Control systems use a two-step approach to controlling their environment.

  1. An action by the system generates some change in its environment.
  2. The system senses that change (i.e., feedback), which triggers the system to change in response.

A simple example of this type of control system is a thermostat. If you set it for a specific temperature, for example 72 degrees Fahrenheit, and the temperature drops below the set point, the thermostat will turn on the furnace. If the temperature rises above the set point, the thermostat will turn off the furnace. During the 1940s and 1950s, however, brain simulation and cybernetics were concepts ahead of their time. While elements of these fields would survive, the approach was largely abandoned as access to computers became available in the mid-1950s.
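The two-step control loop above can be sketched in a few lines of code using the thermostat example. The temperature model (the room warms one degree per step with the furnace on and cools half a degree with it off) is invented purely for illustration.

```python
def thermostat(temperature, set_point=72.0):
    """Sense the environment (step 2) and respond: switch the furnace
    on below the set point, off at or above it."""
    return "furnace_on" if temperature < set_point else "furnace_off"

# Feedback loop: each action changes the environment (step 1), and the
# sensed change drives the next action (step 2).
temperature = 68.0
for _ in range(5):
    action = thermostat(temperature)
    temperature += 1.0 if action == "furnace_on" else -0.5
print(temperature)  # 71.5 -- the loop settles near the set point
```

Note how the system regulates itself with no model of why the room warms or cools; this simplicity is both the appeal and the limitation of the early cybernetic approach.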

In the next and concluding post, we will discuss the impact computers had on the development of artificial intelligence.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Artificial Intelligence – The Rise of Intelligent Agents – Part 1/3

The road to intelligent machines has been difficult, filled with hairpin curves, steep hills, crevices, potholes, intersections, stop signs, and occasionally smooth and straight sections. The initial over-the-top optimism of AI founders John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon set unrealistic expectations. According to their predictions, by now every household should have its own humanoid robot to cook, clean, and do yard work and every other conceivable household task we humans perform.

During the course of my career, I have managed hundreds of scientists and engineers. In my experience they are, for the most part, an overly optimistic group. When they say something is finished, it usually means it is in the final stages of testing or inspection. When they say they will have a problem solved in a week, it usually means a month or more. Whatever schedules they give us in management, we normally have to pad, sometimes doubling them, before we use them to plan or give them to our clients. It is simply part of their nature to be optimistic, believing the tasks associated with their goals will go without a hitch, or that the solution to a problem is just one experiment away. Often if you ask a simple question, you will receive the “theory of everything” as a reply. If the question relates to a problem, the answer will involve the history of humankind, and fingers will be pointed in every direction. I am exaggerating slightly to make a point, but as humorous as this may sound, there is more than a kernel of truth in what I have stated.

This type of optimism accompanied the founding of AI. The founders dreamed with sugarplums in their heads, and we wanted to believe it. We wanted the world to be easier. We wanted intelligent machines to do the heavy lifting and drudgery of everyday chores. We did not have to envision it. The science-fiction writers of television series such as Star Trek envisioned it for us, and we wanted to believe that artificial life-forms, such as Lieutenant Commander Data on Star Trek: The Next Generation, were just a decade away. However, that is not what happened. The field of AI did not change the world overnight or even in a decade. Much like a ninja, it slowly and invisibly crept into our lives over the last half century, disguised behind “smart” applications.

After several starts and stops and two AI winters, AI researchers and engineers started to get it right. Instead of building a do-it-all intelligent machine, they focused on solving specific applications. To address the applications, researchers pursued various approaches for specific intelligent systems. After accomplishing that, they began to integrate the approaches, which brought us closer to artificial “general” intelligence, equal to human intelligence.

Many people not engaged in professional scientific research believe that scientists and engineers follow a strict orderly process, sometimes referred to as the “scientific method,” to develop and apply new technology. Let me dispel that paradigm. It is simply not true. In many cases a scientific field is approached via many different angles, and the approaches depend on the experience and paradigms of those involved. This is especially true in regard to AI research, as will soon become apparent.

The most important concept to understand is that no unifying theory guides AI research. Researchers disagree among themselves, and we have more questions than answers. Here are two major questions that still haunt AI research.

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

We will address these questions in the next post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte