Category Archives: Artificial Intelligence


Artificial Intelligence Gives Rise to Intelligent Agents – Part 3/3 (Conclusion)

To conclude this three-part article, let’s discuss the approaches that researchers pursued using electronic digital programmable computers.

From the 1960s through the 1970s, symbolic approaches achieved success at simulating high-level thinking in specific application programs. For example, in 1963 Danny Bobrow’s technical report from MIT’s AI group demonstrated that computers could understand natural language well enough to solve algebra word problems correctly. These successes lent credence to the belief that symbolic approaches eventually would produce a machine with artificial general intelligence, also known as “strong AI,” equivalent to a human mind’s intelligence.

By the 1980s, however, symbolic approaches had run their course and fallen short of the goal of artificial general intelligence. Many AI researchers felt symbolic approaches never would emulate the processes of human cognition, such as perception, learning, and pattern recognition. The next step was a small retreat, and a new era of AI research termed “subsymbolic” emerged. Instead of attempting general AI, researchers turned their attention to solving smaller, specific problems. For example, researchers such as Australian computer scientist and former MIT Panasonic Professor of Robotics Rodney Brooks rejected symbolic AI; instead, he focused on solving engineering problems related to enabling robots to move.

In the 1990s, concurrent with subsymbolic approaches, AI researchers began to incorporate statistical approaches, again addressing specific problems. Statistical methodologies involve advanced mathematics and are truly scientific in that they are both measurable and verifiable. Statistical approaches proved to be a highly successful AI methodology. The advanced mathematics that underpin statistical AI enabled collaboration with more established fields, including mathematics, economics, and operations research. Computer scientists Stuart Russell and Peter Norvig describe this movement as the victory of the “neats” over the “scruffies,” two major opposing schools of AI research. Neats assert that AI solutions should be elegant, clear, and provable. Scruffies, on the other hand, assert that intelligence is too complicated to adhere to neat methodology.

From the 1990s to the present, despite the arguments between neats, scruffies, and other AI schools, some of AI’s greatest successes have been the result of combining approaches, which has resulted in what is known as the “intelligent agent.” The intelligent agent is a system that interacts with its environment and takes calculated actions (i.e., based on their success probability) to achieve its goal. The intelligent agent can be a simple system, such as a thermostat, or a complex system, similar conceptually to a human being. Intelligent agents also can be combined to form multiagent systems, similar conceptually to a large corporation, with a hierarchical control system to bridge lower-level subsymbolic AI systems to higher-level symbolic AI systems.
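As a rough illustration of this agent loop, here is a minimal sketch (mine, not from the book): the agent observes its environment, scores each available action by an estimated probability of achieving the goal, and acts on the best-scoring one. The toy environment, action set, and scoring model are invented purely for illustration.

```python
# A minimal sketch of an intelligent-agent loop: perceive, score candidate
# actions by estimated success probability, act on the best one.
# The toy environment and scoring model are illustrative only.

def perceive(environment):
    """Return the agent's current observation of its environment."""
    return environment["state"]

def estimate_success_probability(state, action, goal):
    """Toy model: actions that land closer to the goal score higher."""
    distance_after = abs(goal - (state + action))
    return 1.0 / (1.0 + distance_after)

def agent_step(environment, goal, actions=(-1, 0, +1)):
    state = perceive(environment)
    best = max(actions, key=lambda a: estimate_success_probability(state, a, goal))
    environment["state"] = state + best  # the action changes the environment
    return best

env = {"state": 5}
for _ in range(5):
    agent_step(env, goal=8)
print(env["state"])  # 8 -- the agent has reached and now holds its goal state
```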

The intelligent-agent approach, including integration of intelligent agents to form a hierarchy of multiagents, places no restriction on the AI methodology employed to achieve the goal. Rather than arguing philosophy, the emphasis is on achieving results. The key to achieving the greatest results has proven to be integrating approaches, much like a symphonic orchestra integrates a variety of instruments to perform a symphony.

In the last seventy years, the approach to achieving AI has been more like that of a machine gun firing broadly in the direction of the target than a well-aimed rifle shot. In fits and starts, numerous schools of AI research have pushed the technology forward. Starting with the loftiest goal of emulating a human mind, retreating to solving specific, well-defined problems, and now again aiming toward artificial general intelligence, AI research is a near-perfect example of all human technology development, exemplifying trial-and-error learning punctuated by spurts of genius.

Although AI has come a long way in the last seventy years and has been able to equal and exceed human intelligence in specific areas, such as playing chess, it still falls short of general human intelligence or strong AI. There are two significant problems associated with strong AI. First, we need a machine with processing power equal to that of a human brain. Second, we need programs that allow such a machine to emulate a human brain.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Artificial Intelligence Gives Rise to Intelligent Agents – Part 2/3

In the last post (Part 1/3), we made the point that no unifying theory guides AI research. Researchers disagree among themselves, and we have more questions than answers. Here are two major questions that still haunt AI research.

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

Why do the above questions still haunt AI? Let us take some examples.

  • Similar types of questions arose in other scientific fields. For example, in the early stages of aeronautics, engineers questioned whether flying machines should incorporate bird biology. Eventually bird biology proved to be a dead end and irrelevant to aeronautics.
  • When it comes to solving problems, humans rely heavily on experience and augment it with reasoning. In business, for example, for every problem encountered, there are numerous solutions. The solution chosen is biased by the paradigms of those involved. If, for example, the problem is related to increasing the production of a manufactured product, some managers may add more people to the work force, some may work at improving efficiency, and some may do both. I have long held the belief that for every problem we face in industry, there are at least ten solutions, and eight of them, although different, yield equivalent results. Looking at the previous example, you may be tempted to believe improving efficiency is a superior (i.e., more elegant) solution compared to increasing the work force. Improving efficiency, however, costs time and money. In many cases it is more expedient to increase the work force. My point is that humans approach solving a problem by using their accumulated life experiences, which may not even relate directly to the specific problem, and augment those experiences with reasoning. Given the way human minds work, it is only natural to ask whether intelligent machines will have to approach problem solving in a similar way, namely by solving numerous unrelated problems as a path to the specific solution required.

Scientific work in AI dates back to the 1940s, long before the AI field had an official name. Early research in the 1940s and 1950s focused on attempting to simulate the human brain by using rudimentary cybernetics (i.e., control systems). Control systems use a two-step approach to controlling their environment.

    1. An action by the system generates some change in its environment.
    2. The system senses that change (i.e., feedback), which triggers the system to change in response.

A simple example of this type of control system is a thermostat. If you set it for a specific temperature, for example 72 degrees Fahrenheit, and the temperature drops below the set point, the thermostat will turn on the furnace. If the temperature increases above the set point, the thermostat will turn off the furnace. However, during the 1940s and 1950s, the entire area of brain simulation and cybernetics was a concept ahead of its time. While elements of these fields would survive, the approach of brain simulation and cybernetics was largely abandoned as access to computers became available in the mid-1950s.
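Before moving on, here is a minimal sketch (mine, not from the book) of the two-step sense-and-respond loop just described, using the thermostat example; the crude room model and numbers are illustrative only.

```python
# The two-step control loop: the system acts on its environment (furnace on
# or off), then senses the resulting temperature change (feedback) and responds.
# The crude room model and numbers are illustrative only.

SET_POINT_F = 72.0

def thermostat(room_temp_f):
    """Sense the temperature and decide: below the set point, furnace on."""
    return room_temp_f < SET_POINT_F

def simulate(room_temp_f=68.0, steps=10):
    for _ in range(steps):
        furnace_on = thermostat(room_temp_f)        # step 2: sense and respond
        room_temp_f += 1.0 if furnace_on else -0.5  # step 1: action changes the room
        print(f"furnace {'ON ' if furnace_on else 'OFF'}  temp={room_temp_f:.1f} F")

simulate()
```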

With access to electronic digital programmable computers in the mid-1950s, AI researchers began to focus on symbol manipulation (i.e., the manipulation of mathematical expressions, as is found in algebra) to emulate human intelligence. Three institutions led the charge: Carnegie Mellon University, Stanford University, and the Massachusetts Institute of Technology (MIT). Each university had its own style of research, which the American philosopher John Haugeland (1945–2010) named “good old-fashioned AI” or “GOFAI.”

In the conclusion to this article (Part 3/3), we will discuss the approaches that researchers pursued using electronic digital programmable computers.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Artificial Intelligence Gives Rise to Intelligent Agents – Part 1/3

The road to intelligent machines has been difficult, filled with hairpin curves, steep hills, crevices, potholes, intersections, stop signs, and occasionally smooth and straight sections. The initial over-the-top optimism of AI founders John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon set unrealistic expectations. According to their predictions, by now every household should have its own humanoid robot to cook, clean, do yard work, and handle every other conceivable household task we humans perform.

During the course of my career, I have managed hundreds of scientists and engineers. In my experience they are, for the most part, overly optimistic as a group. When they say something is finished, it usually means it’s in the final stages of testing or inspection. When they say they will have a problem solved in a week, it usually means a month or more. Whatever schedules they give us—the management—we normally have to pad, sometimes doubling them, before we use them to plan or give them to our clients. It is just part of their nature to be optimistic, believing the tasks associated with the goals will go off without a hitch, or that the solution to a problem is just one experiment away. Often if you ask a simple question, you’ll receive the “theory of everything” as a reply. If the question relates to a problem, the answer will involve the history of humankind, and fingers will be pointed in every direction. I am exaggerating slightly to make a point, but as humorous as this may sound, there is more than a kernel of truth in what I’ve stated.

This type of optimism accompanied the founding of AI. The founders dreamed with sugarplums in their heads, and we wanted to believe it. We wanted the world to be easier. We wanted intelligent machines to do the heavy lifting and drudgery of everyday chores. We did not have to envision it. The science-fiction writers of television series such as Star Trek envisioned it for us, and we wanted to believe that artificial life-forms, such as Lieutenant Commander Data on Star Trek: The Next Generation, were just a decade away. However, that is not what happened. The field of AI did not change the world overnight or even in a decade. Much like a ninja, it slowly and invisibly crept into our lives over the last half century, disguised behind “smart” applications.

After several starts and stops and two AI winters, AI researchers and engineers started to get it right. Instead of building a do-it-all intelligent machine, they focused on solving specific applications. To address the applications, researchers pursued various approaches for specific intelligent systems. After accomplishing that, they began to integrate the approaches, which brought us closer to artificial “general” intelligence, equal to human intelligence.

Many people not engaged in professional scientific research believe that scientists and engineers follow a strict orderly process, sometimes referred to as the “scientific method,” to develop and apply new technology. Let me dispel that paradigm. It is simply not true. In many cases a scientific field is approached via many different angles, and the approaches depend on the experience and paradigms of those involved. This is especially true in regard to AI research, as will soon become apparent.

The most important concept to understand is that no unifying theory guides AI research. Researchers disagree among themselves, and we have more questions than answers. Here are two major questions that still haunt AI research.

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

Why do the above questions still haunt AI? We will discuss this in the next post (Part 2/3).

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Moore’s Law As It Applies to Artificial Intelligence – Part 2/2

As previously mentioned in part 1 of this blog post, Moore’s law is not a physical law of science. Rather, it may be considered a trend or a general rule. This raises the following question.

How Long Will Moore’s Law Hold?

There are numerous estimates regarding how long Moore’s law will hold. Since it is not a physical law, its applicability is routinely questioned. For approximately the last half century, estimates made at various points in time have each predicted that Moore’s law would hold for another decade.

In 2005 Gordon Moore stated in an interview that Moore’s law “can’t continue forever. The nature of exponentials is that you push them out and eventually disaster happens.” Moore noted that transistors eventually would reach the limits of miniaturization at atomic levels. “In terms of size [of transistors] you can see that we’re approaching the size of atoms, which is a fundamental barrier, but it’ll be two or three generations before we get that far—but that’s as far out as we’ve ever been able to see. We have another 10 to 20 years before we reach a fundamental limit.”

However, new technologies are emerging that use individually positioned molecules in place of transistors altogether. The positions of the molecules would serve as the new computer “switches.” This technology is predicted to emerge by 2020 (Baptiste Waldner, Nanocomputers and Swarm Intelligence, 2008).

Some see Moore’s law extending far into the future. Lawrence Krauss and Glenn D. Starkman predicted an ultimate limit of around six hundred years (Lawrence M. Krauss, Glenn D. Starkman, “Universal Limits of Computation,” arXiv:astro-ph/0404510, May 10, 2004).

I worked in the semiconductor industry for more than thirty years, during which time Moore’s law always appeared as if it would reach an impenetrable barrier. This, however, did not happen. New technologies constantly seemed to provide a stay of execution. We know that at some point the trend may change, but no one really has made a definitive case as to when this trend will end. The difficulty in predicting the end has to do with how one interprets Moore’s law. If one takes Moore’s original interpretation, which defined the trend in terms of the number of transistors that could be put on an integrated circuit, the end point may be somewhere around 2018 to 2020. Defining it in terms of “data density of an integrated circuit,” however, as we did regarding AI, removes the constraint of transistors and opens up a new array of technologies, including molecular positioning.

Will Moore’s law hold for another decade or another six hundred years? No one really knows the answer. Most people believe that eventually the trend will end, but when and why remain unanswered questions. If it does end, and Moore’s law no longer applies, another question emerges.

What Will Replace Moore’s Law?

Ray Kurzweil views Moore’s law in much the same way we defined it, not tied to specific technologies but rather as a “paradigm to forecast accelerating price-performance ratios.” From Kurzweil’s viewpoint:

 Moore’s law of Integrated Circuits was not the first, but the fifth paradigm to forecast accelerating price-performance ratios. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to [Newman’s] relay-based “[Heath] Robinson” machine that cracked the Lorenz cipher, to the CBS vacuum tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated circuit-based personal computer. (Raymond Kurzweil, “The Law of Accelerating Returns,” www.KurzweilAI.net)

In the wider sense, Moore’s law is not about transistors or specific technologies. In my opinion it is a paradigm related to humankind’s creativity. The new computers following Moore’s law may be based on some new type of technology (e.g., optical computers, quantum computers, DNA computing) that bears little to no resemblance to current integrated-circuit technology. It appears that what Moore really uncovered was humankind’s ability to cost-effectively accelerate technology performance.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Del Monte Radio Interview: The Artificial Intelligence Revolution

This is a recording of my discussion with John Counsell on “Late Night Counsell” AM580/CFRA Ottawa. The discussion was based on my new book, The Artificial Intelligence Revolution (2014). You can listen to the interview and call-ins at this URL: http://tunein.com/radio/Late-Night-Counsell-p50752, which archives recent “Late Night Counsell” shows; click on the June 26, 2014 show to hear my interview.


Moore’s Law As It Applies to Artificial Intelligence – Part 1/2

Intel cofounder Gordon E. Moore was the first to note a peculiar trend, namely that the number of components in integrated circuits had doubled every year from the 1958 invention of the integrated circuit until 1965. In Moore’s own words:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.…Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer. (Gordon E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics magazine, 1965)
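A quick back-of-the-envelope check (mine, not Moore’s) shows how the 65,000 figure follows from the quote itself: a factor of two per year from 1965 to 1975 is ten doublings, roughly a thousandfold increase, which implies a starting point on the order of sixty components in 1965.

```python
# Back-of-the-envelope check of the arithmetic in Moore's quote:
# a factor of two per year from 1965 to 1975 is ten doublings.
doublings = 1975 - 1965
growth = 2 ** doublings              # 1,024-fold increase over the decade
implied_start = 65_000 / growth      # starting component count implied by the quote
print(growth, round(implied_start))  # 1024, ~63 components in 1965
```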

In 1970 Caltech professor, VLSI pioneer, and entrepreneur Carver Mead coined the term “Moore’s law,” referring to a statement made by Gordon E. Moore, and the phrase caught on within the scientific community.

In 1975 Moore revised his prediction regarding the number of components in integrated circuits doubling every year to doubling every two years. Intel executive David House noted that Moore’s latest prediction would cause computer performance to double every eighteen months, due to the combination of not only more transistors but also the transistors themselves becoming faster.

From the above discussion, it is obvious that Moore’s law has been stated a number of ways and has changed over time. In the strict sense, it is not a physical law but more of an observation and guideline for planning. In fact many semiconductor companies use Moore’s law to plan their long-term product offerings. There is a deeply held belief in the semiconductor industry that adhering to Moore’s law is required to remain competitive. In this sense it has become a self-fulfilling prophecy. For our purposes in understanding AI, let us address the following question.

What Is Moore’s Law?

As it applies to AI, we will define Moore’s law as follows: The data density of an integrated circuit and the associated computer performance will cost-effectively double every eighteen months. If we consider eighteen months to represent a technology generation, this means every eighteen months we receive double the data density and associated computer performance at approximately the same cost as the previous generation. Most experts, including Moore, expect Moore’s law to hold for at least another two decades, but this is debatable, as I discuss in part 2 of this post. Below is a graphical depiction (courtesy of Wikimedia Commons) of Moore’s law, illustrating transistor counts for integrated circuits plotted against their dates of introduction (1971–2011).
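Separately from that chart, here is a quick back-of-the-envelope sketch (mine, not from the book) of what the eighteen-month doubling in the definition above implies; the time spans chosen are arbitrary.

```python
# What "data density doubles every eighteen months" implies over time:
# after t years the growth factor is 2 ** (t / 1.5).

def moores_law_factor(years, generation_years=1.5):
    """Growth factor after `years`, with one doubling per generation."""
    return 2 ** (years / generation_years)

for years in (3, 10, 15, 30):
    print(f"{years:>2} years -> ~{moores_law_factor(years):,.0f}x the data density")
# 3 years -> ~4x, 10 years -> ~102x, 15 years -> ~1,024x, 30 years -> ~1,048,576x
```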

As previously mentioned, Moore’s law is not a physical law of science. Rather, it may be considered a trend or a general rule. This raises the following question: how long will Moore’s law hold? We will address this and other questions in part 2 of this post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Will Strong Artificially Intelligent Machines Become Self-Conscious? Part 2/2 (Conclusion)

Part 1 of this post ended with an important question: “How can we determine whether an intelligent machine has become conscious (self-aware)?” We will address this question in this post, along with some ethical dilemmas.

We do not have a way yet to determine whether even another human is self-aware. I only know that I am self-aware. I assume that since we share the same physiology, including similar human brains, you are probably self-aware as well. However, even if we discuss various topics, and I conclude that your intelligence is equal to mine, I still cannot prove you are self-aware. Only you know whether you are self-aware.

The problem becomes even more difficult when dealing with an intelligent machine. The gold standard for an intelligent machine’s being equal to the human mind is the Turing test, which I discuss in chapter 5 of my book, The Artificial Intelligence Revolution. (If you are not familiar with the Turing test, a simple Google search will provide numerous sources to learn about it.) As of today no intelligent machine can pass the Turing test unless its interactions are restricted to a specific topic, such as chess. However, even if an intelligent machine does pass the Turing test and exhibits strong AI, how can we be sure it is self-aware? Intelligence may be a necessary condition for self-awareness, but it may not be sufficient. The machine may be able to emulate consciousness to the point that we conclude it must be self-aware, but that does not equal proof.

Even though other tests, such as the ConsScale test, have been proposed to determine machine consciousness, we still come up short. The ConsScale test evaluates the presence of features inspired by biological systems, such as social behavior. It also measures the cognitive development of an intelligent machine. This is based on the assumption that intelligence and consciousness are strongly related. The community of AI researchers, however, does not universally accept the ConsScale test as proof of consciousness. In the final analysis, I believe most AI researchers agree on only two points:

  1. There is no widely accepted empirical definition of consciousness (self-awareness).
  2. A test to determine the presence of consciousness (self-awareness) may be impossible, even if the subject being tested is a human being.

The above two points, however, do not rule out the possibility of intelligent machines becoming conscious and self-aware. They merely make the point that it will be extremely difficult to prove consciousness and self-awareness.

Ray Kurzweil predicts that by 2029 reverse engineering of the human brain will be completed, and nonbiological intelligence will combine the subtlety and pattern-recognition strength of human intelligence with the speed, memory, and knowledge sharing of machine intelligence (The Age of Spiritual Machines, 1999). I interpret this to mean that all aspects of the human brain will be replicated in an intelligent machine, including artificial consciousness. At this point intelligent machines either will become self-aware or emulate self-awareness to the point that they are indistinguishable from their human counterparts.

The prospect of self-aware intelligent machines equivalent to human minds presents humankind with two serious ethical dilemmas.

  1. Should self-aware machines be considered a new life-form?
  2. Should self-aware machines have “machine rights” similar to human rights?

Since a self-aware intelligent machine that is equivalent to a human mind is still a theoretical subject, the ethics addressing the above two questions have not been discussed or developed to any great extent. Kurzweil, however, predicts that self-aware intelligent machines on par with or exceeding the human mind eventually will obtain legal rights by the end of the twenty-first century. Perhaps he is correct, but I think we need to be extremely careful regarding what legal rights self-aware intelligent machines are granted. If they are given rights on par with humans, we may have a situation where the machines become the dominant species on this planet and pose a potential threat to humankind. More about this in upcoming posts.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


“The Artificial Intelligence Revolution” Interview Featured On Blog Talk Radio

My interview on Johnny Tan’s program (From My Mama’s Kitchen®) is featured as one of “Today’s Best” on Blog Talk Radio’s home page. This is a great honor. Below is the player from our interview. It displays a slide show of my picture as well as the book cover while it plays the interview.


Louis Del Monte FMMK Talk Radio Interview on The Artificial Intelligence Revolution

You can listen and/or download my interview with Johnny Tan of FMMK talk radio discussing my new book, The Artificial Intelligence Revolution. We discuss and explore the potential benefits and threats strong artificially intelligent machines pose to humankind.

Click here to listen or download the interview “The Artificial Intelligence Revolution”


Will Strong Artificially Intelligent Machines Become Self-Conscious? Part 1/2

A generally accepted definition is that a person is conscious if that person is aware of his or her surroundings. If you are self-aware, it means you are self-conscious. In other words you are aware of yourself as an individual or of your own being, actions, and thoughts. To understand this concept, let us start by exploring how the human brain processes consciousness. To the best of our current understanding, no one part of the brain is responsible for consciousness. In fact neuroscience (the scientific study of the nervous system) hypothesizes that consciousness is the result of the interoperation of various parts of the brain, called “neural correlates of consciousness” (NCC). The fact that this remains a hypothesis underscores that, at this time, we do not completely understand how the human brain processes consciousness or becomes self-aware.

Is it possible for a machine to be self-conscious? Obviously, since we do not completely understand how the human brain processes consciousness to become self-aware, it is difficult to definitively argue that a machine can become self-conscious or obtain what is termed “artificial consciousness” (AC). This is why AI experts differ on this subject. Some AI experts (proponents) argue it is possible to build a machine with AC that emulates the interoperation (i.e., it works like the human brain) of the NCC. Opponents argue that it is not possible because we do not fully understand the NCC. To my mind they are both correct. It is not possible today to build a machine with a level of AC that emulates the self-consciousness of the human brain. However, I believe that in the future we will understand the human brain’s NCC interoperation and build a machine that emulates it. Nevertheless this topic is hotly debated.

Opponents argue that many physical differences exist between natural, organic systems and artificially constructed (e.g., computer) systems that preclude AC. The most vocal critic who holds this view is American philosopher Ned Block (1942– ), who argues that a system with the same functional states as a human is not necessarily conscious.

The most vocal proponent who argues that AC is plausible is Australian philosopher David Chalmers (1966– ). In his unpublished 1993 manuscript “A Computational Foundation for the Study of Cognition,” Chalmers argues that it is possible for computers to perform the right kinds of computations that would result in a conscious mind. He reasons that computers perform computations that can capture other systems’ abstract causal organization. Mental properties are abstract causal organization. Therefore computers that run the right kind of computations will become conscious.

This is a good place for us to ask an important question: “How can we determine whether an intelligent machine has become conscious (self-aware)?” We will address this question in the next post (Part 2).

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte