

Artificial Intelligence Gives Rise to Intelligent Agents – Part 2/3

In the last post (Part 1/3), we made the point that no unifying theory guides AI research. Researchers disagree among themselves, and we have more questions than answers. Here are two major questions that still haunt AI research.

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

Why do the above questions still haunt AI? Let us take some examples.

  • Similar questions arose in other scientific fields. In the early stages of aeronautics, for example, engineers debated whether flying machines should incorporate bird biology. Eventually, imitating bird biology proved to be a dead end, irrelevant to aeronautics.
  • When it comes to solving problems, we humans rely heavily on our experience and augment it with reasoning. In business, for example, every problem has numerous solutions, and the solution chosen is biased by the paradigms of those involved. If the problem is increasing the production of a manufactured product, some managers may add people to the workforce, some may work at improving efficiency, and some may do both. I have long held the belief that for every problem we face in industry, there are at least ten solutions, and eight of them, although different, yield equivalent results. Looking at the previous example, you may be tempted to believe that improving efficiency is the superior (i.e., more elegant) solution as opposed to increasing the workforce. Improving efficiency, however, costs time and money, and in many cases it is more expedient to increase the workforce. My point is that humans approach a problem by drawing on accumulated life experiences, which may not relate directly to the specific problem, and augmenting those experiences with reasoning. Given the way human minds work, it is only natural to ask whether intelligent machines will have to approach problem solving in a similar way, namely by solving numerous unrelated problems as a path to the specific solution required.

Scientific work in AI dates back to the 1940s, long before the AI field had an official name. Early research in the 1940s and 1950s focused on attempting to simulate the human brain by using rudimentary cybernetics (i.e., control systems). Control systems use a two-step approach to controlling their environment.

    1. An action by the system generates some change in its environment.
    2. The system senses that change (i.e., feedback), which triggers the system to change in response.

A simple example of this type of control system is a thermostat. If you set it for a specific temperature, for example 72 degrees Fahrenheit, and the temperature drops below the set point, the thermostat will turn on the furnace. If the temperature increases above the set point, the thermostat will turn off the furnace. (A minimal sketch of this loop appears below.) However, during the 1940s and 1950s, brain simulation and cybernetics were concepts ahead of their time. While elements of these fields would survive, the approach was largely abandoned as computers became available in the mid-1950s.
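To make the two-step loop concrete, here is a minimal Python sketch of a thermostat’s sense-and-respond cycle. The set point, temperature readings, and the crude heating model are illustrative values of mine, not anything from the source:

```python
# Minimal sketch of the thermostat feedback loop described above.
# Step 1: the furnace's action changes the room temperature.
# Step 2: the thermostat senses that change and responds.

def thermostat_step(current_temp_f: float, set_point_f: float) -> bool:
    """One control cycle: sense the temperature, decide the furnace state."""
    return current_temp_f < set_point_f  # on below set point, off otherwise

# Simulate a few cycles: the furnace warms the room, then shuts off.
temp, furnace = 68.0, False
for cycle in range(6):
    furnace = thermostat_step(temp, set_point_f=72.0)
    temp += 1.5 if furnace else -0.5  # crude model of heating vs. heat loss
    print(f"cycle {cycle}: temp={temp:.1f}F furnace={'on' if furnace else 'off'}")
```

Even this toy loop shows the characteristic behavior of feedback control: the system oscillates around the set point rather than settling exactly on it.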

With access to electronic digital programmable computers in the mid-1950s, AI researchers began to focus on symbol manipulation (i.e., the manipulation of mathematical expressions, as found in algebra) to emulate human intelligence. Three institutions led the charge: Carnegie Mellon University, Stanford University, and the Massachusetts Institute of Technology (MIT). Each had its own style of research, and the American philosopher John Haugeland (1945–2010) later named this whole symbolic approach “good old-fashioned AI,” or “GOFAI.”
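To give a flavor of what symbol manipulation means in practice, here is a toy Python sketch (Python rather than the Lisp of that era, and purely illustrative, not any historical system) that rewrites algebraic expressions by rule instead of computing with numbers:

```python
# Toy symbol manipulation: expressions are nested tuples such as
# ('+', ('*', 'x', 'x'), 3), and rules rewrite them symbolically.

def derivative(expr, var):
    """Differentiate a tuple-encoded expression with respect to var."""
    if expr == var:                  # d/dx x = 1
        return 1
    if not isinstance(expr, tuple):  # constants and other variables: 0
        return 0
    op, a, b = expr
    if op == '+':                    # sum rule
        return ('+', derivative(a, var), derivative(b, var))
    if op == '*':                    # product rule
        return ('+', ('*', derivative(a, var), b),
                     ('*', a, derivative(b, var)))
    raise ValueError(f"unknown operator: {op}")

# d/dx (x*x + 3) -> ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), 0)
print(derivative(('+', ('*', 'x', 'x'), 3), 'x'))
```

The program never evaluates anything numerically; it manipulates the symbols themselves, which is the essence of the approach the GOFAI researchers pursued.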

In the conclusion to this article (Part 3/3), we will discuss the approaches that researchers pursued using electronic digital programmable computers.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Artificial Intelligence Gives Rise to Intelligent Agents – Part 1/3

The road to intelligent machines has been difficult, filled with hairpin curves, steep hills, crevices, potholes, intersections, stop signs, and occasionally smooth and straight sections. The initial over-the-top optimism of AI founders John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon set unrealistic expectations. According to their predictions, by now every household should have its own humanoid robot to cook, clean, do yard work, and handle every other conceivable household task we humans perform.

During the course of my career, I have managed hundreds of scientists and engineers. In my experience they are, for the most part, overly optimistic as a group. When they say something is finished, it usually means it is in the final stages of testing or inspection. When they say they will have a problem solved in a week, it usually means a month or more. Whatever schedules they give us in management, we normally have to pad, sometimes doubling them, before we use them to plan or pass them along to our clients. It is just part of their nature to be optimistic, believing the tasks associated with the goals will go without a hitch, or that the solution to a problem is just one experiment away. Often if you ask a simple question, you’ll receive the “theory of everything” as a reply. If the question relates to a problem, the answer will involve the history of humankind, and fingers will be pointed in every direction. I am exaggerating slightly to make a point, but as humorous as this may sound, there is more than a kernel of truth in what I’ve stated.

This type of optimism accompanied the founding of AI. The founders dreamed with sugarplums in their heads, and we wanted to believe it. We wanted the world to be easier. We wanted intelligent machines to do the heavy lifting and drudgery of everyday chores. We did not have to envision it. The science-fiction writers of television series such as Star Trek envisioned it for us, and we wanted to believe that artificial life-forms, such as Lieutenant Commander Data on Star Trek: The Next Generation, were just a decade away. However, that is not what happened. The field of AI did not change the world overnight or even in a decade. Much like a ninja, it slowly and invisibly crept into our lives over the last half century, disguised behind “smart” applications.

After several starts and stops and two AI winters, AI researchers and engineers started to get it right. Instead of building a do-it-all intelligent machine, they focused on solving specific applications. To address the applications, researchers pursued various approaches for specific intelligent systems. After accomplishing that, they began to integrate the approaches, which brought us closer to artificial “general” intelligence, equal to human intelligence.

Many people not engaged in professional scientific research believe that scientists and engineers follow a strict, orderly process, sometimes referred to as the “scientific method,” to develop and apply new technology. Let me dispel that paradigm. It is simply not true. In many cases a scientific field is approached from many different angles, and the approaches depend on the experience and paradigms of those involved. This is especially true of AI research, as will soon become apparent.

The most important concept to understand is that no unifying theory guides AI research. Researchers disagree among themselves, and we have more questions than answers. Here are two major questions that still haunt AI research.

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

Why do the above questions still haunt AI? We will discuss this in the next post (Part 2/3).

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Del Monte Radio Interview: The Artificial Intelligence Revolution

This is a recording of my discussion with John Counsell on “Late Night Counsell” AM580/CFRA Ottawa. The discussion was based on my new book, The Artificial Intelligence Revolution (2014). You can listen to the interview and call-ins at this URL: https://tunein.com/radio/Late-Night-Counsell-p50752. The page archives recent “Late Night Counsell” shows; to hear my interview, click on the June 26, 2014 show.


Moore’s Law As It Applies to Artificial Intelligence – Part 1/2

Intel cofounder Gordon E. Moore was the first to note a peculiar trend, namely that the number of components in integrated circuits had doubled every year from the 1958 invention of the integrated circuit until 1965. In Moore’s own words:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. … Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer. (Gordon E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics magazine, 1965)

In 1970 Caltech professor, VLSI pioneer, and entrepreneur Carver Mead coined the term “Moore’s law” for this observation, and the phrase caught on within the scientific community.

In 1975 Moore revised his prediction, from the number of components in integrated circuits doubling every year to doubling every two years. Intel executive David House noted that Moore’s revised prediction implied computer performance would double every eighteen months, owing to the combination of more transistors and the transistors themselves becoming faster. (A rough check of this arithmetic appears below.)
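House’s eighteen-month figure follows from a simple property of exponentials: if performance scales with both transistor count and transistor speed, the two growth rates add. Here is a back-of-the-envelope check in Python, where the speed-doubling time is my illustrative assumption chosen to make the numbers work, not a figure from the source:

```python
# If performance ~ transistor count x transistor speed, the combined
# doubling time T satisfies 1/T = 1/T_count + 1/T_speed.
count_doubling_months = 24.0  # Moore's revised 1975 prediction
speed_doubling_months = 72.0  # assumed rate of transistor speed gains

combined = 1.0 / (1.0 / count_doubling_months + 1.0 / speed_doubling_months)
print(f"combined performance doubling time: {combined:.0f} months")  # -> 18
```

In other words, transistor counts doubling every twenty-four months plus speeds doubling roughly every six years together yield performance doubling every eighteen months.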

From the above discussion, it is obvious that Moore’s law has been stated a number of ways and has changed over time. In the strict sense, it is not a physical law but more of an observation and guideline for planning. In fact many semiconductor companies use Moore’s law to plan their long-term product offerings. There is a deeply held belief in the semiconductor industry that adhering to Moore’s law is required to remain competitive. In this sense it has become a self-fulfilling prophecy. For our purposes in understanding AI, let us address the following question.

What Is Moore’s Law?

As it applies to AI, we will define Moore’s law as follows: The data density of an integrated circuit and the associated computer performance will cost-effectively double every eighteen months. If we consider eighteen months to represent a technology generation, this means every eighteen months we receive double the data density and associated computer performance at approximately the same cost as the previous generation. Most experts, including Moore, expect Moore’s law to hold for at least another two decades, but this is debatable, as I discuss later in the chapter. Below is a graphical depiction (courtesy of Wikimedia Commons) of Moore’s law, illustrating transistor counts for integrated circuits plotted against their dates of introduction (1971–2011).
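To see what this definition implies, here is a minimal Python sketch of the compounding, treating each eighteen-month span as one generation; the year spans chosen are illustrative:

```python
# Data density doubles every eighteen-month generation at roughly
# constant cost, so growth compounds as 2^(months / 18).
GENERATION_MONTHS = 18

def density_multiplier(months: float) -> float:
    """Factor by which data density grows over a span of months."""
    return 2 ** (months / GENERATION_MONTHS)

for years in (3, 10, 20):
    print(f"{years:2d} years -> ~{density_multiplier(years * 12):,.0f}x density")
# 3 years -> ~4x, 10 years -> ~102x, 20 years -> ~10,321x
```

The compounding is why the trend matters so much for AI: two decades of it buys roughly a ten-thousandfold increase in data density at about the same cost.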

As previously mentioned, Moore’s law is not a physical law of science. Rather, it may be considered a trend or a general rule, which raises the following question: How long will Moore’s law hold? We will address this and other questions in part 2 of this post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Will Strong Artificially Intelligent Machines Become Self-Conscious? Part 2/2 (Conclusion)

Part 1 of this post ended with an important question: “How can we determine whether an intelligent machine has become conscious (self-aware)?” We will address this question in this post, along with some ethical dilemmas.

We do not yet have a way to determine whether even another human is self-aware. I know only that I am self-aware. I assume that since we share the same physiology, including similar human brains, you are probably self-aware as well. However, even if we discuss various topics and I conclude that your intelligence is equal to mine, I still cannot prove you are self-aware. Only you know whether you are self-aware.

The problem becomes even more difficult when dealing with an intelligent machine. The gold standard for judging whether an intelligent machine is equal to the human mind is the Turing test, which I discuss in chapter 5 of my book, The Artificial Intelligence Revolution. (If you are not familiar with the Turing test, a simple Google search will provide numerous sources for learning about it.) As of today no intelligent machine can pass the Turing test unless its interactions are restricted to a specific topic, such as chess. However, even if an intelligent machine does pass the Turing test and exhibits strong AI, how can we be sure it is self-aware? Intelligence may be a necessary condition for self-awareness, but it may not be sufficient. The machine may emulate consciousness to the point that we conclude it must be self-aware, but that does not equal proof.

Even though other tests, such as the ConsScale test, have been proposed to determine machine consciousness, we still come up short. The ConsScale test evaluates the presence of features inspired by biological systems, such as social behavior. It also measures the cognitive development of an intelligent machine. This is based on the assumption that intelligence and consciousness are strongly related. The community of AI researchers, however, does not universally accept the ConsScale test as proof of consciousness. In the final analysis, I believe most AI researchers agree on only two points:

  1. There is no widely accepted empirical definition of consciousness (self-awareness).
  2. A test to determine the presence of consciousness (self-awareness) may be impossible, even if the subject being tested is a human being.

The above two points, however, do not rule out the possibility of intelligent machines becoming conscious and self-aware. They merely make the point that it will be extremely difficult to prove consciousness and self-awareness.

Ray Kurzweil predicts that by 2029 reverse engineering of the human brain will be completed, and nonbiological intelligence will combine the subtlety and pattern-recognition strength of human intelligence with the speed, memory, and knowledge sharing of machine intelligence (The Age of Spiritual Machines, 1999). I interpret this to mean that all aspects of the human brain will be replicated in an intelligent machine, including artificial consciousness. At this point intelligent machines either will become self-aware or emulate self-awareness to the point that they are indistinguishable from their human counterparts.

The prospect of self-aware intelligent machines equivalent to human minds presents humankind with two serious ethical dilemmas.

  1. Should self-aware machines be considered a new life-form?
  2. Should self-aware machines have “machine rights” similar to human rights?

Since a self-aware intelligent machine that is equivalent to a human mind is still a theoretical subject, the ethics addressing the above two questions have not been discussed or developed to any great extent. Kurzweil, however, predicts that self-aware intelligent machines on par with or exceeding the human mind eventually will obtain legal rights by the end of the twenty-first century. Perhaps he is correct, but I think we need to be extremely careful regarding what legal rights self-aware intelligent machines are granted. If they are given rights on par with humans, we may have a situation where the machines become the dominant species on this planet and pose a potential threat to humankind. More about this in upcoming posts.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte