
Digital illustration of a human face composed of blue lines and circuitry patterns, symbolizing artificial intelligence and technology.

Artificial Intelligence Gives Rise to Intelligent Agents – Part 2/3

In the last post (Part 1/3), we made the point that no unifying theory guides AI research. Researchers disagree among themselves, and we have more questions than answers. Here are two major questions that still haunt AI research.

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

Why do the above questions still haunt AI? Let us consider some examples.

  • Similar types of questions arose in other scientific fields. For example, in the early stages of aeronautics, engineers questioned whether flying machines should incorporate bird biology. Eventually bird biology proved to be a dead end and irrelevant to aeronautics.
  • When it comes to solving problems, humans rely heavily on experience and augment it with reasoning. In business, for example, every problem encountered has numerous solutions, and the solution chosen is biased by the paradigms of those involved. If the problem is related to increasing the production of a product being manufactured, some managers may add more people to the work force, some may work at improving efficiency, and some may do both. I have long held the belief that for every problem we face in industry, there are at least ten solutions, and eight of them, although different, yield equivalent results. Looking at the previous example, you may be tempted to believe improving efficiency is a superior (i.e., more elegant) solution compared with increasing the work force. Improving efficiency, however, costs time and money; in many cases it is more expedient to increase the work force. My point is that humans approach a problem by drawing on accumulated life experience, which may not even relate directly to the specific problem, and augmenting that experience with reasoning. Given the way human minds work, it is only natural to ask whether intelligent machines will have to approach problem solving in a similar way, namely by solving numerous unrelated problems as a path to the specific solution required.

Scientific work in AI dates back to the 1940s, long before the AI field had an official name. Early research in the 1940s and 1950s focused on attempting to simulate the human brain by using rudimentary cybernetics (i.e., control systems). Control systems use a two-step approach to controlling their environment.

    1. An action by the system generates some change in its environment.
    2. The system senses that change (i.e., feedback), which triggers the system to change in response.

A simple example of this type of control system is a thermostat. If you set it for a specific temperature, for example 72 degrees Fahrenheit, and the temperature drops below the set point, the thermostat will turn on the furnace. If the temperature increases above the set point, the thermostat will turn off the furnace. However, during the 1940s and 1950s, the entire area of brain simulation and cybernetics was a concept ahead of its time. While elements of these fields would survive, the approach of brain simulation and cybernetics was largely abandoned as access to computers became available in the mid-1950s.
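
To make the two-step loop concrete, here is a minimal sketch in Python of a thermostat-style controller. The Furnace class, the hysteresis band, and the simulated temperature readings are hypothetical details added for illustration; only the sense-then-act cycle reflects the control-system idea described above.

```python
# Minimal sketch of a two-step feedback controller (thermostat-style).
# The Furnace class and the temperature readings are hypothetical stand-ins
# for real hardware; the sense-then-act loop is the point of the example.

class Furnace:
    def __init__(self):
        self.running = False

    def turn_on(self):
        self.running = True

    def turn_off(self):
        self.running = False


def thermostat_step(current_temp_f, set_point_f, furnace, hysteresis_f=1.0):
    """One control cycle: sense the environment, then act on the feedback."""
    if current_temp_f < set_point_f - hysteresis_f:
        furnace.turn_on()      # temperature below set point: heat
    elif current_temp_f > set_point_f + hysteresis_f:
        furnace.turn_off()     # temperature above set point: stop heating
    # within the hysteresis band: leave the furnace state unchanged


if __name__ == "__main__":
    furnace = Furnace()
    for reading in [70.0, 71.5, 72.8, 73.5]:   # simulated sensor feedback
        thermostat_step(reading, set_point_f=72.0, furnace=furnace)
        print(f"temp={reading}F furnace_on={furnace.running}")
```

Each call to thermostat_step is one pass through the two steps listed above: the furnace's action changes the room temperature, and the next sensed reading feeds back into the decision.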

With access to electronic digital programmable computers in the mid-1950s, AI researchers began to focus on symbol manipulation (i.e., the manipulation of mathematical expressions, as is found in algebra) to emulate human intelligence. Three institutions led the charge: Carnegie Mellon University, Stanford University, and the Massachusetts Institute of Technology (MIT). Each university had its own style of research, which the American philosopher John Haugeland (1945–2010) named “good old-fashioned AI” or “GOFAI.”
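
To give a flavor of what symbol manipulation means in practice, here is a small, hypothetical sketch in Python of rule-based rewriting of algebraic expressions. It is not taken from any early GOFAI program; it only illustrates a machine operating on expressions as symbols rather than on numerical values.

```python
# Toy rule-based symbolic simplifier: expressions are nested tuples,
# e.g. ("add", "x", 0) represents x + 0. The rules rewrite symbols, not numbers.

def simplify(expr):
    if not isinstance(expr, tuple):
        return expr                      # a bare symbol or constant
    op, left, right = expr
    left, right = simplify(left), simplify(right)
    if op == "add" and right == 0:
        return left                      # x + 0  ->  x
    if op == "mul" and right == 1:
        return left                      # x * 1  ->  x
    if op == "mul" and right == 0:
        return 0                         # x * 0  ->  0
    return (op, left, right)


print(simplify(("add", ("mul", "x", 1), 0)))   # prints: x
```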

In the conclusion to this article (Part 3/3), we will discuss the approaches that researchers pursued using electronic digital programmable computers.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Digital illustration of a human face composed of blue lines and circuitry patterns, symbolizing artificial intelligence and technology.

Artificial Intelligence Gives Rise to Intelligent Agents – Part 1/3

The road to intelligent machines has been difficult, filled with hairpin curves, steep hills, crevices, potholes, intersections, stop signs, and occasionally smooth and straight sections. The initial over-the-top optimism of AI founders John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon set unrealistic expectations. According to their predictions, by now every household should have its own humanoid robot to cook, clean, do yard work, and handle every other conceivable household task we humans perform.

During the course of my career, I have managed hundreds of scientists and engineers. In my experience they are, for the most part, overly optimistic as a group. When they say something is finished, it usually means it’s in the final stages of testing or inspection. When they say they will have a problem solved in a week, it usually means a month or more. Whatever schedules they give us—the management—we normally have to pad, sometimes doubling them, before we use them for planning or give them to our clients. It is just part of their nature to be optimistic, believing the tasks associated with the goals will go off without a hitch, or that the solution to a problem is just one experiment away. Often if you ask a simple question, you’ll receive the “theory of everything” as a reply. If the question relates to a problem, the answer will involve the history of humankind, and fingers will be pointed in every direction. I am exaggerating slightly to make a point, but as humorous as this may sound, there is more than a kernel of truth in what I’ve stated.

This type of optimism accompanied the founding of AI. The founders dreamed with sugarplums in their heads, and we wanted to believe it. We wanted the world to be easier. We wanted intelligent machines to do the heavy lifting and drudgery of everyday chores. We did not have to envision it. The science-fiction writers of television series such as Star Trek envisioned it for us, and we wanted to believe that artificial life-forms, such as Lieutenant Commander Data on Star Trek: The Next Generation, were just a decade away. However, that is not what happened. The field of AI did not change the world overnight or even in a decade. Much like a ninja, it slowly and invisibly crept into our lives over the last half century, disguised behind “smart” applications.

After several starts and stops and two AI winters, AI researchers and engineers started to get it right. Instead of building a do-it-all intelligent machine, they focused on solving specific applications. To address the applications, researchers pursued various approaches for specific intelligent systems. After accomplishing that, they began to integrate the approaches, which brought us closer to artificial “general” intelligence, equal to human intelligence.

Many people not engaged in professional scientific research believe that scientists and engineers follow a strict orderly process, sometimes referred to as the “scientific method,” to develop and apply new technology. Let me dispel that paradigm. It is simply not true. In many cases a scientific field is approached via many different angles, and the approaches depend on the experience and paradigms of those involved. This is especially true in regard to AI research, as will soon become apparent.

The most important concept to understand is that no unifying theory guides AI research. Researchers disagree among themselves, and we have more questions than answers. Here are two major questions that still haunt AI research.

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

Why do the above questions still haunt AI? We will discuss this in the next post (Part 2/3).

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Close-up of a glowing microchip on a dark blue circuit board, highlighting intricate electronic components.

Moore’s Law As It Applies to Artificial Intelligence – Part 2/2

As previously mentioned in part 1 of this blog post, Moore’s law is not a physical law of science. Rather, it may be considered a trend or a general rule. This raises the following question.

How Long Will Moore’s Law Hold?

There are numerous estimates regarding how long Moore’s law will hold. Since it is not a physical law, its applicability is routinely questioned. For approximately the last half century, each new estimate, made at various points in time, has predicted that Moore’s law would hold for about another decade.

In 2005 Gordon Moore stated in an interview that Moore’s law “can’t continue forever. The nature of exponentials is that you push them out and eventually disaster happens.” Moore noted that transistors eventually would reach the limits of miniaturization at atomic levels. “In terms of size [of transistors] you can see that we’re approaching the size of atoms, which is a fundamental barrier, but it’ll be two or three generations before we get that far—but that’s as far out as we’ve ever been able to see. We have another 10 to 20 years before we reach a fundamental limit.”

However, new technologies are emerging that replace transistors altogether with individually positioned molecules. In these devices the computer “switches” are not transistors; the positions of the molecules themselves act as the switches. This technology is predicted to emerge by 2020 (Baptiste Waldner, Nanocomputers and Swarm Intelligence, 2008).

Some see Moore’s law extending far into the future. Lawrence Krauss and Glenn D. Starkman predicted an ultimate limit of around six hundred years (Lawrence M. Krauss, Glenn D. Starkman, “Universal Limits of Computation,” arXiv:astro-ph/0404510, May 10, 2004).

I worked in the semiconductor industry for more than thirty years, during which time Moore’s law always appeared to be on the verge of hitting an impenetrable barrier. That barrier never materialized; new technologies constantly seemed to provide a stay of execution. We know that at some point the trend may change, but no one has made a definitive case as to when it will end. The difficulty in predicting the end has to do with how one interprets Moore’s law. If one takes Moore’s original interpretation, which defined the trend in terms of the number of transistors that could be put on an integrated circuit, the end point may be somewhere around 2018 to 2020. Defining it in terms of the “data density of an integrated circuit,” however, as we did regarding AI, removes the constraint of transistors and opens up a new array of technologies, including molecular positioning.

Will Moore’s law hold for another decade or another six hundred years? No one really knows the answer. Most people believe that eventually the trend will end, but when and why remain unanswered questions. If it does end, and Moore’s law no longer applies, another question emerges.

What Will Replace Moore’s Law?

Ray Kurzweil views Moore’s law in much the same way we defined it, not tied to specific technologies but rather as a “paradigm to forecast accelerating price-performance ratios.” From Kurzweil’s viewpoint:

 Moore’s law of Integrated Circuits was not the first, but the fifth paradigm to forecast accelerating price-performance ratios. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to [Newman’s] relay-based “[Heath] Robinson” machine that cracked the Lorenz cipher, to the CBS vacuum tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated circuit-based personal computer. (Raymond Kurzweil, “The Law of Accelerating Returns,” www.KurzweilAI.net)

In the wider sense, Moore’s law is not about transistors or specific technologies. In my opinion it is a paradigm related to humankind’s creativity. The new computers following Moore’s law may be based on some new type of technology (e.g., optical computers, quantum computers, DNA computing) that bears little to no resemblance to current integrated-circuit technology. It appears that what Moore really uncovered was humankind’s ability to cost-effectively accelerate technology performance.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

A pair of headphones hangs in front of a glowing red and white "ON AIR" sign in a radio studio.

Del Monte Radio Interview: The Artificial Intelligence Revolution

This is a recording of my discussion with John Counsell on “Late Night Counsell” AM580/CFRA Ottawa. The discussion was based on my new book, The Artificial Intelligence Revolution (2014). You can listen to the interview and call-ins at this URL: https://tunein.com/radio/Late-Night-Counsell-p50752. The page archives recent “Late Night Counsell” shows; to hear my interview, click on the June 26, 2014 show.

Close-up of a glowing microchip on a dark blue circuit board, highlighting intricate electronic components.

Moore’s Law As It Applies to Artificial Intelligence – Part 1/2

Intel cofounder Gordon E. Moore was the first to note a peculiar trend, namely that the number of components in integrated circuits had doubled every year from the 1958 invention of the integrated circuit until 1965. In Moore’s own words:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.…Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer. (Gordon E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics magazine, 1965)

 In 1970 Caltech professor, VLSI pioneer, and entrepreneur Carver Mead coined the term “Moore’s law,” referring to a statement made by Gordon E. Moore, and the phrase caught on within the scientific community.

In 1975 Moore revised his prediction, from the number of components in integrated circuits doubling every year to doubling every two years. Intel executive David House noted that Moore’s revised prediction implied computer performance would double every eighteen months, because the transistors were not only becoming more numerous but also faster.
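
One way to read House’s eighteen-month figure is as two exponentials compounding: transistor counts doubling roughly every two years and per-transistor speed improving on its own slower curve. The sketch below uses an assumed speed-doubling period, chosen purely so the arithmetic is easy to follow, to show how the two rates combine.

```python
# Combine two exponential improvements into one effective doubling period.
# The speed-doubling period is a hypothetical value chosen for illustration;
# it is not a figure from the original sources.

transistor_doubling_months = 24.0   # Moore's 1975 revision: count doubles every two years
speed_doubling_months = 72.0        # assumed pace of per-transistor speed gains

# When two exponential factors multiply, their growth rates (1/doubling period) add.
combined_doubling = 1.0 / (1.0 / transistor_doubling_months
                           + 1.0 / speed_doubling_months)
print(f"combined performance doubling is about {combined_doubling:.0f} months")  # ~18
```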

From the above discussion, it is obvious that Moore’s law has been stated a number of ways and has changed over time. In the strict sense, it is not a physical law but more of an observation and guideline for planning. In fact many semiconductor companies use Moore’s law to plan their long-term product offerings. There is a deeply held belief in the semiconductor industry that adhering to Moore’s law is required to remain competitive. In this sense it has become a self-fulfilling prophecy. For our purposes in understanding AI, let us address the following question.

What Is Moore’s Law?

As it applies to AI, we will define Moore’s law as follows: The data density of an integrated circuit and the associated computer performance will cost-effectively double every eighteen months. If we consider eighteen months to represent a technology generation, this means every eighteen months we receive double the data density and associated computer performance at approximately the same cost as the previous generation. Most experts, including Moore, expect Moore’s law to hold for at least another two decades, but this is debatable, as I discuss in part 2 of this post. Below is a graphical depiction (courtesy of Wikimedia Commons) of Moore’s law, illustrating transistor counts for integrated circuits plotted against their dates of introduction (1971–2011).
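
Taking the working definition above at face value, the short sketch below projects how the doubling compounds over time; the starting density of 1.0 and the time horizons are arbitrary values chosen only for illustration.

```python
# Project relative data density under the working definition of Moore's law:
# a cost-effective doubling every eighteen months (one technology generation).

GENERATION_MONTHS = 18

def projected_density(initial_density, years):
    generations = (years * 12) / GENERATION_MONTHS
    return initial_density * 2 ** generations

# Relative density after a few horizons, starting from an arbitrary 1.0.
for years in (3, 6, 9, 12, 15):
    print(f"{years:>2} years -> {projected_density(1.0, years):,.0f}x")
```

Fifteen years of the trend, for example, spans ten generations, or roughly a thousandfold increase in data density at approximately the same cost.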

As previously mentioned, Moore’s law is not a physical law of science. Rather, it may be considered a trend or a general rule. This raises the following question: how long will Moore’s law hold? We will address this and other questions in part 2 of this post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte