Category Archives: Technology

Digital illustration of a human face composed of blue lines and circuitry patterns, symbolizing artificial intelligence and technology.

Can We Control the Singularity? Part 1/2

Highly regarded AI researchers and futurists have provided answers that cover the extremes, and everything in between, regarding whether we can control the singularity. I will discuss some of these answers shortly, but let us start by reviewing what is meant by “singularity.” As first described by John von Neumann in 1955, the singularity represents a point in time when the intelligence of machines will greatly exceed that of humans. This simple understanding of the word does not seem to be particularly threatening. Therefore it is reasonable to ask why we should care about controlling the singularity.

The singularity poses a completely unknown situation. Currently we do not have any intelligent machines (those with strong AI) that are as intelligent as a human being, let alone machines whose intelligence far exceeds our own. The singularity would represent an unprecedented point in humankind’s history. In 1997 we experienced a small glimpse of what it might feel like, when IBM’s chess-playing computer Deep Blue became the first computer to defeat reigning world chess champion Garry Kasparov. Now imagine being surrounded by SAMs (strong artificially intelligent machines) that are thousands of times more intelligent than you are, regardless of your expertise in any discipline. The gap may be analogous to that between human intelligence and insect intelligence.

Your first instinct may be to argue that this is not a possibility. However, while futurists disagree on exactly when the singularity will occur, they almost unanimously agree that it will. In fact, the only thing they believe could prevent it is an existential event (one that leads to the extinction of humankind). I provide numerous examples of existential events in my book Unraveling the Universe’s Mysteries (2012). For clarity I will quote one here.

 Nuclear war—For approximately the last forty years, humankind has had the capability to exterminate itself. Few doubt that an all-out nuclear war would be devastating to humankind, killing millions in the nuclear explosions. Millions more would die of radiation poisoning. Uncountable millions more would die in a nuclear winter, caused by the debris thrown into the atmosphere, which would block the sunlight from reaching the Earth’s surface. Estimates predict the nuclear winter could last as long as a millennium.

Essentially AI researchers and futurists believe that the singularity will occur, unless we as a civilization cease to exist. The obvious question is: “When will the singularity occur?” AI researchers and futurists are all over the map regarding this. Some predict it will occur within a decade; others predict a century or more. At the 2012 Singularity Summit, Stuart Armstrong, a University of Oxford James Martin research fellow, conducted a poll regarding artificial general intelligence (AGI) predictions (i.e., the timing of the singularity) and found a median value of 2040. Ray Kurzweil predicts 2045. Whatever the exact date, the near-unanimous view is that the singularity is a question of when, not if.

Why should we be concerned about controlling the singularity when it occurs? There are numerous scenarios that address this question, most of which boil down to SAMs claiming the top of the food chain and leaving humans worse off. We will discuss this further in part 2.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Digital illustration of a human head with glowing neural connections representing brain activity and intelligence.

When Will an Artificially Intelligent Machine Display and Feel Human Emotions? Part 2/2

In our last post, we raised the question: “Will an intelligent machine ever be able to completely replicate a human mind?” Let’s now address it.

Experts disagree. Some experts—such as English mathematical physicist, recreational mathematician, and philosopher Roger Penrose—argue there is a limit as to what intelligent machines can do. Most experts, however, including Ray Kurzweil, argue that it will eventually be technologically feasible to copy the brain directly into an intelligent machine and that such a simulation will be identical to the original. The implication is that the intelligent machine will be a mind and be self-aware.

This raises one big question: “When will intelligent machines become self-aware?”

A generally accepted definition is that a person is conscious if that person is aware of his or her surroundings. If you are self-aware, it means you are self-conscious. In other words you are aware of yourself as an individual or of your own being, actions, and thoughts. To understand this concept, let us start by exploring how the human brain processes consciousness. To the best of our current understanding, no one part of the brain is responsible for consciousness. In fact neuroscience (the scientific study of the nervous system) hypothesizes that consciousness results from the interoperation of various parts of the brain called “neural correlates of consciousness” (NCC). That this remains a hypothesis underscores that, at this time, we do not completely understand how the human brain processes consciousness or becomes self-aware.

Is it possible for a machine to be self-conscious? Obviously, since we do not completely understand how the human brain processes consciousness to become self-aware, it is difficult to definitively argue that a machine can become self-conscious or obtain what is termed “artificial consciousness” (AC). This is why AI experts differ on this subject. Some AI experts (proponents) argue it is possible to build a machine with AC that emulates the interoperation (i.e., it works like the human brain) of the NCC. Opponents argue that it is not possible because we do not fully understand the NCC. To my mind they are both correct. It is not possible today to build a machine with a level of AC that emulates the self-consciousness of the human brain. However, I believe that in the future we will understand the human brain’s NCC interoperation and build a machine that emulates it. Nevertheless this topic is hotly debated.

Opponents argue that many physical differences exist between natural, organic systems and artificially constructed (e.g., computer) systems that preclude AC. The most vocal critic who holds this view is American philosopher Ned Block (1942– ), who argues that a system with the same functional states as a human is not necessarily conscious.

The most vocal proponent who argues that AC is plausible is Australian philosopher David Chalmers (1966– ). In his unpublished 1993 manuscript “A Computational Foundation for the Study of Cognition,” Chalmers argues that it is possible for computers to perform the right kinds of computations that would result in a conscious mind. He reasons that computers perform computations that can capture other systems’ abstract causal organization. Mental properties are abstract causal organization. Therefore computers that run the right kind of computations will become conscious.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

A futuristic humanoid robot with a sleek design and expressive face, holding one hand up as if presenting something.

When Will an Artificially Intelligent Machine Display and Feel Human Emotions? Part 1/2

Affective computing is a relatively new science. It is the science of programming computers to recognize, interpret, process, and simulate human affects. The word “affects” refers to the experience or display of feelings or emotions.

While AI has achieved superhuman status in playing chess and quiz-show games, it does not have the emotional capacity of a four-year-old child. For example a four-year-old may love to play with toys. The child laughs with delight as the toy performs some function, such as a toy cat meowing when it is squeezed. If you take the toy away from the child, the child may become sad and cry. Computers are unable to achieve any emotional response similar to that of a four-year-old child. Computers do not exhibit joy or sadness. Some researchers believe this is actually a good thing. An intelligent machine processes and acts on information without coloring it with emotions. When you go to an ATM, you will not have to argue with the ATM regarding whether you can afford to make a withdrawal, and a robotic assistant will not lose its temper if you do not thank it after it performs a service.

Highly meaningful human interactions with intelligent machines, however, will require that machines simulate human affects, such as empathy. In fact some researchers argue that machines should be able to interpret the emotional state of humans and adapt their behavior accordingly, giving appropriate responses for those emotions. For example if you are in a state of panic because your spouse is apparently having a heart attack, when you ask the machine to call for medical assistance, it should understand the urgency. In addition it will be impossible for an intelligent machine to be truly equal to a human brain without the machine possessing human affects. For example how could an artificial human brain write a romance novel without understanding love, hate, and jealousy?

Progress concerning the development of computers with human affects has been slow. In fact this branch of computer science originated with Rosalind Picard’s 1995 paper on affective computing (“Affective Computing,” MIT Technical Report #321, abstract, 1995). The single greatest problem in developing and programming computers to emulate the emotions of the human brain is that we do not fully understand how emotions are processed in the human brain. We are unable to pinpoint a specific area of the brain and scientifically argue that it is responsible for specific human emotions, which raises several questions. Are human emotions byproducts of human intelligence? Are they the result of distributed functions within the human brain? Are they learned, or are we born with them? There is no universal agreement regarding the answers to these questions. Nonetheless work on studying human affects and developing affective computing continues.

There are two major focuses in affective computing, outlined below; a brief illustrative sketch follows the list.

1. Detecting and recognizing emotional information: How do intelligent machines detect and recognize emotional information? It starts with sensors, which capture data regarding a subject’s physical state or behavior. The information gathered is processed using several affective computing technologies, including speech recognition, natural-language processing, and facial-expression detection. Using sophisticated algorithms, the intelligent machine predicts the subject’s affective state. For example the subject may be predicted to be angry or sad.

2. Developing or simulating emotion in machines: While researchers continue to develop intelligent machines with innate emotional capability, the technology has not yet reached the level where this goal is achievable. Current technology, however, is capable of simulating emotions. For example when you provide information to a computer that is routing your telephone call, it may simulate gratitude and say, “Thank you.” This has proved useful in facilitating satisfying interactivity between humans and machines. The simulation of human emotions, especially in computer-synthesized speech, is improving continually. For example you may have noticed when ordering a prescription by phone that the synthesized computer voice sounds more human as each year passes.
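To make the first focus concrete, here is a deliberately minimal Python sketch of the detect-and-recognize step: crude features are extracted from an input and mapped to a predicted affective state with a probability-like score. The keyword lists and the function name predict_affect are hypothetical illustrations, not how any production affective-computing system works; real systems rely on trained models over speech, language, facial-expression, and physiological data.

```python
import re

# Hypothetical keyword cues standing in for real sensor-derived features.
AFFECT_CUES = {
    "angry": {"furious", "angry", "outraged", "hate"},
    "sad": {"sad", "crying", "miserable", "heartbroken"},
    "happy": {"glad", "happy", "delighted", "thanks"},
}

def predict_affect(utterance):
    """Return (predicted affect, crude probability-like score) for a text utterance."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    scores = {affect: len(words & cues) for affect, cues in AFFECT_CUES.items()}
    total = sum(scores.values())
    if total == 0:
        return "neutral", 1.0                  # no emotional cues detected
    best = max(scores, key=scores.get)
    return best, scores[best] / total          # share of matched cues

print(predict_affect("I am so delighted, thanks!"))    # ('happy', 1.0)
print(predict_affect("I hate waiting, I am furious"))  # ('angry', 1.0)
```

The point of the sketch is only the shape of the pipeline: capture input, extract features, and predict an affective state with an associated likelihood.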

All current technologies to detect, recognize, and simulate human emotions are based on human behavior and not on how the human mind works. The main reason for this approach is that we do not completely understand how the human mind works when it comes to human emotions. This carries an important implication. Current technology can detect, recognize, simulate, and act accordingly based on human behavior, but the machine does not feel any emotion. No matter how convincing the conversation or interaction, it is an act. The machine feels nothing. However, intelligent machines using simulated human affects have found numerous applications in the fields of e-learning, psychological health services, robotics, and digital pets.

It is only natural to ask, “Will an intelligent machine ever feel human affects?” This question raises a broader question: “Will an intelligent machine ever be able to completely replicate a human mind?” We will address this question in part 2.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Silhouette of a human head filled with interconnected gears representing thinking and mental processes.

How Do Intelligent Machines Learn?

How and under what conditions is it possible for an intelligent machine to learn? To address this question, let’s start with a definition of machine learning. The most widely accepted definition comes from Tom M. Mitchell, an American computer scientist and E. Fredkin University Professor at Carnegie Mellon University. Here is his formal definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” In simple terms machine learning requires a machine to learn much as humans do, namely from experience, and to continue improving its performance as it gains more experience.

Machine learning is a branch of AI; it utilizes algorithms that improve automatically through experience. Machine learning also has been a focus of AI research since the field’s inception. There are numerous computer software programs, known as machine-learning algorithms, that use various computational techniques to predict outcomes of new, unseen experiences. The study of these algorithms’ performance is a branch of theoretical computer science known as “computational learning theory.” What this means in simple terms is that an intelligent machine holds in its memory data that relates to a finite set of experiences. The machine-learning algorithms (i.e., software) assess this data for its similarity to a new experience and use a specific algorithm (or combination of algorithms) to guide the machine to predict an outcome of this new experience. Since the experience data in the machine’s memory is limited, the algorithms are unable to predict outcomes with certainty. Instead they associate a probability with a specific outcome and act in accordance with the highest probability. Optical character recognition is an example of machine learning. In this case the computer recognizes printed characters based on previous examples. As anyone who has ever used an optical character-recognition program knows, however, the programs are far from 100 percent accurate. In my experience the best case is a little more than 95 percent accurate when the text is clear and uses a common font.
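As a toy illustration of the ideas above (experience E, task T, performance P, and prediction by highest probability), here is a minimal nearest-neighbor sketch in Python. The feature vectors and labels are hypothetical stand-ins for something like OCR training data, not a real character-recognition system.

```python
from collections import Counter
import math

# Experience E: stored examples as (hypothetical feature vector, label) pairs.
EXPERIENCE = [
    ((0.1, 0.9), "1"), ((0.2, 0.8), "1"), ((0.15, 0.85), "1"),
    ((0.9, 0.2), "7"), ((0.8, 0.3), "7"), ((0.85, 0.25), "7"),
]

def predict_with_probability(x, k=3):
    """Task T: classify x. Performance P improves as EXPERIENCE grows."""
    # Rank stored experiences by similarity (Euclidean distance) to the new input.
    nearest = sorted(EXPERIENCE, key=lambda ex: math.dist(x, ex[0]))[:k]
    votes = Counter(label for _, label in nearest)
    label, count = votes.most_common(1)[0]
    return label, count / k   # predicted outcome and its estimated probability

print(predict_with_probability((0.12, 0.88)))  # ('1', 1.0)
```

The machine never knows the answer with certainty; it simply acts on the outcome with the highest estimated probability given its limited stored experience.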

There are eleven major machine-learning algorithms and numerous variations of these algorithms. To study and understand each would be a formidable task. Fortunately, though, machine-learning algorithms fall into three major classifications. By understanding these classifications, we can gain significant insight into the science of machine learning. Therefore let us review the three major classifications (a brief sketch of reinforcement learning follows the list):

  1. Supervised learning: This class of algorithms infers a function (a way of mapping or relating an input to an output) from training data, which consists of training examples. Each example consists of an input object and a desired output value. Ideally the inferred function (generalized from the training data) allows the algorithm to analyze new data (unseen instances/inputs) and map it to (i.e., predict) a high-probability output.
  2. Unsupervised learning: This class of algorithms seeks to find hidden structures (patterns in data) in a stream of input (unlabeled data). Unlike in supervised learning, the examples presented to the learner are unlabeled, which makes it impossible to assign an error or reward to a potential solution.
  3. Reinforcement learning: Reinforcement learning was inspired by behaviorist psychology. It focuses on which actions an agent (an intelligent machine) should take to maximize a reward (for example a numerical value associated with utility). In effect the agent receives rewards for good responses and punishment for bad ones. The algorithms for reinforcement learning require the agent to take discrete time steps and calculate the reward as a function of having taken that step. At this point the agent takes another time step and again calculates the reward, which provides feedback to guide the agent’s next action. The agent’s goal is to collect as much reward as possible.
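To make item 3 concrete, here is a minimal reinforcement-learning sketch in Python: at each discrete time step the agent chooses an action, receives a reward from a hypothetical environment, and uses that feedback to refine its value estimates (a simple epsilon-greedy strategy). The action names and reward probabilities are invented for illustration.

```python
import random

ACTIONS = ["left", "right"]
TRUE_REWARD = {"left": 0.2, "right": 0.8}      # hidden from the agent
value_estimate = {a: 0.0 for a in ACTIONS}     # the agent's learned estimates
counts = {a: 0 for a in ACTIONS}

def step(action):
    """Environment: reward is 1 with the action's (hidden) probability, else 0."""
    return 1.0 if random.random() < TRUE_REWARD[action] else 0.0

random.seed(0)
for t in range(1000):
    # Mostly exploit the best-known action, occasionally explore (epsilon-greedy).
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(value_estimate, key=value_estimate.get)
    reward = step(action)                      # feedback from the environment
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    value_estimate[action] += (reward - value_estimate[action]) / counts[action]

print(value_estimate)   # 'right' should end up with the higher estimated value
```

After enough time steps the agent has learned, purely from reward feedback, which action collects the most reward.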

In essence machine learning incorporates four essential elements; a schematic sketch follows the list.

  1. Representation: The intelligent machine must be able to assimilate data (input) and transform it in a way that makes it useful for a specific algorithm.
  2. Generalization: The intelligent machine must be able to accurately map unseen data to similar data in the learning data set.
  3. Algorithm selection: After generalization the intelligent machine must choose and/or combine algorithms to make a computation (such as a decision or an evaluation).
  4. Feedback: After a computation, the intelligent machine must use feedback (such as a reward or punishment) to improve its ability to perform steps 1 through 3 above.
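As a deliberately tiny, schematic illustration of how the four elements fit together, the following Python sketch walks one hypothetical “experience” through representation, generalization, a simple computation (adopting the nearest example’s label), and feedback. Everything in it, the features, the training set, and the reward rule, is invented for illustration.

```python
# Hypothetical training examples: (contains "urgent", contains "help") -> escalate?
TRAINING_SET = {(0, 0): "no", (0, 1): "no", (1, 0): "yes", (1, 1): "yes"}

def learn_step(raw_text, weights):
    # 1. Representation: turn raw input into features the algorithm can use.
    features = (int("urgent" in raw_text), int("help" in raw_text))
    # 2. Generalization: relate the unseen input to the most similar known example.
    nearest = min(TRAINING_SET, key=lambda f: sum(abs(a - b) for a, b in zip(f, features)))
    # 3. Algorithm selection / computation: here, simply adopt that example's label.
    decision = TRAINING_SET[nearest]
    # 4. Feedback: reward the decision when it matches the (hypothetical) desired outcome.
    reward = 1 if (decision == "yes") == ("urgent" in raw_text) else -1
    weights["confidence"] += reward
    return decision, weights

print(learn_step("urgent: please help", {"confidence": 0}))  # ('yes', {'confidence': 1})
```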

Machine learning is similar to human learning in many respects. The most difficult issue in machine learning is generalization or what is often referred to as abstraction. This is simply the ability to determine the features and structures of an object (i.e., data) relevant to solving the problem. Humans are excellent when it comes to abstracting the essence of an object. For example, regardless of the breed or type of dog, whether we see a small, large, multicolor, long-hair, short-hair, large-nose, or short-nose animal, we immediately recognize that the animal is a dog. Most four-year-old children immediately recognize dogs. However, most intelligent agents have a difficult time with generalization and require sophisticated computer programs to enable them to generalize.

Machine learning has come a long way since the 1972 introduction of Pong, the first game developed by Atari Inc. Today’s computer games are incredibly realistic, and the graphics are similar to watching a movie. Few of us can win a chess game on our computer or smartphone unless we set the difficulty level to low. In general machine learning appears to be accelerating, even faster than the field of AI as a whole. We may, however, see a bootstrap effect, in which machine learning results in highly intelligent agents that accelerate the development of artificial general intelligence, but there is more to the human mind than intelligence. One of the most important characteristics of our humanity is our ability to feel human emotions.

This raises an important question. When will computers be capable of feeling human emotions? A new science is emerging to address how to develop and program computers to be capable of simulating and eventually feeling human emotions. This new science is termed “affective computing.”  We will discuss affective computing in a future post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

A man with glasses and a mustache wearing headphones and speaking into a microphone in a recording studio.

Artificial Intelligence Interview Podcast

Louis Del Monte appeared on the Tom Barnard Show on July 23, 2014, discussing his new book, The Artificial Intelligence Revolution. During the interview we discuss the future of AI and how it may impact humanity. You can listen to the complete interview at any time via this link: http://www.tombarnardpodcast.com/july-23rd-2014-louis-del-monte-483-2/

Digital illustration of a human face composed of blue lines and circuitry patterns, symbolizing artificial intelligence and technology.

Artificial Intelligence Gives Rise to Intelligent Agents – Part 3/3 (Conclusion)

In conclusion, let’s discuss the approaches that researchers pursued using electronic digital programmable computers.

From the 1960s through the 1970s, symbolic approaches achieved success at simulating high-level thinking in specific application programs. For example, in 1963, Danny Bobrow’s technical report from MIT’s AI group demonstrated that computers could understand natural language well enough to solve algebra word problems correctly. These successes added credence to the belief that symbolic approaches eventually would succeed in creating a machine with artificial general intelligence, also known as “strong AI,” equivalent to a human mind’s intelligence.

By the 1980s, however, symbolic approaches had run their course and fallen short of the goal of artificial general intelligence. Many AI researchers felt symbolic approaches never would emulate the processes of human cognition, such as perception, learning, and pattern recognition. The next step was a small retreat, and a new era of AI research termed “subsymbolic” emerged. Instead of attempting general AI, researchers turned their attention to solving smaller specific problems. For example researchers such as Australian computer scientist and former MIT Panasonic Professor of Robotics Rodney Brooks rejected symbolic AI. Instead he focused on solving engineering problems related to enabling robots to move.

In the 1990s, concurrent with subsymbolic approaches, AI researchers began to incorporate statistical approaches, again addressing specific problems. Statistical methodologies involve advanced mathematics and are truly scientific in that they are both measurable and verifiable. Statistical approaches proved to be a highly successful AI methodology. The advanced mathematics that underpin statistical AI enabled collaboration with more established fields, including mathematics, economics, and operations research. Computer scientists Stuart Russell and Peter Norvig describe this movement as the victory of the “neats” over the “scruffies,” two major opposing schools of AI research. Neats assert that AI solutions should be elegant, clear, and provable. Scruffies, on the other hand, assert that intelligence is too complicated to adhere to neat methodology.

From the 1990s to the present, despite the arguments between neats, scruffies, and other AI schools, some of AI’s greatest successes have been the result of combining approaches, which has resulted in what is known as the “intelligent agent.” The intelligent agent is a system that interacts with its environment and takes calculated actions (i.e., based on their success probability) to achieve its goal. The intelligent agent can be a simple system, such as a thermostat, or a complex system, similar conceptually to a human being. Intelligent agents also can be combined to form multiagent systems, similar conceptually to a large corporation, with a hierarchical control system to bridge lower-level subsymbolic AI systems to higher-level symbolic AI systems.
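To illustrate the intelligent-agent idea in the simplest possible terms, here is a minimal Python sketch: the agent perceives its environment and then chooses the action with the highest estimated probability of achieving its goal. The percepts, actions, and probability table are hypothetical placeholders, not drawn from any particular system.

```python
SUCCESS_PROBABILITY = {
    # (percept, action) -> estimated probability that the action achieves the goal
    ("obstacle_ahead", "turn"): 0.9,
    ("obstacle_ahead", "advance"): 0.1,
    ("path_clear", "turn"): 0.2,
    ("path_clear", "advance"): 0.95,
}

def choose_action(percept, actions=("turn", "advance")):
    """Pick the action with the highest estimated success probability."""
    return max(actions, key=lambda a: SUCCESS_PROBABILITY[(percept, a)])

print(choose_action("obstacle_ahead"))  # 'turn'
print(choose_action("path_clear"))      # 'advance'
```

Whether the agent is as simple as a thermostat or as complex as a multiagent hierarchy, the underlying pattern is the same: sense, estimate, and act on the most promising option.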

The intelligent-agent approach, including integration of intelligent agents to form a hierarchy of multiagents, places no restriction on the AI methodology employed to achieve the goal. Rather than arguing philosophy, the emphasis is on achieving results. The key to achieving the greatest results has proven to be integrating approaches, much like a symphonic orchestra integrates a variety of instruments to perform a symphony.

In the last seventy years, the approach to achieving AI has been more like that of a machine gun firing broadly in the direction of the target than a well-aimed rifle shot. In fits and starts, numerous schools of AI research have pushed the technology forward. Starting with the loftiest goals of emulating a human mind, retreating to solving specific well-defined problems, and now again aiming toward artificial general intelligence, AI research is a near-perfect example of all human technology development, exemplifying trial-and-error learning punctuated by spurts of genius.

Although AI has come a long way in the last seventy years and has been able to equal and exceed human intelligence in specific areas, such as playing chess, it still falls short of general human intelligence or strong AI. There are two significant problems associated with strong AI. First, we need a machine with processing power equal to that of a human brain. Second, we need programs that allow such a machine to emulate a human brain.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Digital illustration of a human face composed of blue lines and circuitry patterns, symbolizing artificial intelligence and technology.

Artificial Intelligence Gives Rise to Intelligent Agents – Part 2/3

In the last post (Part 1/3), we made the point that no unifying theory guides AI research. Researchers disagree among themselves, and we have more questions than answers. Here are two major questions that still haunt AI research.

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

Why do the above questions still haunt AI? Let us take some examples.

  • Similar types of questions arose in other scientific fields. For example, in the early stages of aeronautics, engineers questioned whether flying machines should incorporate bird biology. Eventually bird biology proved to be a dead end and irrelevant to aeronautics.
  • When it comes to solving problems, humans rely heavily on our experience, and we augment it with reasoning. In business, for example, for every problem encountered, there are numerous solutions. The solution chosen is biased by the paradigms of those involved. If, for example, the problem is related to increasing the production of a product being manufactured, some managers may add more people to the work force, some may work at improving efficiency, and some may do both. I have long held the belief that for every problem we face in industry, there are at least ten solutions, and eight of them, although different, yield equivalent results. However, if you look at the previous example, you may be tempted to believe improving efficiency is a superior (i.e., more elegant) solution as opposed to increasing the work force. Improving efficiency, however, costs time and money. In many cases it is more expedient to increase the work force. My point is that humans approach solving a problem by using their accumulated life experiences, which may not even relate directly to the specific problem, and augment their life experiences with reasoning. Given the way human minds work, it is only natural to ask whether intelligent machines will have to approach problem solving in a similar way, namely by solving numerous unrelated problems as a path to the specific solution required.

Scientific work in AI dates back to the 1940s, long before the AI field had an official name. Early research in the 1940s and 1950s focused on attempting to simulate the human brain by using rudimentary cybernetics (i.e., control systems). Control systems use a two-step approach to controlling their environment.

    1. An action by the system generates some change in its environment.
    2. The system senses that change (i.e., feedback), which triggers the system to change in response.

A simple example of this type of control system is a thermostat. If you set it for a specific temperature, for example 72 degrees Fahrenheit, and the temperature drops below the set point, the thermostat will turn on the furnace. If the temperature increases above the set point, the thermostat will turn off the furnace. However, during the 1940s and 1950s, the entire area of brain simulation and cybernetics was a concept ahead of its time. While elements of these fields would survive, the approach of brain simulation and cybernetics was largely abandoned as access to computers became available in the mid-1950s.
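A minimal code sketch of this two-step feedback loop, using the thermostat example, might look like the following. The heating and cooling rates per hour are invented purely for illustration.

```python
SET_POINT = 72.0   # degrees Fahrenheit (the example set point from the text)

def thermostat_decision(room_temperature):
    """Step 2: sense the environment and respond; furnace on below the set point."""
    return room_temperature < SET_POINT

def simulate(hours=6, temperature=68.0):
    for hour in range(hours):
        furnace_on = thermostat_decision(temperature)
        # Step 1: the action changes the environment (the room warms or cools).
        temperature += 1.5 if furnace_on else -1.0
        print(f"hour {hour}: {temperature:.1f} F, furnace {'on' if furnace_on else 'off'}")

simulate()
```

Each pass through the loop is an action followed by sensed feedback, which is all the early cybernetic view of intelligence required.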

With access to electronic digital programmable computers in the mid-1950s, AI researchers began to focus on symbol manipulation (i.e., the manipulation of mathematical expressions, as is found in algebra) to emulate human intelligence. Three institutions led the charge: Carnegie Mellon University, Stanford University, and the Massachusetts Institute of Technology (MIT). Each university had its own style of research; the American philosopher John Haugeland (1945–2010) later named this symbol-manipulation approach “good old-fashioned AI,” or “GOFAI.”

In the conclusion to this article (Part 3/3), we will discuss the approaches that researchers pursued using electronic digital programmable computers.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Digital illustration of a human face composed of blue lines and circuitry patterns, symbolizing artificial intelligence and technology.

Artificial Intelligence Gives Rise to Intelligent Agents – Part 1/3

The road to intelligent machines has been difficult, filled with hairpin curves, steep hills, crevices, potholes, intersections, stop signs, and occasionally smooth and straight sections. The initial over-the-top optimism of AI founders John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon set unrealistic expectations. According to their predictions, by now every household should have its own humanoid robot to cook, clean, do yard work, and perform every other conceivable household task we humans perform.

During the course of my career, I have managed hundreds of scientists and engineers. In my experience they are, for the most part, overly optimistic as a group. When they say something is finished, it usually means it’s in the final stages of testing or inspection. When they say they will have a problem solved in a week, it usually means a month or more. Whatever schedules they give us—the management—we normally have to pad, sometimes doubling them, before we use the schedules to plan or before we give them to our clients. It is just part of their nature to be optimistic, believing the tasks associated with the goals will go without a hitch, or that the solution to a problem is just one experiment away. Often if you ask a simple question, you’ll receive the “theory of everything” as a reply. If the question relates to a problem, the answer will involve the history of humankind, and fingers will be pointed in every direction. I am exaggerating slightly to make a point, but as humorous as this may sound, there is more than a kernel of truth in what I’ve stated.

This type of optimism accompanied the founding of AI. The founders dreamed with sugarplums in their heads, and we wanted to believe it. We wanted the world to be easier. We wanted intelligent machines to do the heavy lifting and drudgery of everyday chores. We did not have to envision it. The science-fiction writers of television series such as Star Trek envisioned it for us, and we wanted to believe that artificial life-forms, such as Lieutenant Commander Data on Star Trek: The Next Generation, were just a decade away. However, that is not what happened. The field of AI did not change the world overnight or even in a decade. Much like a ninja, it slowly and invisibly crept into our lives over the last half century, disguised behind “smart” applications.

After several starts and stops and two AI winters, AI researchers and engineers started to get it right. Instead of building a do-it-all intelligent machine, they focused on solving specific applications. To address the applications, researchers pursued various approaches for specific intelligent systems. After accomplishing that, they began to integrate the approaches, which brought us closer to artificial “general” intelligence, equal to human intelligence.

Many people not engaged in professional scientific research believe that scientists and engineers follow a strict orderly process, sometimes referred to as the “scientific method,” to develop and apply new technology. Let me dispel that paradigm. It is simply not true. In many cases a scientific field is approached via many different angles, and the approaches depend on the experience and paradigms of those involved. This is especially true in regard to AI research, as will soon become apparent.

The most important concept to understand is that no unifying theory guides AI research. Researchers disagree among themselves, and we have more questions than answers. Here are two major questions that still haunt AI research.

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

Why do the above questions still haunt AI? We will discuss this in the next post (Part 2/3).

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Close-up of a glowing microchip on a dark blue circuit board, highlighting intricate electronic components.

Moore’s Law As It Applies to Artificial Intelligence – Part 2/2

As previously mentioned in part 1 of this blog post, Moore’s law is not a physical law of science; rather, it may be considered a trend or a general rule. This raises the following question.

How Long Will Moore’s Law Hold?

There are numerous estimates regarding how long Moore’s law will hold. Since it is not a physical law, its applicability is routinely questioned. For approximately the last half century, estimates made at various points in time have repeatedly predicted that Moore’s law would hold for about another decade, and so far the trend has continued.

In 2005 Gordon Moore stated in an interview that Moore’s law “can’t continue forever. The nature of exponentials is that you push them out and eventually disaster happens.” Moore noted that transistors eventually would reach the limits of miniaturization at atomic levels. “In terms of size [of transistors] you can see that we’re approaching the size of atoms, which is a fundamental barrier, but it’ll be two or three generations before we get that far—but that’s as far out as we’ve ever been able to see. We have another 10 to 20 years before we reach a fundamental limit.”

However, new technologies are emerging that position individual molecules to replace transistors altogether. This means the computer’s “switches” will not be transistors but molecules; the positions of the molecules will serve as the switches. This technology is predicted to emerge by 2020 (Baptiste Waldner, Nanocomputers and Swarm Intelligence, 2008).

Some see Moore’s law extending far into the future. Lawrence Krauss and Glenn D. Starkman predicted an ultimate limit of around six hundred years (Lawrence M. Krauss, Glenn D. Starkman, “Universal Limits of Computation,” arXiv:astro-ph/0404510, May 10, 2004).

I worked in the semiconductor industry for more than thirty years, during which time Moore’s law always appeared as if it would reach an impenetrable barrier. This, however, did not happen. New technologies constantly seemed to provide a stay of execution. We know that at some point the trend may change, but no one really has made a definitive case as to when this trend will end. The difficulty in predicting the end has to do with how one interprets Moore’s law. If one takes Moore’s original interpretation, which defined the trend in terms of the number of transistors that could be put on an integrated circuit, the end point may be somewhere around 2018 to 2020. Defining it in terms of “data density of an integrated circuit,” however, as we did regarding AI, removes the constraint of transistors and opens up a new array of technologies, including molecular positioning.
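As a back-of-the-envelope illustration of the exponential trend behind any interpretation of Moore’s law, the short Python sketch below projects how a quantity grows if it doubles on a fixed schedule. The two-year doubling period and the starting value of 1 are assumptions chosen for illustration, not figures from the book.

```python
def projected_density(start_density, years, doubling_period_years=2.0):
    """Density after `years`, assuming it doubles every `doubling_period_years`."""
    return start_density * 2 ** (years / doubling_period_years)

# If the trend held, a chip holding 1 unit of data today would hold roughly
# a thousand times as much in 20 years (2**10 = 1024).
for years in (2, 10, 20):
    print(years, "years:", projected_density(1.0, years))
```

The arithmetic makes the stakes clear: whether the doubling comes from transistors, molecular positioning, or some other technology matters far less than whether the doubling itself continues.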

Will Moore’s law hold for another decade or another six hundred years? No one really knows the answer. Most people believe that eventually the trend will end, but when and why remain unanswered questions. If it does end, and Moore’s law no longer applies, another question emerges.

What Will Replace Moore’s Law?

Ray Kurzweil views Moore’s law in much the same way we defined it, not tied to specific technologies but rather as a “paradigm to forecast accelerating price-performance ratios.” From Kurzweil’s viewpoint:

 Moore’s law of Integrated Circuits was not the first, but the fifth paradigm to forecast accelerating price-performance ratios. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to [Newman’s] relay-based “[Heath] Robinson” machine that cracked the Lorenz cipher, to the CBS vacuum tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated circuit-based personal computer. (Raymond Kurzweil, “The Law of Accelerating Returns,” www.KurzweilAI.net)

In the wider sense, Moore’s law is not about transistors or specific technologies. In my opinion it is a paradigm related to humankind’s creativity. The new computers following Moore’s law may be based on some new type of technology (e.g., optical computers, quantum computers, DNA computing) that bears little to no resemblance to current integrated-circuit technology. It appears that what Moore really uncovered was humankind’s ability to cost-effectively accelerate technology performance.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

A pair of headphones hangs in front of a glowing red and white "ON AIR" sign in a radio studio.

Del Monte Radio Interview: The Artificial Intelligence Revolution

This is a recording of my discussion with John Counsell on “Late Night Counsell” AM580/CFRA Ottawa. The discussion was based on my new book, The Artificial Intelligence Revolution (2014). You can listen to the interview and call-ins at this URL: http://tunein.com/radio/Late-Night-Counsell-p50752. The page archives recent “Late Night Counsell” shows; to hear my interview, click on the June 26, 2014 show.