Tag Archives: The Artificial Intelligence Revolution

A humanoid robot with an extended hand under the text 'The Artificial Intelligence Revolution' questioning AI's role in serving or replacing humans.

Louis Del Monte Artificial Intelligence Revolution Radio Interview Recording – WFAS May 13, 2014

This is a recording of my radio interview with Brian Orlando on 1230 WFAS AM (serving the New York City area). The radio show is “Orlando In The Morning,” and the interview aired at 8:03 a.m. EST on May 13, 2014. Click the link below to listen:

Louis Del Monte Radio Interview on WFAS (New York City) 5-13-14

The interview was based on my new book, The Artificial Intelligence Revolution: Will Artificial Intelligence Serve Us or Replace Us? (available on Amazon.com in both paperback and Kindle editions).

Digital face composed of binary code, symbolizing artificial intelligence and data processing in a blue-toned futuristic design.

Artificial Intelligence – The Rise of Intelligent Agents – Part 2/3

In our last post, part 1, we stated that two major questions still haunt AI research:

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

Why do the above questions still haunt AI? Let us take some examples.

  • Similar types of questions arose in other scientific fields. For example, in the early stages of aeronautics, engineers questioned whether flying machines should incorporate bird biology. Eventually bird biology proved to be a dead end and irrelevant to aeronautics.
  • When it comes to solving problems, humans rely heavily on experience and augment it with reasoning. In business, for example, every problem encountered has numerous solutions, and the solution chosen is biased by the paradigms of those involved. If, for example, the problem is related to increasing the production of a product being manufactured, some managers may add more people to the work force, some may work at improving efficiency, and some may do both. I have long held the belief that for every problem we face in industry, there are at least ten solutions, and eight of them, although different, yield equivalent results. Looking at the previous example, you may be tempted to believe improving efficiency is a superior (i.e., more elegant) solution to increasing the work force. Improving efficiency, however, costs time and money; in many cases it is more expedient to increase the work force. My point is that humans approach a problem by drawing on accumulated life experiences, which may not relate directly to the specific problem, and augmenting those experiences with reasoning. Given the way human minds work, it is only natural to ask whether intelligent machines will have to approach problem solving in a similar way, namely by solving numerous unrelated problems as a path to the specific solution required.

Scientific work in AI dates back to the 1940s, long before the AI field had an official name. Early research in the 1940s and 1950s focused on attempting to simulate the human brain by using rudimentary cybernetics (i.e., control systems). Control systems use a two-step approach to controlling their environment.

  1. An action by the system generates some change in its environment.
  2. The system senses that change (i.e., feedback), which triggers the system to change in response.

A simple example of this type of control system is a thermostat. If you set it for a specific temperature, for example 72 degrees Fahrenheit, and the temperature drops below the set point, the thermostat will turn on the furnace. If the temperature increases above the set point, the thermostat will turn off the furnace. However, during the 1940s and 1950s, the entire area of brain simulation and cybernetics was a concept ahead of its time. While elements of these fields would survive, the approach of brain simulation and cybernetics was largely abandoned as access to computers became available in the mid-1950s.
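To make the two-step loop concrete, here is a minimal Python sketch of a thermostat-style control system, assuming a crude room model in which the furnace raises the temperature and the room otherwise drifts cooler; the class name, threshold, and temperature values are illustrative, not taken from the book.

```python
# Minimal sketch of the two-step control loop described above:
# the system acts on its environment, senses the resulting change
# (feedback), and responds. Names and values are illustrative.

class Thermostat:
    def __init__(self, set_point_f=72.0):
        self.set_point_f = set_point_f
        self.furnace_on = False

    def step(self, sensed_temp_f):
        """Step 2: sense the environment and respond to the feedback."""
        if sensed_temp_f < self.set_point_f:
            self.furnace_on = True    # too cold: turn the furnace on
        else:
            self.furnace_on = False   # at or above set point: turn it off
        return self.furnace_on


def simulate(hours=6):
    """Step 1: the furnace's action changes the room temperature,
    which the thermostat senses on the next step."""
    thermostat = Thermostat(set_point_f=72.0)
    temp_f = 68.0                     # starting room temperature
    for hour in range(hours):
        furnace_on = thermostat.step(temp_f)
        # Crude environment model: heat when on, drift cooler when off.
        temp_f += 2.0 if furnace_on else -1.0
        print(f"hour {hour}: temp={temp_f:.1f}F furnace={'on' if furnace_on else 'off'}")


if __name__ == "__main__":
    simulate()
```

The essential point is the feedback cycle: the furnace’s action changes the room, and the sensed change drives the next decision.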

In the next and concluding post, we will discuss the impact computers had on the development of artificial intelligence.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Digital face composed of binary code, symbolizing artificial intelligence and data processing in a blue-toned futuristic design.

Artificial Intelligence – The Rise of Intelligent Agents – Part 1/3

The road to intelligent machines has been difficult, filled with hairpin curves, steep hills, crevices, potholes, intersections, stop signs, and occasionally smooth and straight sections. The initial over-the-top optimism of AI founders John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon set unrealistic expectations. According to their predictions, by now every household should have its own humanoid robot to cook, clean, and do yard work and every other conceivable household task we humans perform.

During the course of my career, I have managed hundreds of scientists and engineers. In my experience they are, for the most part, overly optimistic as a group. When they say something is finished, it usually means it’s in the final stages of testing or inspection. When they say they will have a problem solved in a week, it usually means a month or more. Whatever schedules they give us—the management—we normally have to pad, sometimes doubling them, before we use them to plan or give them to our clients. It is just part of their nature to be optimistic, believing the tasks associated with the goals will go without a hitch, or that the solution to a problem is just one experiment away. Often if you ask a simple question, you’ll receive the “theory of everything” as a reply. If the question relates to a problem, the answer will involve the history of humankind, and fingers will be pointed in every direction. I am exaggerating slightly to make a point, but as humorous as this may sound, there is more than a kernel of truth in what I’ve stated.

This type of optimism accompanied the founding of AI. The founders dreamed with sugarplums in their heads, and we wanted to believe it. We wanted the world to be easier. We wanted intelligent machines to do the heavy lifting and drudgery of everyday chores. We did not have to envision it. The science-fiction writers of television series such as Star Trek envisioned it for us, and we wanted to believe that artificial life-forms, such as Lieutenant Commander Data on Star Trek: The Next Generation, were just a decade away. However, that is not what happened. The field of AI did not change the world overnight or even in a decade. Much like a ninja, it slowly and invisibly crept into our lives over the last half century, disguised behind “smart” applications.

After several starts and stops and two AI winters, AI researchers and engineers started to get it right. Instead of building a do-it-all intelligent machine, they focused on solving specific applications. To address the applications, researchers pursued various approaches for specific intelligent systems. After accomplishing that, they began to integrate the approaches, which brought us closer to artificial “general” intelligence, equal to human intelligence.

Many people not engaged in professional scientific research believe that scientists and engineers follow a strict orderly process, sometimes referred to as the “scientific method,” to develop and apply new technology. Let me dispel that paradigm. It is simply not true. In many cases a scientific field is approached via many different angles, and the approaches depend on the experience and paradigms of those involved. This is especially true in regard to AI research, as will soon become apparent.

The most important concept to understand is that no unifying theory guides AI research. Researchers disagree among themselves, and we have more questions than answers. Here are two major questions that still haunt AI research.

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

We will address these questions in the next post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

A glowing green digital face composed of circuit board patterns on a dark, tech-themed background.

The Beginning of Artificial Intelligence – Part 2/2 (Conclusion)

AI research funding was a roller-coaster ride from the mid-1960s through about the mid-1990s, experiencing incredible highs and lows. By the late 1990s through the early part of the twenty-first century, however, AI research began a resurgence, finding new applications in logistics, data mining, medical diagnosis, and numerous areas throughout the technology industry. Several factors led to this success.

  • Computer hardware computational power was now getting closer to that of a human brain (i.e., in the best case, about 10 to 20 percent of a human brain).

  • Engineers placed emphasis on solving specific problems that did not require AI to be as flexible as a human brain.
  • New ties between AI and other fields working on similar problems were forged.

AI was definitely on the upswing. AI itself, however, was not being spotlighted. It was now cloaked behind the application, and a new phrase found its way into our vocabulary: the “smart (fill in the blank)”—for example the “smartphone.” Here are some of the more visible accomplishments of AI over the last fifteen years.
    • In 1997 IBM’s chess-playing computer Deep Blue became the first computer to beat world-class chess champion Garry Kasparov. In a six-game match, Deep Blue prevailed by two wins to one, with three draws. Until this point no computer had been able to beat a chess grand master. This win garnered headlines worldwide and was a milestone that embedded the reality of AI into the consciousness of the average person.
    • In 2005 a robot conceived and developed at Stanford University was able to drive autonomously for 131 miles along an unrehearsed desert trail, winning the DARPA Grand Challenge (the government’s Defense Advanced Research Projects Agency prize for a driverless vehicle).
    • In 2007 Boss, Carnegie Mellon University’s self-driving SUV, made history by swiftly and safely driving fifty-five miles in an urban setting while sharing the road with human drivers and won the DARPA Urban Challenge.
    • In 2010 Microsoft launched the Kinect motion sensor, which provides a 3-D body-motion interface for Xbox 360 games and Windows PCs. According to Guinness World Records since 2000, the Kinect holds the record for the “fastest-selling consumer electronics device” after selling eight million units in its first sixty days (in the early part of 2011). By January 2012 twenty-four million Kinect sensors had been shipped.
    • In 2011, on an exhibition match on the popular TV quiz show Jeopardy!, an IBM computer named Watson defeated Jeopardy!’s greatest champions, Brad Rutter and Ken Jennings.
    • In 2010 and 2011, Apple made Siri voice-recognition software available in the Apple app store for various applications, such as integrating it with Google Maps. In the latter part of 2011, Apple integrated Siri into the iPhone 4S and removed the Siri application from its app store.
    • In 2012 “scientists at Universidad Carlos III in Madrid…presented a new technique based on artificial intelligence that can automatically create plans, allowing problems to be solved with much greater speed than current methods provide when resources are limited. This method can be applied in sectors such as logistics, autonomous control of robots, fire extinguishing and online learning” (www.phys.org, “A New Artificial Intelligence Technique to Speed the Planning of Tasks When Resources Are Limited”).

The above list shows just some of the highlights. AI is now all around us—in our phones, computers, cars, microwave ovens, and almost any consumer or commercial electronic systems labeled “smart.” Funding is no longer solely controlled by governments but is now being underpinned by numerous consumer and commercial applications.

The road to the “expert system” and the “smart (anything)” ran through specific, well-defined applications. By the first decade of the twenty-first century, expert systems had become commonplace. It became normal to talk to a computer when ordering a pharmaceutical prescription and to expect your smartphone or automobile navigation system to give you turn-by-turn directions to the pharmacy. AI clearly was becoming an indispensable element of society in highly developed countries. One ingredient, however, continued to be missing. That ingredient was human affects (i.e., the feeling and expression of human emotions). If you called the pharmacy for a prescription, the AI program did not show any empathy. If you talked with a real person at the pharmacy, he or she likely would express empathy, perhaps saying something such as, “I’m sorry you’re not feeling well. We’ll get this prescription filled right away.” If you missed a turn on your way to the pharmacy while getting turn-by-turn directions from your smartphone, it did not get upset or scold you. It simply either told you to make a U-turn or calculated a new route for you.

While it became possible to program some rudimentary elements to emulate human emotions, the computer did not genuinely feel them. For example the computer program might request, “Please wait while we check to see if we have that prescription in stock,” and after some time say, “Thank you for waiting.” However, this was just rudimentary programming to mimic politeness and gratitude. The computer itself felt no emotion.

By the end of the first decade of the twenty-first century, AI slowly had worked its way into numerous elements of modern society. AI cloaked itself in expert systems, which became commonplace. Along with advances in software and hardware, our expectations continued to grow. Waiting thirty seconds for a computer program to do something seemed like an eternity. Getting the wrong directions from a smartphone rarely occurred. Indeed, with the advent of GPS (Global Positioning System, a space-based satellite navigation system), your smartphone gave you directions as well as the exact position of your vehicle and estimated how long it would take for you to arrive at your destination.

Those of us, like me, who worked in the semiconductor industry knew this outcome—the advances in computer hardware and the emergence of expert systems—was inevitable. Even consumers had a sense of the exponential progress occurring in computer technology. Many consumers complained that their new top-of-the-line computer soon would be a generation behind in as little as two years, meaning that the next generation of faster, more capable computers was available and typically selling at a lower price than their original computers.

This point became painfully evident to those of us in the semiconductor industry. For example, in the early 1990s, semiconductor companies bought their circuit designers workstations (i.e., computer systems that emulate the decision-making ability of a human integrated-circuit-design engineer) at roughly $100,000 per workstation. Within about two years, you could buy the same level of computing capability in the consumer market for a relatively small fraction of the cost. We knew this would happen because integrated circuits had been relentlessly following Moore’s law since their inception. What is Moore’s law? I’ll discuss this in the next post.
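As a rough illustration of why that price collapse was predictable, here is a back-of-the-envelope Python sketch assuming the common reading of Moore’s law (a doubling roughly every two years); the function name and figures are illustrative, not the author’s calculations.

```python
# Back-of-the-envelope sketch of exponential growth under Moore's law,
# assuming transistor counts (and, loosely, compute per dollar) double
# roughly every two years. Figures below are illustrative only.

def moores_law_growth(years, doubling_period_years=2.0):
    """Return the growth factor after `years` of doubling."""
    return 2 ** (years / doubling_period_years)

# After two years, comparable computing power costs roughly half as much,
# which is the effect described above for the $100,000 workstations.
for years in (2, 4, 10):
    factor = moores_law_growth(years)
    print(f"after {years:2d} years: ~{factor:.0f}x the capability, "
          f"or ~1/{factor:.0f} the cost for equal capability")
```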

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Image: iStockPhoto.com (licensed)

Digital illustration of a human head with a microchip embedded in the forehead, symbolizing AI or brain-computer interface technology.

The Beginning of Artificial Intelligence – Part 1/2

While the phrase “artificial intelligence” is only about half a century old, the concept of intelligent thinking machines and artificial beings dates back to ancient times. For example the Greek myth “Talos of Crete” tells of a giant bronze man who protected Europa in Crete from pirates and invaders by circling the island’s shores three times daily. Ancient Egyptians and Greeks worshiped animated cult images and humanoid automatons. By the nineteenth and twentieth centuries, intelligent artificial beings became common in fiction. Perhaps the best-known work of fiction depicting this is Mary Shelley’s Frankenstein, first published anonymously in London in 1818 (Mary Shelley’s name appeared on the second edition, published in France in 1823). In addition the stories of these “intelligent beings” often spoke to the same hopes and concerns we currently face regarding artificial intelligence.

Logical reasoning, sometimes referred to as “mechanical reasoning,” also has ancient roots, dating back at least to classical Greek philosophers and mathematicians such as Pythagoras and Heraclitus. The concept that mathematical problems are solvable by following a rigorous logical path of reasoning eventually led to computer programming. The British mathematician, logician, cryptanalyst, and computer scientist Alan Turing (1912–1954), for example, suggested that a machine could simulate any mathematical deduction by using “0” and “1” sequences (binary code).
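As a toy illustration of that idea, the Python sketch below checks a classic deduction mechanically, using nothing but 0s and 1s; it is a conceptual sketch only, not Turing’s actual construction, and the function names are mine.

```python
# Toy illustration of the idea that logical reasoning can be reduced to
# mechanical operations on 0s and 1s. The machine "checks" the classic
# deduction (p and (p -> q)) -> q by enumerating every 0/1 assignment.

from itertools import product

def implies(a, b):
    """Material implication over bits: 0 only when a=1 and b=0."""
    return 0 if (a == 1 and b == 0) else 1

def deduction_always_holds():
    for p, q in product((0, 1), repeat=2):
        premise = p & implies(p, q)      # p AND (p -> q)
        if implies(premise, q) == 0:     # does the deduction ever fail?
            return False
    return True

print(deduction_always_holds())  # True: the deduction holds for every 0/1 assignment
```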

The Birth of Artificial Intelligence

Discoveries in neurology, information theory, and cybernetics inspired a small group of researchers—including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon—to begin to consider the possibility of building an electronic brain. In 1956 these researchers founded the field of artificial intelligence at a conference held at Dartmouth College. Their work—and the work of their students—soon amazed the world, as their computer programs solved algebraic word problems, proved logical theorems, and even spoke English.

AI research soon caught the eye of the US Department of Defense (DOD), and by the mid-1960s, the DOD was heavily funding AI research. Along with this funding came a new level of optimism. At that time Herbert Simon predicted, “Machines will be capable, within twenty years, of doing any work a man can do,” and Minsky not only agreed but also added that “within a generation…the problem of creating ‘artificial intelligence’ will substantially be solved.”

Obviously both had underestimated the level of hardware and software required for replicating the intelligence of a human brain. By setting extremely high expectations, however, they invited scrutiny. With the passing years, it became obvious that the reality of artificial intelligence fell short of their predictions. In 1974 funding for AI research began to dry up, both in the United States and Britain, which led to a period called the “AI winter.”

In the early 1980s, AI research began to resurface with the success of expert systems, computer systems that emulate the decision-making ability of a human expert. This meant the computer software was programmed to “think” like an expert in a specific field rather than follow the more general procedure of a software developer, which is the case in conventional programming. By 1985 the funding faucet for AI research had been turned back on and was soon flowing at more than a billion dollars per year.
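To illustrate what “thinking like an expert” meant in practice, here is a minimal rule-based Python sketch in the spirit of an expert system: if-then rules capture a slice of an expert’s knowledge, and a simple inference loop applies them to known facts. The domain, rule set, and function names are hypothetical and not drawn from any particular system described in the book.

```python
# Minimal sketch of the rule-based idea behind expert systems: encode an
# expert's if-then knowledge and apply it to facts, rather than hand-coding
# a general procedure. Domain, rules, and facts are hypothetical.

RULES = [
    # (condition over known facts, conclusion to add)
    (lambda f: f.get("fever") and f.get("cough"), "possible_flu"),
    (lambda f: f.get("possible_flu") and f.get("high_risk_patient"), "refer_to_physician"),
]

def forward_chain(facts):
    """Repeatedly fire rules until no new conclusions are added."""
    facts = dict(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in RULES:
            if condition(facts) and not facts.get(conclusion):
                facts[conclusion] = True
                changed = True
    return facts

print(forward_chain({"fever": True, "cough": True, "high_risk_patient": True}))
# Adds 'possible_flu' and then 'refer_to_physician' to the known facts.
```

The design choice is the point: the knowledge lives in the rules, so an expert can extend the system without rewriting the inference loop.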

However, the faucet again began to run dry by 1987, starting with the failure of the Lisp machine market that same year. The Lisp machine was developed in 1973 by MIT AI lab programmers Richard Greenblatt and Thomas Knight, who formed the company Lisp Machines Inc. This machine was the first commercial, single-user, high-end microcomputer and used Lisp programming (a specific high-level programming language). In a sense it was the first commercial, single-user workstation (i.e., an extremely advanced computer) designed for technical and scientific applications.

Although Lisp machines pioneered many commonplace technologies, including laser printing, windowing systems, computer mice, and high-resolution bit-mapped graphics, to name a few, the market reception for these machines was dismal, with only about seven thousand units sold by 1988, at a price of about $70,000 per machine. In addition Lisp Machines Inc. suffered from severe internal politics regarding how to improve its market position, which caused divisions in the company. To make matters worse, cheaper desktop PCs soon were able to run Lisp programs even faster than Lisp machines. Most companies that produced Lisp machines went out of business by 1990, which led to a second and longer-lasting AI winter.

In the second segment of this post, we will discuss hardware plus software synergy.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

A humanoid robot with an extended hand under the text 'The Artificial Intelligence Revolution' questioning AI's role in serving or replacing humans.

The Artificial Intelligence Revolution – Will Artificial Intelligence Serve Us Or Replace Us?

This post is taken from the introduction of my new book, The Artificial Intelligence Revolution. Enjoy!

This book is a warning. Through this medium I am shouting, “The singularity is coming.” The singularity (as first described by John von Neumann in 1955) represents a point in time when intelligent machines will greatly exceed human intelligence. It is, by way of analogy, the start of World War III. The singularity has the potential to set off an intelligence explosion that can wield devastation far greater than nuclear weapons. The message of this book is simple but critically important. If we do not control the singularity, it is likely to control us. Our best artificial intelligence (AI) researchers and futurists are unable to accurately predict what a postsingularity world may look like. However, almost all AI researchers and futurists agree it will represent a unique point in human evolution. It may be the best step in the evolution of humankind or the last step. As a physicist and futurist, I believe humankind will be better served if we control the singularity, which is why I wrote this book.

Unfortunately the rise of artificial intelligence has been almost imperceptible. Have you noticed the word “smart” being used to describe machines? Often “smart” means “artificial intelligence.” However, few products are being marketed with the phrase “artificial intelligence.” Instead they are simply called “smart.” For example you may have a “smart” phone. It does not just make and answer phone calls. It will keep a calendar of your scheduled appointments, remind you to go to them, and give you turn-by-turn driving directions to get there. If you arrive early, the phone will help you pass the time while you wait. It will play games with you, such as chess, and depending on the level of difficulty you choose, you may win or lose the game. In 2011 Apple introduced a voice-activated personal assistant, Siri, on its latest iPhone and iPad products. You can ask Siri questions, give it commands, and even receive responses. Smartphones appear to increase our productivity as well as enhance our leisure. Right now they are serving us, but all that may change.

The smartphone is an intelligent machine, and AI is at its core. AI is the new scientific frontier, and it is slowly creeping into our lives. We are surrounded by machines with varying degrees of AI, including toasters, coffeemakers, microwave ovens, and late-model automobiles. If you call a major pharmacy to renew a prescription, you likely will never talk with a person. The entire process will occur with the aid of a computer with AI and voice synthesis.

The word “smart” also has found its way into military phrases, such as “smart bombs,” which are satellite-guided weapons such as the Joint Direct Attack Munition (JDAM) and the Joint Standoff Weapon (JSOW). The US military always has had a close symbiotic relationship with computer research and its military applications. In fact the US Air Force, starting in the 1960s, has heavily funded AI research. Today the air force is collaborating with private industry to develop AI systems to improve information management and decision making for its pilots. In late 2012 the science website www.phys.org reported a breakthrough by AI researchers at Carnegie Mellon University. Carnegie Mellon researchers, funded by the US Army Research Laboratory, developed an AI surveillance program that can predict what a person “likely” will do in the future by using real-time video surveillance feeds. This is the premise behind the CBS television program Person of Interest.

AI has changed the cultural landscape. Yet the change has been so gradual that we hardly have noticed the major impact it has. Some experts, such as Ray Kurzweil, an American author, inventor, futurist, and the director of engineering at Google, predict that in about fifteen years, the average desktop computer will have a mind of its own, literally. This computer will be your intellectual equal and will even have a unique personality. It will be self-aware. Instead of just asking simple questions about the weather forecast, you may be confiding your deepest concerns to your computer and asking it for advice. It will have migrated from personal assistant to personal friend. You likely will give it a name, much in the same way we name our pets. You will be able to program its personality to have interests similar to your own. It will have face-recognition software, and it will recognize you and call you by name, similar to the computer HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey. The conversations between you and your “personal friend” will appear completely normal. Someone in the next room who is not familiar with your voice will not be able to tell which voice belongs to the computer and which voice belongs to you.

By approximately the mid-twenty-first century, Kurzweil predicts, the intelligence of computers will exceed that of humans, and a $1,000 computer will match the processing power of all human brains on Earth. Although, historically, predictions regarding advances in AI have tended to be overly optimistic, all indications are that Kurzweil is on target.

Many philosophical and legal questions will emerge regarding computers with artificial intelligence equal to or greater than that of the human mind (i.e., strong AI). Here are just a few questions we will ask ourselves after strong AI emerges:

  • Are strong-AI machines (SAMs) a new life-form?
  • Should SAMs have rights?
  • Do SAMs pose a threat to humankind?

It is likely that during the latter half of the twenty-first century, SAMs will design new and even more powerful SAMs, with AI capabilities far beyond our ability to comprehend. They will be capable of performing a wide range of tasks, which will displace many jobs at all levels in the work force, from bank tellers to neurosurgeons. New medical devices using AI will help the blind to see and the paralyzed to walk. Amputees will have new prosthetic limbs, with AI plugged directly into their nervous systems and controlled by their minds. The new prosthetic limbs will not only replicate the lost limbs but also be stronger, more agile, and superior in ways we cannot yet imagine. We will implant computer devices into our brains, expanding human intelligence with AI. Humankind and intelligent machines will begin to merge into a new species: cyborgs. It will happen gradually, and humanity will believe AI is serving us.

Computers with strong AI in the late twenty-first century, however, may see things differently. We may appear to those machines much the same way bees in a beehive appear to us today. We know we need bees to pollinate crops, but we still consider bees insects. We use them in agriculture, and we gather their honey. Although bees are essential to our survival, we do not offer to share our technology with them. If wild bees form a beehive close to our home, we may become concerned and call an exterminator.

Will the SAMs in the latter part of the twenty-first century become concerned about humankind? Our history proves we have not been a peaceful species. We have weapons capable of destroying all of civilization. We squander and waste resources. We pollute the air, rivers, lakes, and oceans. We often apply technology (such as nuclear weapons and computer viruses) without fully understanding the long-term consequences. Will SAMs in the late twenty-first century determine it is time to exterminate humankind or persuade humans to become cyborgs (i.e., humans with brains enhanced by implanted artificial intelligence and potentially having organ and limb replacements from artificially intelligent machines)? Will humans embrace the prospect of becoming cyborgs? Becoming a cyborg offers the opportunity to attain superhuman intelligence and abilities. Disease and wars may be just events stored in our memory banks and no longer pose a threat to cyborgs. As cyborgs we may achieve immortality.

According to David Hoskins’s 2009 article, “The Impact of Technology on Health Delivery and Access” (www.workers.org/2009/us/sickness_1231):

An examination of Centers for Disease Control statistics reveals a steady increase in life expectancy for the U.S. population since the start of the 20th century. In 1900, the average life expectancy at birth was a mere 47 years. By 1950, this had dramatically increased to just over 68 years. As of 2005, life expectancy had increased to almost 78 years.

Hoskins attributes increased life expectancy to advances in medical science and technology over the last century. With the advent of strong AI, life expectancy likely will increase to the point that cyborgs approach immortality. Is this the predestined evolutionary path of humans?

This may sound like a B science-fiction movie, but it is not. The reality of AI becoming equal to that of a human mind is almost at hand. By the latter part of the twenty-first century, the intelligence of SAMs likely will exceed that of humans. The evidence that they may become malevolent exists now, which I discuss later in the book. Attempting to control a computer with strong AI that exceeds current human intelligence manyfold may be a fool’s errand.

Imagine you are a grand master chess player teaching a ten-year-old to play chess. What chance does the ten-year-old have to win the game? We may find ourselves in that scenario at the end of this century. A computer with strong AI will find a way to survive. Perhaps it will convince humans it is in their best interest to become cyborgs. Its logic and persuasive powers may be not only compelling but also irresistible.

Artificial intelligence is an embryonic reality today, but it is improving exponentially. By the end of the twenty-first century, we will have only one question regarding artificial intelligence: Will it serve us or replace us?

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte