AI research funding was a roller-coaster ride from the mid-1960s through about the mid-1990s, experiencing incredible highs and lows. From the late 1990s through the early part of the twenty-first century, however, AI research enjoyed a resurgence, finding new applications in logistics, data mining, medical diagnosis, and numerous areas throughout the technology industry. Several factors led to this success.

  • The computational power of computer hardware was getting closer to that of a human brain (in the best case, roughly 10 to 20 percent of a human brain's capacity).
  • Engineers placed emphasis on solving specific problems that did not require AI to be as flexible as a human brain.
  • New ties were forged between AI and other fields working on similar problems.

AI was definitely on the upswing. AI itself, however, was no longer in the spotlight. It was now cloaked behind the application, and a new phrase found its way into our vocabulary: the “smart (fill in the blank),” as in the “smartphone.” Here are some of the more visible accomplishments of AI over the last fifteen years.
    • In 1997 IBM’s chess-playing computer Deep Blue became the first computer to defeat a reigning world chess champion, Garry Kasparov, in a match under standard tournament conditions. In the six-game match, Deep Blue prevailed by two wins to one, with three draws. This win garnered headlines worldwide and was a milestone that embedded the reality of AI into the consciousness of the average person.
    • In 2005 a robot conceived and developed at Stanford University was able to drive autonomously for 131 miles along an unrehearsed desert trail, winning the DARPA Grand Challenge (the government’s Defense Advanced Research Projects Agency prize for a driverless vehicle).
    • In 2007 Boss, Carnegie Mellon University’s self-driving SUV, made history by swiftly and safely driving fifty-five miles in an urban setting while sharing the road with human drivers and won the DARPA Urban Challenge.
    • In 2010 Microsoft launched the Kinect motion sensor, which provides a 3-D body-motion interface for Xbox 360 games and Windows PCs. According to Guinness World Records, the Kinect holds the record for the “fastest-selling consumer electronics device,” having sold eight million units in its first sixty days on the market (by early 2011). By January 2012 twenty-four million Kinect sensors had been shipped.
    • In 2011, in an exhibition match on the popular TV quiz show Jeopardy!, an IBM computer named Watson defeated Jeopardy!’s greatest champions, Brad Rutter and Ken Jennings.
    • In 2010 and 2011, the Siri voice-recognition application was available in Apple’s App Store, with integrations such as Google Maps. In the latter part of 2011, Apple integrated Siri into the iPhone 4S and removed the stand-alone Siri application from the App Store.
    • In 2012 “scientists at Universidad Carlos III in Madrid…presented a new technique based on artificial intelligence that can automatically create plans, allowing problems to be solved with much greater speed than current methods provide when resources are limited. This method can be applied in sectors such as logistics, autonomous control of robots, fire extinguishing and online learning” (www.phys.org, “A New Artificial Intelligence Technique to Speed the Planning of Tasks When Resources Are Limited”).

The above list shows just some of the highlights. AI is now all around us: in our phones, computers, cars, microwave ovens, and almost any consumer or commercial electronic system labeled “smart.” Funding is no longer controlled solely by governments; it is now underpinned by numerous consumer and commercial applications.

The road to the “expert system” and the “smart (anything)” ran through specific, well-defined applications. By the first decade of the twenty-first century, expert systems had become commonplace. It became normal to talk to a computer when ordering a pharmaceutical prescription and to expect your smartphone or automobile navigation system to give you turn-by-turn directions to the pharmacy. AI clearly was becoming an indispensable element of society in highly developed countries. One ingredient, however, was still missing: human affects (i.e., the feeling and expression of human emotions). If you called the pharmacy for a prescription, the AI program did not show any empathy. If you talked with a real person at the pharmacy, he or she likely would express empathy, perhaps saying something such as, “I’m sorry you’re not feeling well. We’ll get this prescription filled right away.” If you missed a turn on your way to the pharmacy while getting turn-by-turn directions from your smartphone, it did not get upset or scold you. It simply told you to make a U-turn or calculated a new route for you.

While it became possible to program some rudimentary elements that emulate human emotions, the computer did not genuinely feel them. For example, the computer program might request, “Please wait while we check to see if we have that prescription in stock,” and after some time say, “Thank you for waiting.” However, this was just rudimentary programming to mimic politeness and gratitude. The computer itself felt no emotion.
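
To make the distinction concrete, here is a minimal sketch of how such scripted politeness might be implemented. It is my own illustration, not code from any real pharmacy system; the `PrescriptionLine` class and its canned phrases are hypothetical. The courtesy lives entirely in prewritten strings emitted at fixed points in the interaction.

```python
# Minimal sketch of scripted "politeness": canned courtesy phrases are
# printed at fixed points in the call flow. Nothing here models or
# experiences an emotional state. Class and phrases are hypothetical.
import time


class PrescriptionLine:
    """Hypothetical automated pharmacy line with scripted courtesy."""

    WAIT_PHRASE = "Please wait while we check to see if we have that prescription in stock."
    THANKS_PHRASE = "Thank you for waiting."

    def __init__(self, inventory):
        # inventory maps a drug name to units in stock
        self.inventory = inventory

    def check_stock(self, drug):
        print(self.WAIT_PHRASE)      # canned politeness, not felt
        time.sleep(1)                # stand-in for a slow database lookup
        in_stock = self.inventory.get(drug, 0) > 0
        print(self.THANKS_PHRASE)    # canned gratitude, not felt
        return in_stock


if __name__ == "__main__":
    line = PrescriptionLine({"amoxicillin": 12})
    print("In stock:", line.check_stock("amoxicillin"))
```

Nowhere in a program like this is there a variable representing how the system “feels”; the politeness is simply text.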

By the end of the first decade of the twenty-first century, AI slowly had worked its way into numerous elements of modern society. AI cloaked itself in expert systems, which became commonplace. Along with advances in software and hardware, our expectations continued to grow. Waiting thirty seconds for a computer program to do something seemed like an eternity. Getting the wrong directions from a smartphone rarely occurred. Indeed, with the advent of GPS (Global Positioning System, a space-based satellite navigation system), your smartphone gave you directions as well as the exact position of your vehicle and estimated how long it would take for you to arrive at your destination.
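
As a rough illustration of the kind of arrival estimate such a navigation system produces, here is a back-of-the-envelope sketch of my own: remaining distance divided by an assumed average speed. Real systems work from per-road-segment speeds and live traffic data, so the function and numbers below are illustrative only.

```python
# Back-of-the-envelope ETA: remaining distance divided by an assumed
# average speed. Real navigation systems use per-segment speeds and live
# traffic; this simplification is for illustration only.
def estimate_eta_minutes(remaining_km: float, avg_speed_kmh: float) -> float:
    if avg_speed_kmh <= 0:
        raise ValueError("average speed must be positive")
    return remaining_km / avg_speed_kmh * 60.0


if __name__ == "__main__":
    # e.g., 12 km remaining at an average of 40 km/h -> 18 minutes
    print(f"ETA: {estimate_eta_minutes(12, 40):.0f} minutes")
```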

Those of us who worked in the semiconductor industry knew that this outcome, the advances in computer hardware and the emergence of expert systems, was inevitable. Even consumers had a sense of the exponential progress occurring in computer technology. Many consumers complained that their new top-of-the-line computer would be a generation behind in as little as two years, meaning that the next generation of faster, more capable computers was available and typically selling at a lower price than their original computers.

This point became painfully evident to those of us in the semiconductor industry. For example, in the early 1990s, semiconductor companies bought their circuit designers workstations (i.e., computer systems that emulated the decision-making ability of a human integrated-circuit design engineer) at roughly $100,000 per workstation. In about two years, you could buy the same level of computing capability in the consumer market for a small fraction of the cost. We knew this would happen because integrated circuits had been relentlessly following Moore’s law since their inception. What is Moore’s law? I’ll discuss it in the next post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Image: iStockPhoto.com (licensed)