While the phrase “artificial intelligence” is only about half a century old, the concept of intelligent thinking machines and artificial beings dates back to ancient times. For example, the Greek myth of Talos of Crete tells of a giant bronze man who protected Europa in Crete from pirates and invaders by circling the island’s shores three times daily. Ancient Egyptians and Greeks worshiped animated cult images and humanoid automatons. By the nineteenth and twentieth centuries, intelligent artificial beings had become common in fiction. Perhaps the best-known such work is Mary Shelley’s Frankenstein, first published anonymously in London in 1818 (Mary Shelley’s name first appeared on the second edition, published in 1823). The stories of these “intelligent beings” often spoke to the same hopes and concerns we currently face regarding artificial intelligence.
Logical reasoning, sometimes referred to as “mechanical reasoning,” also has ancient roots, dating back at least to classical Greek philosophers and mathematicians such as Pythagoras and Heraclitus. The idea that mathematical problems can be solved by following a rigorous logical path of reasoning eventually led to computer programming. The British mathematician, logician, cryptanalyst, and computer scientist Alan Turing (1912–1954) suggested that a machine could simulate any mathematical deduction by using sequences of “0” and “1” (binary code).
The Birth of Artificial Intelligence
Discoveries in neurology, information theory, and cybernetics inspired a small group of researchers—including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon—to consider the possibility of building an electronic brain. In 1956 these researchers founded the field of artificial intelligence at a conference held at Dartmouth College. Their work—and the work of their students—soon amazed the world, as their programs solved algebraic word problems, proved logical theorems, and even spoke English.
AI research soon caught the eye of the US Department of Defense (DOD), and by the mid-1960s the DOD was heavily funding AI research. Along with this funding came a new level of optimism. At that time Herbert Simon predicted, “Machines will be capable, within twenty years, of doing any work a man can do,” and Minsky not only agreed but added that “within a generation…the problem of creating ‘artificial intelligence’ will substantially be solved.”
Obviously both had underestimated the hardware and software required to replicate the intelligence of a human brain. By setting extremely high expectations, however, they invited scrutiny. As the years passed, it became clear that the reality of artificial intelligence fell short of their predictions. In 1974 funding for AI research began to dry up, in both the United States and Britain, leading to a period now called the “AI winter.”
In the early 1980s, AI research began to resurface with the success of expert systems: computer systems that emulate the decision-making ability of a human expert. The software was programmed to “think” like an expert in a specific field, typically by following explicit rules drawn from that expert’s knowledge, rather than the general-purpose procedures of conventional programming. By 1985 the funding faucet for AI research had been turned back on and was soon flowing at more than a billion dollars per year.
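To make the idea concrete, here is a minimal sketch (in Python, not taken from Del Monte’s book) of how such a system can work: the expert’s judgment is captured as explicit if-then rules, and the program simply reports the conclusion of every rule that matches the facts at hand. The rules and the diagnose function below are invented purely for illustration.

    # Minimal illustrative sketch of a rule-based expert system.
    # Each rule pairs a condition on observed "facts" with an expert's conclusion,
    # mimicking how 1980s expert systems encoded a specialist's if-then knowledge.

    RULES = [
        (lambda f: f.get("fever") and f.get("rash"), "possible measles"),
        (lambda f: f.get("fever") and not f.get("rash"), "possible flu"),
        (lambda f: not f.get("fever"), "unlikely to be an acute infection"),
    ]

    def diagnose(facts):
        """Return the conclusion of every rule whose condition matches the facts."""
        return [conclusion for condition, conclusion in RULES if condition(facts)]

    # Example: a patient with a fever but no rash.
    print(diagnose({"fever": True, "rash": False}))  # -> ['possible flu']

Real expert systems of the era added an inference engine that chained many such rules together, but the basic principle is the same: domain knowledge lives in the rules, not in the program’s control flow.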
However, the faucet began to run dry again by 1987, starting with the collapse of the Lisp machine market that year. The Lisp machine had been developed at the MIT AI Lab beginning in 1973 by programmers Richard Greenblatt and Thomas Knight; Greenblatt later formed the company Lisp Machines Inc. to commercialize it. The machine was the first commercial, single-user, high-end microcomputer, built to run Lisp (a high-level programming language), and in a sense it was the first commercial single-user workstation designed for technical and scientific applications.
Although Lisp machines pioneered many now-commonplace technologies, including laser printing, windowing systems, computer mice, and high-resolution bit-mapped graphics, the market reception for these machines was dismal: only about seven thousand units had been sold by 1988, at a price of about $70,000 per machine. In addition, Lisp Machines Inc. was divided by severe internal disputes over how to improve its market position. To make matters worse, cheaper desktop PCs were soon able to run Lisp programs even faster than Lisp machines could. Most companies that produced Lisp machines went out of business by 1990, leading to a second, longer-lasting AI winter.
In the second segment of this post, we will discuss “Hardware Plus Software Synergy.”
Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte