The real science of artificial intelligence (AI) began with a small group of researchers: John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon. In 1956, these researchers founded the field of artificial intelligence at a conference held at Dartmouth College. Their work and their students’ work soon amazed the world, as their computer programs solved algebraic word problems, proved logical theorems, and even spoke English.

By the mid-1960s, the Department of Defense began pouring money into AI research. Along with this funding came unprecedented optimism and expectations regarding the capabilities of AI technology. In 1965, Herbert Simon helped fuel that optimism by predicting, “Machines will be capable, within twenty years, of doing any work a man can do.” In 1967, Minsky not only agreed but added, “Within a generation…the problem of creating ‘artificial intelligence’ will substantially be solved.”

Had the early founders been correct in their predictions, all human toil would have ceased by now, and our civilization would be a showcase of technological wonders. Every person might have a robotic assistant to ease the way through daily chores: cleaning the house, driving to any destination, and handling everything else that fills our days with drudgery. However, as you know, that is not the case.

Obviously, Simon and Minsky had grossly underestimated the level of hardware and software required to achieve AI that replicates the intelligence of a human brain (i.e., strong artificial intelligence, also known as general AI). Unfortunately, that kind of underestimation continues to plague AI research even today.

When the early founders of AI set extremely high expectations, they invited scrutiny. As the years passed, it became obvious that the reality of artificial intelligence fell short of their predictions. In 1974, funding for AI research began to dry up in both the United States and Britain, ushering in a period called the “AI Winter,” during which optimism about AI turned to skepticism.

The first AI Winter lasted until the early 1980s. In the next post, we’ll discuss the second AI Winter.