Artificial Intelligence Is Approaching Human Intelligence

According to Moore’s law, computer-processing power doubles every eighteen months. Applying Moore’s law and simple mathematics suggests that in ten years, the processing power of our personal computers will be over a hundred times greater than that of the computers we use today. Military and consumer products using top-of-the-line computers running state-of-the-art AI software will likely exceed our desktop computers’ performance by factors of ten. In effect, the artificial intelligence in those top-of-the-line machines will eventually equal, and may actually exceed, human intelligence.
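
To make the arithmetic concrete, here is a back-of-the-envelope sketch (illustrative only, and assuming a constant eighteen-month doubling period) of how that hundredfold figure falls out:

```python
# Back-of-the-envelope Moore's law projection (illustrative only).
doubling_period_years = 1.5   # assume performance doubles every eighteen months
horizon_years = 10            # look ten years ahead

doublings = horizon_years / doubling_period_years   # about 6.7 doublings
growth_factor = 2 ** doublings                      # roughly 100x

print(f"Doublings in {horizon_years} years: {doublings:.1f}")
print(f"Projected performance growth: about {growth_factor:.0f}x")
```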

Given the above, let us ask, “What should we expect from AI technology in ten years?” Here are some examples:

·       In military systems, expect autonomous weapons, including fighter drones, robotic Navy vessels, and robotic tanks.

·       In consumer products, expect personal computers that become digital assistants and even digital friends. Expect to be able to add “driverless” as an option to the car you buy. Expect productivity to increase by factors of ten in every human endeavor, as strong AI shoulders the “heavy lifting.”

·       In medical technology, expect surgical systems such as the da Vinci Surgical System, a robotic platform designed to expand the surgeon’s capabilities and offer a state-of-the-art minimally invasive option for major surgery, to become completely autonomous. Also expect serious, if not life-threatening, technical issues as the new surgical systems are introduced, similar to the legal issues that plagued the da Vinci Surgical System from 2012 through 2014. Expect prosthetic limbs to connect directly to your brain via your nervous system and perform as well as the organic limbs they replace. Expect new pharmaceutical products that cure (not just treat) cancer and Alzheimer’s disease. Expect human life expectancy to increase by decades. Expect brain implants (i.e., technology implanted into the brain) to become common, such as implants that rehabilitate stroke victims by bypassing the damaged area of the brain.

·       On the world stage, expect cybercrime and cyberterrorism to become the number one issues that technologically advanced countries like the United States will have to fight. Expect significant changes in employment: when robots embedded with strong AI computers can do the work currently performed by humans, it is not clear what type of work humans will do. Expect leisure to increase dramatically, and expect unemployment issues.

The above examples are just the tip of a mile-long spear, and they are highly likely to become realities. Most of what I cited is already off the drawing board and being tested. AI is already dramatically changing our lives, and I project it will approach human intelligence in the next ten years. This is arguably optimistic; the majority of researchers project that AI will reach human-level intelligence closer to mid-century. Therefore, expect AI to be equivalent to human intelligence between 2030 and 2050.

How Moore’s Law Ended the Second AI Winter

In our last post, I stated, “While AI as a field of research experienced funding surges and recessions, the infrastructure that ultimately fuels AI, namely integrated circuits and computer software, continued to follow Moore’s law. In the next post, we’ll discuss Moore’s law and its role in ending the second AI Winter.” This post describes how Moore’s law ended the second AI Winter.

Intel co-founder Gordon E. Moore was the first to note a peculiar trend: the number of components in integrated circuits had doubled every year from the 1958 invention of the integrated circuit until 1965. In 1970 Caltech professor, VLSI (i.e., Very-Large-Scale Integration) pioneer, and entrepreneur Carver Mead coined the term “Moore’s law,” referring to Gordon E. Moore’s observation, and the phrase caught on within the scientific community. In 1975, Moore revised his prediction regarding the number of components in integrated circuits doubling every year to doubling every two years. Intel executive David House noted that Moore’s latest prediction would cause computer performance to double every eighteen months due to the combination of more transistors and the transistors themselves becoming faster.

This means that while the research field of AI experienced surges and recessions, the fundamental building blocks of AI, namely integrated-circuit computer components, continued their exponential growth. Even today, Moore’s law is still applicable. In fact, many semiconductor companies use Moore’s law to plan their long-term product offerings. There is a deeply held belief in the semiconductor industry that companies must adhere to Moore’s law to remain competitive. In effect, it has become a self-fulfilling prophecy.

In the strictest sense, Moore’s law is not a physical law of science. Rather, it delineates a trend or a general rule. This raises a question: “How long will Moore’s law continue to apply?” For approximately the last half-century, estimates made at various points in time have each predicted that Moore’s law would hold for another decade.

I worked in the semiconductor industry for more than thirty years, including over twenty years as a director of engineering for Honeywell’s Solid State Electronics Center, which developed and manufactured state-of-the-art integrated circuits for computers, missiles, and satellites. As a director of engineering, I was responsible for developing some of the world’s most sophisticated integrated circuits and sensors. During my more than thirty years in the semiconductor industry, Moore’s law always appeared as if it would reach an impenetrable barrier. That, however, did not happen; new technologies constantly seemed to provide a stay of execution. We know that the trend may change at some point, but no one has really made a definitive case as to when it will end. The difficulty in predicting the end has to do with how one interprets Moore’s law. In my judgment, Moore’s law is not about integrated circuits; rather, it is an observation about human creativity as it relates to technology development. In fact, American author and Google director of engineering Ray Kurzweil showed via historical analysis that technological change is exponential. He termed this “The Law of Accelerating Returns” (Ray Kurzweil, The Age of Spiritual Machines, 1999).

As computer hardware and software continued their relentless exponential improvement, the AI field focused its development on “intelligent agents” or, as they are often called, “smart agents.” A smart agent is a system that interacts with its environment and takes calculated actions to achieve its goal. Smart agents can also be combined to form multi-agent systems, with a hierarchical control system to bridge lower-level AI systems to higher-level AI systems. This became the game changer. Using smart agents, AI technology has equaled and even exceeded human intelligence in specific areas, such as playing chess. However, the current state of AI technology still falls short of general human intelligence; this will change in the coming decades. We’ll discuss this further in the next post.
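
To make the idea more concrete, here is a minimal sketch of a smart agent and a two-level hierarchy (the class names, the thermostat example, and the observation format are hypothetical, chosen purely for illustration):

```python
# Minimal sketch of the "smart agent" idea (illustrative only; not from any specific library).

class Agent:
    """An agent observes its environment and chooses an action toward its goal."""
    def act(self, observation):
        raise NotImplementedError

class ThermostatAgent(Agent):
    """Toy low-level agent: keep the temperature near a set point."""
    def __init__(self, set_point):
        self.set_point = set_point

    def act(self, observation):
        return "heat_on" if observation["temperature"] < self.set_point else "heat_off"

class SupervisorAgent(Agent):
    """Toy higher-level agent: delegates to lower-level agents (hierarchical control)."""
    def __init__(self, sub_agents):
        self.sub_agents = sub_agents

    def act(self, observation):
        return {name: agent.act(observation) for name, agent in self.sub_agents.items()}

# Usage: a two-level hierarchy with a single low-level agent.
supervisor = SupervisorAgent({"thermostat": ThermostatAgent(set_point=20.0)})
print(supervisor.act({"temperature": 18.5}))   # -> {'thermostat': 'heat_on'}
```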

What Caused the Second “AI Winter”?

In our last post, we stated, “When the early founders of AI set extremely high expectations, they invited scrutiny. With the passing years, it became obvious that the reality of artificial intelligence fell short of their predictions. In 1974, funding for AI research began to dry up, both in the United States and Britain, which led to a period called the ‘AI Winter,’ and optimism regarding AI turned to skepticism. The first AI Winter lasted until the early 1980s.”

In the early 1980s, researchers in AI began to abandon the monumental task of developing strong AI and began to focus on expert systems. An expert system, in this context, is a computer system that emulates the decision-making ability of a human expert. This meant the computer software allowed the machine to “think” equivalently to an expert in a specific field, chess for example. Expert systems became a highly successful development path for AI. By the mid-1980s, the funding faucet for AI research was flowing at more than a billion dollars per year.
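
To illustrate the flavor of an expert system, here is a toy rule-based sketch (the rules, symptom names, and function are hypothetical and chosen only for illustration; real expert systems of the era encoded far larger hand-built rule bases):

```python
# Toy rule-based "expert system" sketch (illustrative only).
# An expert system encodes if-then rules elicited from a human expert
# and applies them to the facts at hand.

RULES = [
    ({"fever", "cough"}, "suspect flu"),
    ({"fever", "rash"}, "suspect measles"),
    ({"cough"}, "suspect common cold"),
]

def diagnose(symptoms):
    """Fire the first rule whose conditions are all present in the observed facts."""
    for conditions, conclusion in RULES:
        if conditions <= symptoms:          # rule fires if all its conditions hold
            return conclusion
    return "no conclusion"

print(diagnose({"fever", "cough", "headache"}))   # -> suspect flu
```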

Unfortunately, the funding faucet began to run dry again by 1987, starting with the failure of the Lisp machine market that year. MIT AI Lab programmers Richard Greenblatt and Thomas Knight, who formed the company Lisp Machines Inc., developed the Lisp machine in 1973. The Lisp machine was the first commercial, single-user, high-end microcomputer; it used the Lisp programming language (a specific high-level language) to tackle technical applications.

Lisp machines pioneered many now-commonplace technologies, including laser printing, windowing systems, and high-resolution bit-mapped graphics. However, the market reception for these machines was dismal, with only about seven thousand units sold by 1988, at about $70,000 per machine. In addition, Lisp Machines Inc. suffered from severe internal politics regarding how to improve its market position, which caused divisions in the company. To make matters worse, cheaper desktop PCs soon were able to run Lisp programs even faster than Lisp machines. Most companies that produced Lisp machines went out of business by 1990, which led to a second and longer-lasting AI Winter.

If you are getting the impression that being an AI researcher from the 1960s through the late 1990s was akin to riding a roller coaster, your impression is correct. Life for AI researchers during that timeframe was a feast-or-famine existence.

While AI as a field of research experienced funding surges and recessions, the infrastructure that ultimately fuels AI, namely integrated circuits and computer software, continued to follow Moore’s law. In the next post, we’ll discuss Moore’s law and its role in ending the second AI Winter.

What Caused the First “AI Winter”?

The real science of artificial intelligence (AI) began with a small group of researchers: John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon. In 1956, these researchers founded the field of artificial intelligence at a conference held at Dartmouth College. Their work and their students’ work soon amazed the world, as their programs taught computers to solve algebraic word problems, prove logical theorems, and even speak English.

By the mid-1960s, the Department of Defense began pouring money into AI research. Along with this funding came unprecedented optimism and expectations regarding the capabilities of AI technology. In 1965, Herbert Simon helped fuel that optimism by predicting, “Machines will be capable, within twenty years, of doing any work a man can do.” In 1967, Minsky not only agreed but also added, “Within a generation…the problem of creating ‘artificial intelligence’ will substantially be solved.”

Had the early founders been correct in their predictions, all human toil would have ceased by now, and our civilization would be a compendium of technological wonder. One could speculate that every person would have a robotic assistant to ease their way through daily chores, including cleaning their house, driving them to any destination, and handling anything else that fills daily life with toil. However, as you know, that is not the case.

Obviously, Simon and Minsky had grossly underestimated the level of hardware and software required to achieve AI that replicates the intelligence of a human brain (i.e., strong artificial intelligence). Strong AI is also synonymous with general AI. Unfortunately, underestimating the level of hardware and software required to achieve strong artificial intelligence continues to plague AI research even today.

When the early founders of AI set extremely high expectations, they invited scrutiny. With the passing years, it became obvious that the reality of artificial intelligence fell short of their predictions. In 1974, funding for AI research began to dry up, both in the United States and Britain, which led to a period called the “AI Winter,” and optimism regarding AI turned to skepticism.

The first AI Winter lasted until the early 1980s. In the next post, we’ll discuss the second AI Winter.

The Pace Of Warfare Is Increasing From Hyperwar To C-War

In my latest book, War At The Speed Of Light, I coined a new term, “c-war.” What follows is an excerpt from the book’s introduction that explains the rationale behind this term.

The pace of warfare is accelerating. In fact, according to the Brookings Institution, a nonprofit public policy organization, “So fast will be this process [command and control decision-making], especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyperwar.”

The term “hyperwar” adequately describes the quickening pace of warfare resulting from the inclusion of AI in the command, control, decision-making, and weapons of war. However, to my mind, it fails to capture the speed of conflict associated with directed-energy weapons. To be all-inclusive, I would like to suggest the term “c-war.” In Einstein’s famous mass-energy equivalence equation, E = mc², the letter “c” denotes the speed of light in a vacuum. (For completeness, E denotes energy and m denotes mass.) Surprisingly, the speed of light in the Earth’s atmosphere is almost equal to its speed in a vacuum. On this basis, I believe c-war more fully captures the new pace of warfare associated with directed-energy weapons.
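
As a quick sanity check on that claim, here is a small illustrative calculation (the refractive index of air, roughly 1.0003 near sea level, is an assumed approximate value):

```python
# Illustrative check: light in air travels at nearly its vacuum speed.
c_vacuum = 299_792_458   # speed of light in a vacuum, m/s (exact by definition)
n_air = 1.0003           # approximate refractive index of air near sea level (assumed)

c_air = c_vacuum / n_air
slowdown_percent = (1 - c_air / c_vacuum) * 100

print(f"Speed of light in air: {c_air:,.0f} m/s")
print(f"Slower than in a vacuum by only about {slowdown_percent:.3f}%")
```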