Artificial intelligence (AI) surrounds us. However, much the same way we seldom read billboards as we drive, we seldom recognize AI. Even though we use technology like our car's GPS to get directions, we do not recognize that AI is at its core. Our phones use AI to remind us of appointments or engage us in a game of chess. However, we seldom, if ever, use the phrase “artificial intelligence.” Instead, we use the term “smart.” This is not the result of some master plan by the technology manufacturers. It is more a statement about the status of the technology.
From the late 1990s through the early part of the twenty-first century, AI research underwent a resurgence. Smart agents found new applications in logistics, data mining, medical diagnosis, and numerous other areas throughout the technology industry. Several factors led to this success:
- The computational power of computer hardware was approaching that of a human brain (in the best case, about 10 to 20 percent of a human brain's capacity).
- Engineers placed emphasis on solving specific problems that did not require AI to be as flexible as a human brain.
- New ties were forged between AI and other fields working on similar problems.

AI was definitely on the upswing. AI itself, however, was not in the spotlight. It lay cloaked within the application, and a new phrase found its way into our vocabulary: the “smart (fill in the blank),” as in the “smartphone.”
AI is now all around us, in our phones, computers, cars, microwave ovens, and almost any commercial or military system labeled “smart.” According to Nick Bostrom, a University of Oxford philosopher known for his work on superintelligence risks, “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore” (“AI Set to Exceed Human Brainpower,” CNN.com, July 26, 2006). Ray Kurzweil agrees: “Many thousands of AI applications are deeply embedded in the infrastructure of every industry” (Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology [2005]). These statements make two important points:
- AI is now part of every aspect of human endeavor, from consumer goods to weapons of war, but the applications are seldom credited to AI.
- AI funding is now broadly underpinned by both government and commercial applications.
AI startups raised $73.4 billion in total funding in 2020 according to data gathered by StockApps.com. Well-established companies like Google are spending tens of billions on AI infrastructure. Google has also spent hundreds of millions on secondary AI business pursuits, such as driverless cars, wearable technology (Google Glass), humanlike robotics, high-altitude Internet broadcasting balloons, contact lenses that monitor glucose in tears, and even an effort to solve death.
In essence, the fundamental trend in both consumer and military AI systems is toward complete autonomy. Today, for example, one in every three US fighter aircraft is a drone. Today’s drones are under human control, but the next generation of fighter drones will be almost completely autonomous. Driverless cars, now a novelty, will become common. You may find this difficult or even impossible to believe. However, look at today’s AI applications. The US Navy plans to deploy unmanned surface vehicles (USVs) not only to protect navy ships but also, for the first time, to autonomously “swarm” hostile vessels offensively. In my latest book, War at the Speed of Light, I devoted a chapter to autonomous directed energy weapons. Here is an excerpt:
The reasons for building autonomous directed energy weapons are identical to those for other autonomous weapons. According to Military Review, the professional journal of the US Army, “First, autonomous weapons systems act as a force multiplier. That is, fewer warfighters are needed for a given mission, and the efficacy of each warfighter is greater. Next, advocates credit autonomous weapons systems with expanding the battlefield, allowing combat to reach into areas that were previously inaccessible. Finally, autonomous weapons systems can reduce casualties by removing human warfighters from dangerous missions.”
What is making this all possible? It is the relentless exponential growth in computer performance. According to Moore’s law, computer processing power doubles roughly every eighteen months. Applying Moore’s law with simple arithmetic suggests that in ten years, the processing power of our personal computers will be more than a hundred times greater than that of the computers we currently use. Military and consumer products built on top-of-the-line computers running state-of-the-art AI software will likely exceed our desktop computer performance by factors of ten. In effect, the artificial intelligence in those systems may be equivalent to human intelligence. However, will it be equivalent to human judgment? I fear not, and autonomous weapons may lead to unintended conflicts, conceivably even World War III.
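To see where the hundred-times figure comes from, here is a minimal sketch of the arithmetic, assuming the eighteen-month doubling period cited above (the function name and constant are mine, for illustration only):

```python
# Minimal sketch of the Moore's law arithmetic above.
# Assumes the eighteen-month doubling period cited in the text.

DOUBLING_PERIOD_YEARS = 1.5  # assumed doubling interval (18 months)


def projected_speedup(years: float) -> float:
    """Return the factor by which processing power grows after `years`."""
    doublings = years / DOUBLING_PERIOD_YEARS
    return 2 ** doublings


if __name__ == "__main__":
    # Ten years is roughly 6.7 doublings, i.e., a bit over a 100x increase.
    print(f"Projected 10-year speedup: {projected_speedup(10):.0f}x")
```

Ten years works out to about 6.7 doublings, or roughly a 100-fold increase, which is where the “more than a hundred times greater” claim comes from.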
I recognize that this last paragraph represents dark speculation on my part. So let me ask you: What do you think?