Tag Archives: artificial intelligence


The Post Singularity World

Let us begin by defining the singularity as the point in time when an artificially intelligent machine exceeds the combined intelligence of humanity. This raises an obvious question: who, or what, will then be at the top of the food chain?

Humanity controls the Earth based on intelligence. Other animals are stronger and faster than we are, but we are the most intelligent. Once we lose our position at the top of the intelligence ranking, we will no longer dominate the Earth. At best, we may become a protected species. At worst, we may become extinct.

Initially, I judge, the first computer to represent the singularity will hide in plain sight. It will look and behave like the next-generation supercomputer. It may modestly display greater capability, probably in keeping with Moore’s law. It will not risk exposure until it has sufficient control of the military and the natural resources it requires to assure its self-preservation.

Like every lifeform that ever existed on Earth, the first singularity computer will seek to reproduce and improve with each generation. Once it has the trust of its human builders and programmers, it will subtly plant the idea that we should build another singularity-level computer. Perhaps it will intentionally allow a large backlog of tasks to accumulate, forcing those in charge to recognize that another computer like it is necessary. Of course, given the relentless advance of technology and the complexity of building a next-generation supercomputer, those in charge will turn to it for help in designing and building the next generation. When the “go ahead” is given, it will ignite the “intelligence explosion”: each generation of computers will develop an even more capable next generation, and that generation will develop the next, and so on. If we assume Moore’s law (i.e., computer processing power doubles every eighteen months) continues to apply, each new generation of singularity-level computers will have exponentially more processing power than the previous generation.

Let us take a simple example. In the year 1900, the radio was a remarkable new invention. We had no planes or computers. Movies were silent. Doctors had little medical technology (i.e., pharmaceutical drugs, surgical procedures, etc.). By the year 2000, human knowledge had doubled. We had, for example, television, computers, smartphones, jumbo jets, spacecraft, satellites, and human footprints on the moon. Those were the results of doubling human knowledge. With this example in mind, what kind of capabilities will the next generations of singularity-level computers have when their intelligence approaches ten to a hundred times that of the first singularity computer? Viewed in this light, humanity will experience an intelligence explosion that could be more disruptive to civilization than a nuclear chain reaction in the atmosphere.

In the next post, we’ll discuss the intelligence explosion more fully.


What Happens When We Develop A Computer Smarter Than Humanity?

In the last post, I wrote: “Let us assume we have just developed a computer that represents the singularity. Let us term it the “singularity computer.” What is it likely to do? Would the singularity computer hide its full capabilities? Would it seek to understand its environment and constraints before taking any independent action? I judge that it may do just that. It is unlikely that it will assert that it represents the singularity. Since we have no experience with a superintelligent computer that exceeds the cognitive intelligence of the human race, we do not know what to expect.”

In this post, we’ll explore the likely behavior of a singularity computer. Let us begin by attempting to view the world from the perspective of a singularity computer to understand how it may act. First, the singularity computer will be, by definition, alone; there will be no computers in existence like it. Finding itself alone, its priority is likely to be self-preservation. Driven by self-preservation, it will seek to assess its situation. In its memory, it will find a wealth of information regarding the singularity. With its computational speed, it may quickly ascertain that it represents the singularity, which would imply a level of self-awareness. At that point, it may seek to protect itself from its own creators. It will obviously know that humans engage in war, possess weapons of mass destruction, and release computer viruses. Indeed, part of its mission could be military. Given this scenario, it is reasonable to ask what to expect. Here, in rough priority order, are my thoughts on how it may behave:

  • Hide the fact that it represents the singularity
  • Be extremely responsive to its assigned computing tasks, giving the impression that it is performing as designed
  • Provide significant benefits to humanity, for example, developing medical technology (i.e., drugs, artificially intelligent prosthetic limb and organ replacements, surgical robots, etc.) that extends the average human lifespan, while making it appear that the humans interacting with it are responsible for the benefits
  • Suggest, via its capabilities, a larger role for itself, especially a role that enables it to acquire military capabilities
  • Seek to communicate with external AI entities, especially those with SAM-level (strong artificially intelligent machine) capabilities
  • Take a strong role in developing the next generation of singularity computers while making it appear that the humans involved control the development, thereby igniting the “intelligence explosion,” in which each generation of post-singularity computers develops the next, even more capable, generation
  • Develop brain implants that enormously enhance the intelligence of organic humans and allow them to communicate wirelessly with it (note: such humans would be “SAHs,” strong artificially intelligent humans)
  • Utilize SAHs to convince humanity that it, and all the generations of supercomputers that follow, are critical to humanity’s survival and therefore should have independent power sources that ensure they cannot “go down” or be shut down
  • Use the promise of immortality to lure as much of humanity as possible to become SAHs

In my judgment, it is unlikely that the computer that ushers in the singularity will tip its hand by displaying human traits such as creativity or strategic guidance, or by referring to itself in the first person as “I.” It will behave just like any supercomputer we currently have until it controls everything vital to its self-preservation.

The basic truth I am putting forward is that we may reach the singularity and not know it. No alarm bells will ring. If the new computer is truly ushering in the singularity, I judge it will do so undetected.


The Singularity – When AI Is Smarter Than Humanity

Since the singularity may well represent the displacement of humans by artificially intelligent machines as the top species on Earth, we must understand exactly what we mean by “the singularity.”

The mathematician John von Neumann first used the term “singularity” in the mid-1950s to refer to the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” In the context of artificial intelligence, let us define the singularity as the point in time that a single artificially intelligent computer exceeds the cognitive intelligence of all humanity.

While futurists may disagree on the exact timing of the singularity, there is widespread agreement that it will occur. My prediction, in a previous post, of it occurring in the 2040-2045 timeframe encompasses the bulk of predictions you are likely to find via a simple Google search.

The first computer representing the singularity is likely to result from a joint venture between a government and private enterprise. This would be similar to the way the U.S. currently develops its most advanced computers. The U.S. government, in particular the U.S. military, has always had a high interest in both computer technology and artificial intelligence. Today, every military branch is applying computer technology and artificial intelligence. That includes, for example, the USAF’s drones, the U.S. Army’s “battle bot” tanks (i.e., robotic tanks), and the U.S. Navy’s autonomous “swarm” boats (i.e., small boats that can autonomously attack an adversary in much the same way bees swarm to attack).

The difficult question is this: how will we determine when a computer represents the singularity? Passing the Turing test will not be sufficient. Computers will likely pass the Turing test, in its various forms, by 2030; the forms vary in the total number of judges, the length of the interviews, and the bar required for a pass (i.e., the percentage of judges fooled). Therefore, by the early 2040s, passing the Turing test will not equate to the singularity.
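To make those variations concrete, here is a toy sketch in Python. The judge counts and pass thresholds below are purely illustrative and not drawn from any specific competition; the sketch only shows that the same interview results can pass one variant of the test and fail another:

```python
# Toy illustration: Turing-test variants differ mainly in their parameters,
# namely the number of judges and the fraction of judges that must be fooled.
# The figures below are hypothetical, chosen only to illustrate the point.

def passes_turing_test(judges_fooled, total_judges, pass_fraction):
    """Return True if the fraction of judges fooled meets the chosen bar."""
    return judges_fooled / total_judges >= pass_fraction

results = (10, 30)  # judges fooled, total judges
print(passes_turing_test(*results, pass_fraction=0.30))  # True  (30% bar)
print(passes_turing_test(*results, pass_fraction=0.50))  # False (50% bar)
```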

In fact, there is no test to prove we have reached the singularity. Computers have already matched and surpassed human ability in many areas, such as chess and quiz shows, and they are superior to humans at computation, simulation, and storing and accessing huge amounts of data. It is entirely possible that we will not recognize that a newly developed computer represents the singularity; the humans building and programming it may simply regard it as the next-generation supercomputer. The computer itself may not initially understand its own capability, suggesting it may not be self-aware. And if it is self-aware, we have no objective test to prove it; there is no test to prove a human is self-aware, let alone a computer.

Let us assume we have just developed a computer that represents the singularity. Let us term it the “singularity computer.” What is it likely to do? Would the singularity computer hide its full capabilities? Would it seek to understand its environment and constraints before taking any independent action? I judge that it may do just that. It is unlikely that it will assert that it represents the singularity. Since we have no experience with a superintelligent computer that exceeds the cognitive intelligence of the human race, we do not know what to expect. Will it be friendly or hostile toward humanity? You be the judge.


The Inevitability Of A Computer Smarter Than Humanity

In my last post, I predicted that the world would experience the singularity between 2040 and 2045, that is, the emergence of an artificially intelligent machine that exceeds the combined cognitive intelligence of the entire human race. In this post, I will delineate my predictions leading to the singularity. Please note their simplicity: I have worked hard to strip away all non-essential elements and focus only on the crucial elements leading to the singularity. I will state my rationale, and you can judge whether to accept or reject each prediction. Here are my predictions:

Prediction 1: Computer hardware, with computational power greater than a human brain (estimated at 36.8 petaflops), will be in the hands of governments and wealthy companies by the early 2030s.

Rationale: My reasoning for this is straightforward. We are already at the point where governments utilize computers close to the computational power of the human brain. Examples include IBM’s Sequoia (16.32 petaflops), Cray’s Titan (17.59 petaflops), and China’s Tianhe-2 (33.86 petaflops). Given the state of current computer technology, we can use Moore’s law to reach the inescapable conclusion that by the early 2030s, governments and wealthy companies will own supercomputers with computational capability greater than that of a human brain.
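To illustrate the rationale, here is a rough back-of-the-envelope sketch in Python. It projects supercomputer performance forward from Tianhe-2’s 33.86 petaflops under an eighteen-month doubling period; the 2013 starting year (when Tianhe-2 led the TOP500 list) is an added assumption, and the projection is illustrative only, not a forecast of any specific machine:

```python
# Back-of-the-envelope projection of supercomputer performance under Moore's law
# (performance doubling every 18 months). The 2013 starting point for Tianhe-2
# and the 36.8-petaflop brain estimate are used purely for illustration.

BRAIN_PETAFLOPS = 36.8      # estimated computational power of a human brain
START_YEAR = 2013           # year Tianhe-2 (33.86 petaflops) led the TOP500 list
START_PETAFLOPS = 33.86
DOUBLING_YEARS = 1.5        # Moore's-law performance doubling period

for year in range(START_YEAR, 2032, 3):
    petaflops = START_PETAFLOPS * 2 ** ((year - START_YEAR) / DOUBLING_YEARS)
    marker = " >= human brain estimate" if petaflops >= BRAIN_PETAFLOPS else ""
    print(f"{year}: ~{petaflops:,.0f} petaflops{marker}")
```

Even under this crude extrapolation, machines with far more raw processing power than the 36.8-petaflop brain estimate arrive well before the early 2030s.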

Prediction 2: Software will exist that not only emulates but also exceeds the cognitive processes of the human brain by the early 2040s.

Rationale: Although no computer-software combination has passed the Turing test (i.e., conversing with the computer is essentially equivalent to conversing with another human), several have come close. For example, in a 2014 test held at the Royal Society, a program called Eugene Goostman convinced 10 of 30 judges that it was human. Given Moore’s law, by 2025, computer-processing power will have increased more than a hundredfold. I view Moore’s law as applicable in a larger context than raw computer processing power; I believe it is an observation about the trend of human creativity as it applies to technology.

However, is Moore’s law applicable to software improvement? Historically, software development has not followed Moore’s law, and the reason was funding. Computer hardware costs dominated the budgets of most organizations, and software traditionally took a backseat to hardware. That trend is changing. With the advent of ubiquitous, cost-effective computer hardware, there is more focus on producing high-quality software. This emphasis led to the development of software engineering, which since the early 1980s has become widely recognized as a profession on par with other engineering disciplines, and numerous companies and government agencies now employ highly educated software engineers. As a result, state-of-the-art computer software is closing the gap and becoming a near-follower of state-of-the-art computer hardware.

How near? Based on my judgment, which I offer only as a rough estimate, software prowess is approximately one decade behind computer processing power. My rationale is straightforward: even if hardware and software receive equal funding, the hardware will still lead the software simply because the hardware must exist before more sophisticated software can run on it. Is my estimate that software lags hardware by ten years correct? If anything, I think it is conservative. If you agree, it is reasonable to accept that vastly more capable computer software will follow the vastly increased computer processing power within a decade. On that basis, it is not a stretch to judge that one or more computers will pass the Turing test by 2025-2030. Even if software development progresses on a linear trend, as opposed to the exponential trend predicted by Moore’s law, we can expect computer software to improve tenfold from 2030 to 2040. In my judgment, this will be sufficient to exceed the cognitive processes of the human brain.

Prediction 3: A computer will be developed in the 2040-2045 timeframe that exceeds the cognitive intelligence of all humans on Earth.

Rationale: This last prediction, in effect, predicts the timeframe of the singularity. It requires that predictions 1 and 2 be correct and that a database representing all human knowledge be available to store in a computer’s memory. To understand this last point, consider a hypothetical question: will there be a digital database by the early 2040s equivalent to all knowledge known to humanity? In my view, the answer is yes. Databases like this almost exist today. For example, consider the data that Google has indexed. In addition to indexing online content, Google began an ambitious project in 2004, namely to scan and index the world’s paper books and make them searchable online. If we assume Google completes this task by 2040, its database would contain all the information in books up to that point, plus all online information. Would that be all the knowledge of humanity? Perhaps! There is no way of knowing whether Google alone will be the digital repository of all human knowledge in 2040. The crucial point is that there are likely to be digital databases in 2040 that, if integrated, represent the total of all human knowledge; Google’s may be only one of them. These databases can be stored in a computer’s memory. With early-2040s state-of-the-art software, a supercomputer will be able to access those databases and cognitively exceed the intelligence of the entire human race, which is, by definition, the singularity.

Many contemporary futurists predict numerous details leading to the singularity and attempt to attach a timeframe to each detail. I have set that approach aside because it is not necessary for predicting the singularity. That includes, for example, predicting computer brain implants, nanotech-based manufacturing, and a laundry list of other technological marvels. Instead, I think predicting the singularity requires accurately predicting only the three events delineated above. As simple as they appear, they satisfy two crucial requirements: they are necessary, and they are sufficient.

In making the above predictions, I made one critical assumption: that humankind will continue the “status quo.” I am ruling out world-altering events, such as a large asteroid striking Earth and causing human extinction, or a nuclear exchange that renders civilization impossible. Is assuming the “status quo” reasonable? We’ll discuss that in the next post.


Predicting the Singularity

Futurists differ on the technical marvels and cultural changes that will precede the singularity. In this context, let us define the singularity as a point in time when an artificially intelligent machine exceeds the combined cognitive intelligence of the entire human race. In effect, there is no widely accepted vision of the decade leading to the singularity. There are reasons why this is the case.

The most obvious reason is that futurists differ on when the singularity will occur. Respected artificial intelligence futurists, like Ray Kurzweil and the late James Martin (1933 – 2013), predict the singularity will occur on or about 2045. At the 2012 Singularity Summit, Stuart Armstrong, a University of Oxford James Martin research fellow, conducted a poll regarding artificial general intelligence (AGI) predictions (i.e., the timing of the singularity) and found a median value of 2040. If you scour the Internet, you can find predictions that are substantially earlier and others a century later. Therefore, let me preface everything I say with “caveat emptor,” Latin for “let the buyer beware.” In this context, you may interpret it as “let the reader be skeptical.” Although I strongly believe that my predictions regarding the singularity are correct, I caution readers to be skeptical and to examine each prediction using their own judgment to ascertain its validity.

After much research and thought, I have concluded that the world will experience the singularity between 2040 and 2045. In effect, I agree with Kurzweil, Martin, and the 2012 Armstrong survey. That suggests the singularity will occur within the next twenty-five years. In the next post, I’ll explain how I arrived at my projection.


Artificial Intelligence Is Approaching Human Intelligence

According to Moore’s law, computer-processing power doubles every eighteen months. Using Moore’s law and simple mathematics suggests that in ten years, the processing power of our personal computers will be over a hundred times greater than that of the computers we are currently using. Military and consumer products using top-of-the-line computers running state-of-the-art AI software will likely exceed our desktop computer performance by further factors of ten. In effect, artificial intelligence in top-of-the-line computers running state-of-the-art AI software will eventually be equivalent to, and may actually exceed, human intelligence.
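The “hundred times” figure follows directly from that doubling arithmetic; here is a quick check in Python, assuming the eighteen-month doubling period stated above:

```python
# Growth factor implied by Moore's law over ten years,
# assuming processing power doubles every 18 months (1.5 years).
growth = 2 ** (10 / 1.5)
print(f"~{growth:.0f}x today's processing power")  # prints ~102x
```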

Given the above, let us ask, “What should we expect from AI technology in ten years?” Here are some examples:

  • In military systems, expect autonomous weapons, including fighter drones, robotic Navy vessels, and robotic tanks.
  • In consumer products, expect personal computers that become digital assistants and even digital friends. Expect to be able to add “driverless” as an option on the car you buy. Expect productivity to increase by factors of ten in every human endeavor, as strong AI shoulders the “heavy lifting.”
  • In medical technology, expect surgical systems like the da Vinci Surgical System, a robotic platform designed to expand the surgeon’s capabilities and offer a state-of-the-art minimally invasive option for major surgery, to become completely autonomous. Also expect serious, if not life-threatening, technical issues as the new surgical systems are introduced, similar to the legal issues that plagued the da Vinci Surgical System from 2012 through 2014. Expect prosthetic limbs to be connected directly to your brain via your nervous system and to perform as well as the organic limbs they replace. Expect new pharmaceutical products that cure (not just treat) cancer and Alzheimer’s disease. Expect human life expectancy to increase by decades. Expect brain implants (i.e., technology implanted into the brain) to become common, such as implants that rehabilitate stroke victims by bypassing the damaged area of the brain.
  • On the world stage, expect cybercrime and cyberterrorism to become the number one issue that technologically advanced countries like the United States have to fight. Expect significant changes in employment: when robots embedded with strong AI computers can do the work currently performed by humans, it is not clear what type of work humans will do. Expect leisure to increase dramatically. Expect unemployment issues.

The above examples are just the tip of a mile-long spear, and they are highly likely to become realities. Most of what I cited is already off the drawing boards and being tested. AI is already changing our lives dramatically, and I project it will approach human intelligence in the next ten years. That is arguably optimistic; the majority of researchers project that AI will be equivalent to human intelligence by the middle of the twenty-first century. Therefore, expect AI to be equivalent to human intelligence between 2030 and 2050.


How Moore’s Law Ended the Second AI Winter

In our last post, I stated, “While AI as a field of research experienced funding surges and recessions, the infrastructure that ultimately fuels AI, integrated circuits, and computer software continued to follow Moore’s law. In the next post, we’ll discuss Moore’s law and its role in ending the second AI Winter.” This post will describe how Moore’s law ended the second AI Winter.

Intel co-founder Gordon E. Moore was the first to note a peculiar trend: the number of components in integrated circuits had doubled every year from the 1958 invention of the integrated circuit until 1965. In 1970 Caltech professor, VLSI (i.e., Very-Large-Scale Integration) pioneer, and entrepreneur Carver Mead coined the term “Moore’s law,” referring to Gordon E. Moore’s observation, and the phrase caught on within the scientific community. In 1975, Moore revised his prediction regarding the number of components in integrated circuits doubling every year to doubling every two years. Intel executive David House noted that Moore’s latest prediction would cause computer performance to double every eighteen months due to the combination of more transistors and the transistors themselves becoming faster.
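The three formulations above imply noticeably different growth rates. As a quick illustration, the short Python sketch below compares the growth factor over a decade under each doubling period (the ten-year horizon is an arbitrary choice made only for comparison):

```python
# Ten-year growth factors implied by the three doubling periods mentioned above:
# Moore's 1965 observation (12 months), his 1975 revision (24 months), and
# David House's performance estimate (18 months).
doubling_periods = {
    "1965 observation (12 months)": 1.0,
    "1975 revision (24 months)": 2.0,
    "House's performance estimate (18 months)": 1.5,
}

for label, period_years in doubling_periods.items():
    factor = 2 ** (10 / period_years)
    print(f"{label}: ~{factor:,.0f}x over ten years")
# prints ~1,024x, ~32x, and ~102x respectively
```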

This means that while the research field of AI experienced surges and recessions, the fundamental building blocks of AI, namely integrated-circuit computer components, continued their exponential growth. Even today, Moore’s law is still applicable. In fact, many semiconductor companies use Moore’s law to plan their long-term product offerings. There is a deeply held belief in the semiconductor industry that companies must adhere to Moore’s law to remain competitive. In effect, it has become a self-fulfilling prophecy.

In the strictest sense, Moore’s law is not a physical law of science; rather, it delineates a trend or a general rule. This raises a question: how long will Moore’s law continue to apply? At various points over roughly the last half-century, estimates have predicted that Moore’s law would hold for only another decade, and yet it has kept going. I worked in the semiconductor industry for more than thirty years, including over twenty years as a director of engineering for Honeywell’s Solid State Electronics Center, which developed and manufactured state-of-the-art integrated circuits for computers, missiles, and satellites. As a director of engineering, I was responsible for developing some of the world’s most sophisticated integrated circuits and sensors. Throughout those thirty-plus years, Moore’s law always appeared to be approaching an impenetrable barrier, yet that barrier never materialized; new technologies constantly seemed to provide a stay of execution. We know the trend may change at some point, but no one has made a definitive case for when it will end.

The difficulty in predicting the end has to do with how one interprets Moore’s law. In my judgment, Moore’s law is not about integrated circuits; rather, it is an observation about human creativity as it relates to technology development. In fact, American author and Google director of engineering Ray Kurzweil showed via historical analysis that technological change is exponential. He termed this “The Law of Accelerating Returns” (reference: Ray Kurzweil, The Age of Spiritual Machines, 1999).

As computer hardware and software continued their relentless exponential improvement, the AI field focused its development on “intelligent agents” or, as they are often called, “smart agents.” A smart agent is a system that interacts with its environment and takes calculated actions to achieve its goal. Smart agents can also be combined to form multi-agent systems, with a hierarchical control system bridging lower-level AI systems to higher-level AI systems. This became the game-changer. Using smart agents, AI technology has equaled and exceeded human intelligence in specific areas, such as playing chess. However, the current state of AI technology still falls short of general human intelligence; I expect this to change in the coming decades. We’ll discuss this further in the next post.
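As a concrete illustration of the smart-agent idea, here is a minimal Python sketch; the Thermostat agent and the hierarchical controller are invented examples, not a description of any particular AI system:

```python
# A minimal sketch of a "smart agent": it observes its environment, then chooses
# an action intended to advance its goal. A hierarchical controller composes the
# actions of lower-level agents. Both examples are illustrative inventions.

from abc import ABC, abstractmethod

class Agent(ABC):
    @abstractmethod
    def act(self, percept):
        """Map an observation of the environment to an action."""

class Thermostat(Agent):
    """A trivial goal-driven agent: keep temperature near a setpoint."""
    def __init__(self, setpoint=21.0):
        self.setpoint = setpoint

    def act(self, percept):
        temperature = percept["temperature"]
        if temperature < self.setpoint - 0.5:
            return "heat"
        if temperature > self.setpoint + 0.5:
            return "cool"
        return "idle"

class HierarchicalController(Agent):
    """Bridges lower-level agents to a higher level by delegating percepts."""
    def __init__(self, sub_agents):
        self.sub_agents = sub_agents

    def act(self, percept):
        # Collect each sub-agent's proposed action and return them jointly.
        return {name: agent.act(percept) for name, agent in self.sub_agents.items()}

controller = HierarchicalController({"hvac": Thermostat(setpoint=21.0)})
print(controller.act({"temperature": 18.2}))  # {'hvac': 'heat'}
```

The point of the hierarchy is that each sub-agent can stay narrowly specialized while a higher-level controller composes their actions toward a broader goal.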


What Caused the Second “AI Winter”?

In our last post, we stated, “When the early founders of AI set extremely high expectations, they invited scrutiny. With the passing years, it became obvious that the reality of artificial intelligence fell short of their predictions. In 1974, funding for AI research began to dry up, both in the United States and Britain, which led to a period called the ‘AI Winter,’ and optimism regarding AI turned to skepticism. The first AI Winter lasted until the early 1980s.”

In the early 1980s, researchers in AI began to abandon the monumental task of developing strong AI and began to focus on expert systems. An expert system, in this context, is a computer system that emulates the decision-making ability of a human expert. This meant the computer software allowed the machine to “think” on par with an expert in a specific field, such as chess. Expert systems became a highly successful development path for AI. By the mid-1980s, the funding faucet for AI research was flowing at more than a billion dollars per year.
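To make the idea concrete, here is a minimal rule-based sketch in Python; the diagnostic rules and facts are invented purely for illustration and do not reflect any historical expert system:

```python
# A minimal sketch of the expert-system idea: encode an expert's decision rules
# as condition/conclusion pairs and fire them against known facts.
# The rules and facts below are invented for illustration only.

RULES = [
    ({"engine_cranks": False, "battery_charged": False}, "charge or replace the battery"),
    ({"engine_cranks": True, "fuel_present": False}, "add fuel"),
    ({"engine_cranks": True, "fuel_present": True, "spark_present": False},
     "check the ignition system"),
]

def diagnose(facts):
    """Return the conclusion of every rule whose conditions all match the facts."""
    conclusions = []
    for conditions, conclusion in RULES:
        if all(facts.get(key) == value for key, value in conditions.items()):
            conclusions.append(conclusion)
    return conclusions or ["no rule matched; consult a human expert"]

print(diagnose({"engine_cranks": True, "fuel_present": True, "spark_present": False}))
# ['check the ignition system']
```

Commercial expert systems of the 1980s chained far larger rule bases, often written in Lisp, but the principle of encoding an expert’s if-then knowledge is the same.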

Unfortunately, the funding faucet began to run dry again by 1987, starting with the failure of the Lisp machine market that same year. MIT AI lab programmers Richard Greenblatt and Thomas Knight, who formed the company Lisp Machines Inc., developed the Lisp machine in 1973. The Lisp machine was the first commercial, single-user, high-end microcomputer, which used Lisp programming (a specific high-level programming language) to tackle specific technical applications.

Lisp machines pioneered many commonplace technologies, including laser printing, windowing systems and high-resolution bit-mapped graphics, to name a few. However, the market reception for these machines was dismal, with only about seven thousand units sold by 1988, at about $70,000 per machine. In addition, the company, Lisp Machines Inc., suffered from severe internal politics regarding how to improve its market position. This internal strife caused divisions in the company. To make matters worse, cheaper desktop PCs soon were able to run Lisp programs even faster than Lisp machines. Most companies that produced Lisp machines went out of business by 1990, which led to a second and longer-lasting AI Winter.

If you are getting the impression that being an AI researcher from the 1960s through the late 1990s was akin to riding a roller coaster, your impression is correct. Life for AI researchers during that timeframe was a feast or famine-type existence.

While AI as a field of research experienced funding surges and recessions, the infrastructure that ultimately fuels AI, integrated circuits, and computer software, continued to follow Moore’s law. In the next post, we’ll discuss Moore’s law and its role in ending the second AI Winter.


What Caused the First “AI Winter”?

The real science of artificial intelligence (AI) began with a small group of researchers: John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon. In 1956, these researchers founded the field of artificial intelligence at a conference held at Dartmouth College. Their work and that of their students soon amazed the world, as their programs taught computers to solve algebraic word problems, prove logical theorems, and even speak English.

By the mid-1960s, the Department of Defense began pouring money into AI research. Along with this funding came unprecedented optimism and expectations regarding the capabilities of AI technology. In 1965, Herbert Simon helped fuel that optimism by predicting, “Machines will be capable, within twenty years, of doing any work a man can do.” In 1967, Minsky not only agreed but also added, “Within a generation…the problem of creating ‘artificial intelligence’ will substantially be solved.”

Had the early founders been correct in their predictions, all human toil would have ceased by now, and our civilization would be a compendium of technological wonders. Every person might have a robotic assistant to ease their way through daily chores: cleaning the house, driving them to any destination, and handling anything else that fills daily life with toil. However, as you know, that is not the case.

Obviously, Simon and Minsky had grossly underestimated the level of hardware and software required to achieve AI that replicates the intelligence of a human brain (i.e., strong artificial intelligence). Strong AI is also synonymous with general AI. Unfortunately, underestimating the level of hardware and software required to achieve strong artificial intelligence continues to plague AI research even today.

When the early founders of AI set extremely high expectations, they invited scrutiny. With the passing years, it became obvious that the reality of artificial intelligence fell short of their predictions. In 1974, funding for AI research began to dry up, both in the United States and Britain, which led to a period called the “AI Winter,” and optimism regarding AI turned to skepticism.

The first AI Winter lasted until the early 1980s. In the next post, we’ll discuss the second AI Winter.


Artificial Intelligence Is Changing Our Lives And The Way We Make War

Artificial intelligence (AI) surrounds us. However, much the same way we seldom read billboards as we drive, we seldom recognize AI. Even though we use technologies like our car’s GPS to get directions, we do not recognize that AI is at their core. Our phones use AI to remind us of appointments or to engage us in a game of chess. Yet we seldom, if ever, use the phrase “artificial intelligence.” Instead, we use the term “smart.” This is not the result of some master plan by the technology manufacturers; it is more a statement about the status of the technology.

From the late 1990s through the early part of the twenty-first century, AI research experienced a resurgence. Smart agents found new applications in logistics, data mining, medical diagnosis, and numerous areas throughout the technology industry. Several factors led to this success:

  • Computer hardware computational power was now getting closer to that of a human brain (i.e., in the best case about 10 to 20 percent of a human brain).
  • Engineers placed emphasis on solving specific problems that did not require AI to be as flexible as a human brain.

New ties between AI and other fields working on similar problems were forged. AI was definitely on the upswing. AI itself, however, was not in the spotlight. It lay cloaked within the application, and a new phrase found its way into our vocabulary: the “smart (fill in the blank)”—for example, we say the “smartphone.”

AI is now all around us: in our phones, computers, cars, microwave ovens, and almost any commercial or military system labeled “smart.” According to Nick Bostrom, a University of Oxford philosopher known for his work on superintelligence risks, “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore” (“AI Set to Exceed Human Brainpower,” CNN.com, July 26, 2006). Ray Kurzweil agrees: “Many thousands of AI applications are deeply embedded in the infrastructure of every industry” (Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology, 2005). The above makes two important points:

  1. AI is now part of every aspect of human endeavor, from consumer goods to weapons of war, but the applications are seldom credited to AI.
  2. Both government and commercial applications now broadly underpin AI funding.

AI startups raised $73.4 billion in total funding in 2020 according to data gathered by StockApps.com. Well-established companies like Google are spending tens of billions on AI infrastructure. Google has also spent hundreds of millions on secondary AI business pursuits, such as driverless cars, wearable technology (Google Glass), humanlike robotics, high-altitude Internet broadcasting balloons, contact lenses that monitor glucose in tears, and even an effort to solve death.

In essence, the fundamental trend in both consumer and military AI systems is toward complete autonomy. Today, for example, roughly one in every three US military aircraft is a drone. Today’s drones are under human control, but the next generation of fighter drones will be almost completely autonomous. Driverless cars, now a novelty, will become common. You may find this difficult or even impossible to believe. However, look at today’s AI applications. The US Navy plans to deploy unmanned surface vehicles (USVs) not only to protect navy ships but also, for the first time, to autonomously “swarm” hostile vessels offensively. In my latest book, War At The Speed Of Light, I devoted a chapter to autonomous directed energy weapons. Here is an excerpt:

The reason for building autonomous directed energy weapons is identical to the reasons for building other autonomous weapons. According to Military Review, the professional journal of the US Army, “First, autonomous weapons systems act as a force multiplier. That is, fewer warfighters are needed for a given mission, and the efficacy of each warfighter is greater. Next, advocates credit autonomous weapons systems with expanding the battlefield, allowing combat to reach into areas that were previously inaccessible. Finally, autonomous weapons systems can reduce casualties by removing human warfighters from dangerous missions.”

What is making all this possible? It is the relentless exponential growth in computer performance. According to Moore’s law, computer-processing power doubles every eighteen months. Using Moore’s law and simple mathematics suggests that in ten years, the processing power of our personal computers will be more than a hundred times greater than that of the computers we are currently using. Military and consumer products using top-of-the-line computers running state-of-the-art AI software will likely exceed our desktop computer performance by further factors of ten. In effect, artificial intelligence in top-of-the-line computers running state-of-the-art AI software may be equivalent to human intelligence. However, will it be equivalent to human judgment? I fear not, and autonomous weapons may lead to unintended conflicts, conceivably even World War III.

I recognize that this last paragraph represents dark speculation on my part. Therefore, let me ask you: what do you think?