All posts by admin

artificial intelligence

Artificial Intelligence Is Changing Our Lives And The Way We Make War

Artificial intelligence (AI) surrounds us. However, much as we seldom read billboards while we drive, we seldom recognize AI. Even though we use the technology, such as our car's GPS to get directions, we do not recognize that AI is at its core. Our phones use AI to remind us of appointments or to engage us in a game of chess. However, we seldom, if ever, use the phrase “artificial intelligence.” Instead, we use the term “smart.” This is not the result of some master plan by the technology manufacturers. It is more a statement about the status of the technology.

By the late 1990s through the early part of the twenty-first century, AI research began its resurgence. Smart agents found new applications in logistics, data mining, medical diagnosis, and numerous areas throughout the technology industry. Several factors led to this success:

  • Computer hardware was approaching the computational power of a human brain (in the best case, an estimated 10 to 20 percent of a human brain).
  • Engineers placed emphasis on solving specific problems that did not require AI to be as flexible as a human brain.

New ties between AI and other fields working on similar problems were forged. AI was definitely on the upswing. AI itself, however, was not in the spotlight. It lay cloaked within the application, and a new phrase found its way into our vocabulary: the “smart (fill in the blank)”—for example, we say the “smartphone.”

AI is now all around us, in our phones, computers, cars, microwave ovens, and almost any commercial or military system labeled “smart.” According to Nick Bostrom, a University of Oxford philosopher known for his work on superintelligence risks, “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore” (“AI Set to Exceed Human Brainpower,” CNN.com, July 26, 2006). Ray Kurzweil agrees: “Many thousands of AI applications are deeply embedded in the infrastructure of every industry” (Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology [2005]). These statements make two important points:

  1. AI is now part of every aspect of human endeavor, from consumer goods to weapons of war, but the applications are seldom credited to AI.
  2. Both government and commercial applications now broadly underpin AI funding.

AI startups raised $73.4 billion in total funding in 2020 according to data gathered by StockApps.com. Well-established companies like Google are spending tens of billions on AI infrastructure. Google has also spent hundreds of millions on secondary AI business pursuits, such as driverless cars, wearable technology (Google Glass), humanlike robotics, high-altitude Internet broadcasting balloons, contact lenses that monitor glucose in tears, and even an effort to solve death.

In essence, the fundamental trend in both consumer and military AI systems is toward complete autonomy. Today, for example, one in every three US fighter aircraft is a drone. Today’s drones are under human control, but the next generation of fighter drones will be almost completely autonomous. Driverless cars, now a novelty, will become common. You may find this difficult or even impossible to believe. However, look at today’s AI applications. The US Navy plans to deploy unmanned surface vehicles (USVs) not only to protect navy ships but also, for the first time, to autonomously “swarm” hostile vessels offensively. In my latest book, War At The Speed Of Light, I devoted a chapter to autonomous directed energy weapons. Here is an excerpt:

The reasons for building autonomous directed energy weapons are the same as those for other autonomous weapons. According to Military Review, the professional journal of the US Army, “First, autonomous weapons systems act as a force multiplier. That is, fewer warfighters are needed for a given mission, and the efficacy of each warfighter is greater. Next, advocates credit autonomous weapons systems with expanding the battlefield, allowing combat to reach into areas that were previously inaccessible. Finally, autonomous weapons systems can reduce casualties by removing human warfighters from dangerous missions.”

What is making this all possible? It is the relentless exponential growth in computer performance. According to Moore’s law, computer-processing power doubles every eighteen months. Applying Moore’s law with simple mathematics suggests that in ten years, the processing power of our personal computers will be more than a hundred times greater than that of the computers we are currently using. Military and consumer products built on top-of-the-line computers running state-of-the-art AI software will likely exceed our desktop computer performance by further factors of ten. In effect, the artificial intelligence in such systems may be equivalent to human intelligence. However, will it be equivalent to human judgment? I fear not, and autonomous weapons may lead to unintended conflicts, conceivably even World War III.
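The arithmetic behind that hundredfold figure is easy to verify. The following minimal Python sketch assumes only the eighteen-month doubling period stated above (the function name is illustrative, not from any library):

```python
# Back-of-the-envelope check of the Moore's law claim:
# processing power doubles every eighteen months (1.5 years).
def growth_factor(years: float, doubling_period: float = 1.5) -> float:
    """Multiplicative growth in processing power after `years`."""
    return 2 ** (years / doubling_period)

# Ten years of doubling every eighteen months:
print(f"Growth over ten years: ~{growth_factor(10):.0f}x")  # about 100x
```

Ten years contains roughly 6.7 doubling periods, and 2 raised to that power is just over 100, consistent with the “more than a hundred times” claim.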

I recognize this last paragraph represents dark speculation on my part. Therefore, let me ask you: What do you think?

A-life

Should We Consider Strong Artificially Intelligent Machines (SAMs) A New Life-Form?

What is a strong artificially intelligent machine (SAM)? It is a machine whose intelligence equals that of a human being. Although no SAM currently exists, many artificial intelligence researchers project that SAMs will exist by the mid-twenty-first century. This has major implications and raises an important question: Should we consider SAMs a new life-form? Numerous philosophers and AI researchers have addressed this question. Indeed, the concept of artificial life dates back to ancient myths and stories. The best known of these is Mary Shelley’s novel Frankenstein, published in 1818. It was not until 1986, however, that American computer scientist Christopher Langton formally established the scientific discipline that studies artificial life (i.e., A-life).

No current definition of life considers any A-life simulations to be alive in the traditional sense (i.e., constituting a part of the evolutionary process of any ecosystem). That view of life, however, is beginning to change as artificial intelligence comes closer to emulating a human brain. For example, Hungarian-born American mathematician John von Neumann (1903–1957) asserted, “life is a process which can be abstracted away from any particular medium.” In effect, this suggests that strong AI represents a new life-form, namely A-life.

In the early 1990s, ecologist Thomas S. Ray asserted that his Tierra project, a computer simulation of artificial life, did not simulate life in a computer but synthesized it. This raises the following question: “How do we define A-life?”

The earliest description of A-life that comes close to a definition emerged from an official 1987 conference announcement by Christopher Langton, subsequently published in the 1989 book Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems:

Artificial life is the study of artificial systems that exhibit behaviors characteristic of natural living systems. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on Earth. This includes biological and chemical experiments, computer simulations, and purely theoretical endeavors. Processes occurring on molecular, social, and evolutionary scales are subject to investigation. The ultimate goal is to extract the logical form of living systems.

There is little doubt that both philosophers and scientists lean toward recognizing A-life as a new life-form. For example, noted philosopher and science fiction writer Sir Arthur Charles Clarke (1917–2008) wrote in his book 2010: Odyssey Two, “Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.” Noted cosmologist and physicist Stephen Hawking (1942–2018) darkly speculated during a speech at the Macworld Expo in Boston, “I think computer viruses should count as life. I think it says something about human nature that the only form of life we have created so far is purely destructive. We’ve created life in our own image” (Daily News, August 4, 1994). The main point is that we are likely to consider strong AI a new form of life.

After reading this post, what do you think?

artificial intelligence

Artificial Intelligence Threatens Human Extinction

While researching my new book, War At The Speed Of Light, I surfaced some important questions regarding the threat artificial intelligence poses to humanity. For example: Will your grandchildren face extinction? Even worse, will they become robotic slaves to a supercomputer?

Humanity is facing its greatest challenge: artificial intelligence (AI). Recent experiments suggest that even primitive artificially intelligent machines can learn deceit, greed, and self-preservation without being programmed to do so. There is alarming evidence that artificial intelligence, without legislation to police its development, will displace humans as the dominant species by the end of the twenty-first century.

There is no doubt that AI is the new scientific frontier, and it is making its way into many aspects of our lives. Our world includes “smart” machines with varying degrees of AI, including touch-screen computers, smartphones, self-parking cars, smart bombs, heart pacemakers, and brain implants to treat Parkinson’s disease. In essence, AI is changing the cultural landscape, and we are embracing it at an unprecedented rate. Currently, humanity is largely unaware of the potential dangers that strong artificially intelligent machines pose. In this context, the word “strong” signifies AI greater than human intelligence.

Most of humanity perceives only the positive aspects of AI technology. This includes robotic factories, like those of Tesla Motors, which manufactures ecofriendly electric cars, and the da Vinci Surgical System, a robotic platform designed to expand the surgeon’s capabilities and offer a state-of-the-art minimally invasive option for major surgery. These are only two of many examples of how AI is positively affecting our lives.

However, there is a dark side. For example, Gartner Inc., a technology research group, forecasts that robots and drones will replace a third of all workers by 2025. Could AI create an unemployment crisis? As AI permeates the medical field, the average human life span will increase. Eventually, strong artificially intelligent humans (SAHs), with AI brain implants to enhance their intelligence and cybernetic organs, will become immortal. Will this exacerbate the worldwide population crisis already surfaced as a concern by the United Nations?

By 2045, some AI futurists predict that a single strong artificially intelligent machine (SAM) will exceed the cognitive intelligence of the entire human race. How will SAMs view us? Objectively, humanity is an unpredictable species. We engage in wars, develop weapons capable of destroying the world, and maliciously release computer viruses. Will SAMs view us as a threat? Will we maintain control of strong AI, or will we fall victim to our own invention?

I recognize that this post raises more questions than answers. However, I thought it important to share these questions with you. In my new book, War At The Speed Of Light, I devote an entire chapter to autonomous directed energy weapons. There I surface these questions: Will autonomous weapons replace human judgment and result in unintended, devastating conflicts? Will they ignite World War III? I also provide recommendations for avoiding these unintended conflicts. For more insight, browse the book on Amazon.


An Extract From the Intro of War At The Speed Of Light

The pace of warfare is accelerating. In fact, according to the Brookings Institution, a nonprofit public policy organization, “So fast will be this process [command and control decision-making], especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyperwar.”

The term “hyperwar” adequately describes the quickening pace of warfare resulting from the inclusion of AI in the command, control, decision-making, and weapons of war. However, to my mind, it fails to capture the speed of conflict associated with directed energy weapons. To be all-inclusive, I would like to suggest the term “c-war.” In Einstein’s famous mass-energy equivalence equation, E = mc², the letter “c” denotes the speed of light in a vacuum. (For completeness, E denotes energy and m denotes mass.) Surprisingly, the speed of light in the Earth’s atmosphere is almost equal to its velocity in a vacuum. On this basis, I believe c-war more fully captures the new pace of warfare.
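The near equality of light’s speed in air and in a vacuum follows from the refractive index of air. A minimal sketch, assuming the commonly cited sea-level value of roughly 1.0003:

```python
# Speed of light in a vacuum (exact, by definition of the meter).
C_VACUUM = 299_792_458.0  # m/s
# Approximate refractive index of air at sea level (assumed value).
N_AIR = 1.0003

# Light in a medium travels at the vacuum speed divided by the
# medium's refractive index.
c_air = C_VACUUM / N_AIR
print(f"c in air is {c_air / C_VACUUM:.4%} of its vacuum speed")
```

The result is within a few hundredths of a percent of the vacuum value, which is why war at the speed of light in the atmosphere is, for practical purposes, war at c.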

Unfortunately, c-war, war at the speed of light, may remove human judgment from the realm of war altogether, which could have catastrophic ramifications. If you think this is farfetched, consider this Cold War account, where new technology almost plunged the world into nuclear war. This historical account is from RAND Corporation, a nonprofit institution that helps improve policy and decision making through research and analysis:

Lt. Col. Stanislav Petrov settled into the commander’s chair in a secret bunker outside Moscow. His job that night was simple: Monitor the computers that were sifting through satellite data, watching the United States for any sign of a missile launch. It was just after midnight, Sept. 26, 1983.

A siren clanged off the bunker walls. A single word flashed on the screen in front of him.

“Launch.”

Petrov’s computer screen now showed five missiles rocketing toward the Soviet Union. Sirens wailed. Petrov held the phone to the duty officer in one hand, an intercom to the computer room in the other. The technicians there were telling him they could not find the missiles on their radar screens or telescopes.

It didn’t make any sense. Why would the United States start a nuclear war with only five missiles? Petrov raised the phone and said again:

“False alarm.”

For a few terrifying moments, Stanislav Petrov stood at the precipice of nuclear war. By mid-1983, the Soviet Union was convinced that the United States was preparing a nuclear attack. The computer system flashing red in front of him was its insurance policy, an effort to make sure that if the United States struck, the Soviet Union would have time to strike back.

But on that night, it had misread sunlight glinting off cloud tops.

“False alarm.” The duty officer didn’t ask for an explanation. He relayed Petrov’s message up the chain of command.

The world owes Lt. Col. Stanislav Petrov an incalculable debt. His judgment spared the world a nuclear holocaust. Now, ask yourself this simple question: If those systems Petrov monitored were autonomous (i.e., artificially intelligent), would they have initiated World War III? I believe this is a profound question, and that it is possible to make persuasive arguments on either side. However, would you want to leave the fate of the world to an artificially intelligent system?

I have devoted a significant portion of my career to developing AI for military applications. My experience leads me to conclude that today’s technology cannot replicate human judgment. Therefore, I think an AI system replacing Petrov might have initiated World War III. I also believe US military planners are acutely aware of this and are taking steps to defend the US against such a mishap. As we discussed earlier, their actions could disrupt the doctrine of MAD, which prevents nuclear war via the threat of mutually assured destruction. Some term this “the balance of terror.” If any country were able to disrupt the doctrine of MAD, it would tilt the balance of terror.


Breaking News: War At The Speed Of Light

 

New Book by Louis A. Del Monte Grapples with the US Development of Star Trek-like Weapons

 

Directed Energy Weapons and the Future of 21st Century Warfare

 

Minneapolis, Minnesota, March 2, 2021: For many Americans, the idea of laser weapons and force field shields may be more at home in a Star Trek film than on the battlefield, but the US development and deployment of directed energy weapons is rapidly changing that reality in 21st-century warfare. Louis Del Monte’s new book, War at the Speed of Light (Potomac Books, March 2021), describes the revolutionary and ever-increasing role of directed energy weapons, such as laser, microwave, electromagnetic pulse, and cyberspace weapons.

As potential adversaries develop hypersonic missiles, missile swarming tactics, and cyberspace weapons, the US military has turned to directed energy weapons for defensive and offensive purposes. In War at the Speed of Light, however, Del Monte argues that these weapons could completely disrupt the fragile compromises that have kept the world safe since the Cold War.

“Directed energy weapons have the potential to disrupt the doctrine of Mutually Assured Destruction, which has kept the major powers of the world from engaging in a nuclear war,” said Del Monte.

Del Monte analyzes how modern warfare is changing in three fundamental ways: the pace of war is quickening, the rate at which weapons project devastation is reaching the speed of light, and cyberspace is now officially a battlefield. In this acceleration of combat from “Hyperwar” to “C-War,” an acceleration from computer speed to the speed of light, War at the Speed of Light shows how disturbingly close the world is to losing any deterrence to nuclear warfare.

Book Reviews

  • “Louis Del Monte has given us a fascinating, sophisticated, and at times disturbing tour of the next stage of warfare, in which directed energy weapons inflict damage at the speed of light. In terms readily accessible to the general public, he describes how weapons that use energy sources such as lasers, microwaves, and electromagnetic pulses have the potential to profoundly change the balance of power and revolutionize the nature of conflict.” Mitt Regan, McDevitt Professor of Jurisprudence, Co-Director, Center on National Security and the Law, Georgetown University Law Center
  • “Louis Del Monte provides a thought-provoking look at the ever-increasing and revolutionary role of directed energy weapons in warfare… Most importantly, Del Monte surfaces the threat that directed energy weapons pose to disrupting the doctrine of MAD (i.e., mutually assured destruction), which has kept the major powers of the world from engaging in a nuclear war.” COL Christopher M. Korpela, Ph.D.

The book is available at bookstores, from Potomac Books, and on Amazon.

Louis A. Del Monte is available for radio, podcast, and television interviews, as well as writing op-ed pieces for major media outlets. Feel free to contact him directly by email at ldelmonte@delmonteagency.com and phone at 952-261-4532.

To request a book for review, contact Jackson Adams by email at jadams30@unl.edu.

About Louis A. Del Monte

Louis A. Del Monte is an award-winning physicist, inventor, futurist, featured speaker, and CEO of Del Monte and Associates, Inc. He has authored a formidable body of work, including War At The Speed Of Light (2021), Genius Weapons (2018), Nanoweapons (2016), and the Amazon charts #1 bestseller The Artificial Intelligence Revolution (2014). Major media outlets, including Business Insider, The Huffington Post, The Atlantic, American Security Today, Inc., CNBC, and the New York Post, have featured his articles or quoted his views on artificial intelligence and military technology.