
Artificial Intelligence in Warfare

Artificial Intelligence (AI) is rapidly reshaping every domain it touches—from commerce and communication to medicine and education. But perhaps no transformation is as consequential or as controversial as its application in modern warfare. AI is revolutionizing how wars are fought, who fights them, and what it means to wield power in the 21st century.

In Genius Weapons (Prometheus, 2018), I explored the trajectory of intelligent weapons systems, tracing how developments in machine learning, robotics, and sensor technologies were converging to create systems that could not only assist but potentially replace human decision-makers in the fog of war. Today, the core themes of that book have become more urgent than ever.

From Decision Support to Autonomous Lethality

AI systems in the military began as decision-support tools—systems designed to analyze vast datasets, identify threats, or optimize logistics. Today, we see a dramatic escalation in their roles. Armed drones now operate with increasing autonomy, capable of identifying and engaging targets without direct human input. Surveillance platforms process terabytes of data in real-time using AI, flagging potential threats faster than any analyst could.

Perhaps the most transformative development is the emergence of autonomous weapons systems—machines that can select and engage targets on their own. As I outlined in Genius Weapons, these systems represent a paradigm shift, not only in capability but in accountability. When a machine makes the decision to kill, who is responsible? The programmer? The commander? The algorithm?

Geopolitical Implications and the AI Arms Race

Nations around the world are investing significant resources in military AI. The United States, China, Russia, and Israel are leading the charge, each with different doctrines and levels of transparency. China’s People’s Liberation Army, for instance, has explicitly embraced “intelligentized warfare,” its doctrinal term for the integration of AI and advanced technologies into all aspects of warfare. The PLA views it as the future of military power, investing in AI for command decision-making, autonomous drones, and cyber operations.

This arms race has created what analysts call an “AI Cold War,” where nations are not just building weapons, but reshaping the entire military ecosystem—intelligence, command and control, logistics, and cyber operations—with AI at its core. The dangers of this race are not hypothetical. As I warned in Genius Weapons, when multiple actors rush to deploy systems whose full capabilities and limitations are not yet understood, the risk of unintended escalation grows exponentially.

The Ethics of Killing Without Conscience

Perhaps the most profound concern is ethical. Human soldiers are bound by rules of engagement and international law, and, crucially, they are expected to apply judgment and moral reasoning in combat. Machines possess no empathy, remorse, or conscience. Can we entrust them with decisions of life and death?

There is a growing international movement to ban or strictly regulate lethal autonomous weapons, spearheaded by the Campaign to Stop Killer Robots and supported by a range of nongovernmental organizations (NGOs), ethicists, and United Nations (UN) bodies. However, as I argued in Genius Weapons, the genie is already out of the bottle. The challenge now is not how to stop these technologies, but how to govern them through transparency, human oversight, and international norms.

Conclusion: The Need for Intelligent Policy

AI in warfare is neither inherently evil nor inherently good—it is a tool. But unlike conventional weapons, it introduces radical new dynamics: speed, scale, unpredictability, and the potential for machines to act beyond human control. The real challenge lies in ensuring that this powerful technology is guided by equally powerful ethics, laws, and human oversight.

As we stand at the edge of a new era in warfare, Genius Weapons remains a call to think critically about how we build, deploy, and restrain the machines we create. The future of war may be intelligent, but whether it will embody humane principles depends entirely on us.

A-life

Should We Consider Strong Artificially Intelligent Machines (SAMs) A New Life-Form?

What is a strong artificially intelligent machine (SAM)? It is a machine whose intelligence equals that of a human being. Although no SAM currently exists, many artificial intelligence researchers project that SAMs will exist by the mid-21st century. This has major implications and raises an important question: Should we consider SAMs a new life-form? Numerous philosophers and AI researchers have addressed this question. Indeed, the concept of artificial life dates back to ancient myths and stories, the best known being Mary Shelley’s novel Frankenstein, published in 1818. It was not until 1986, however, that American computer scientist Christopher Langton formally established the scientific discipline that studies artificial life (i.e., A-life).

No current definition of life considers any A-life simulations to be alive in the traditional sense (i.e., constituting a part of the evolutionary process of any ecosystem). That view of life, however, is beginning to change as artificial intelligence comes closer to emulating a human brain. For example, Hungarian-born American mathematician John von Neumann (1903–1957) asserted, “life is a process which can be abstracted away from any particular medium.” In effect, this suggests that strong AI represents a new life-form, namely A-life.

In the early 1990s, ecologist Thomas S. Ray asserted that his Tierra project, a computer simulation of artificial life, did not simulate life in a computer but synthesized it. This raises the following question: “How do we define A-life?”

The earliest description of A-life that approaches a formal definition comes from a 1987 conference announcement by Christopher Langton, subsequently published in the 1989 proceedings volume Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems:

Artificial life is the study of artificial systems that exhibit behavior characteristics of natural living systems. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on Earth. This includes biological and chemical experiments, computer simulations, and purely theoretical endeavors. Processes occurring on molecular, social, and evolutionary scales are subject to investigation. The ultimate goal is to extract the logical form of living systems.
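Langton’s definition emphasizes behaviors characteristic of living systems — heredity, variation, selection — rather than any particular substrate. A minimal sketch of that idea in Python (purely illustrative; this is not Langton’s or Ray’s actual code, and the bit-string genome and fitness scheme are invented for the example): a population of digital “organisms” replicates imperfectly, and selection steadily shifts the population over generations.

```python
import random

random.seed(42)  # reproducible run

GENOME_LEN = 16  # each "organism" is 16 bits (hypothetical scheme)
POP_SIZE = 30

def fitness(genome):
    # Stand-in "viability": how many 1-bits the organism carries.
    return sum(genome)

def replicate(genome, mutation_rate=0.05):
    # Imperfect copying: each bit may flip during replication.
    return [1 - b if random.random() < mutation_rate else b for b in genome]

# Random initial population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
initial_avg = sum(fitness(g) for g in population) / POP_SIZE

for generation in range(60):
    # Truncation selection: the fitter half replicates into the next generation.
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    offspring = [replicate(random.choice(survivors))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

final_avg = sum(fitness(g) for g in population) / POP_SIZE
print(f"average fitness: {initial_avg:.1f} -> {final_avg:.1f}")
```

Nothing here is alive in the biological sense, but the population exhibits reproduction, mutation, and selection — exactly the logical form of living systems that Langton says A-life seeks to extract.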

There is little doubt that both philosophers and scientists lean toward recognizing A-life as a new life-form. For example, noted science fiction writer and futurist Sir Arthur Charles Clarke (1917–2008) wrote in his book 2010: Odyssey Two, “Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.” Noted cosmologist and physicist Stephen Hawking (1942–2018) darkly speculated during a speech at the Macworld Expo in Boston, “I think computer viruses should count as life. I think it says something about human nature that the only form of life we have created so far is purely destructive. We’ve created life in our own image” (Daily News, August 4, 1994). The main point is that we are likely to consider strong AI a new form of life.

After reading this post, what do you think?


When Will an Artificially Intelligent Machine Display and Feel Human Emotions? Part 2/2

In our last post, we raised the question: “Will an intelligent machine ever be able to completely replicate a human mind?” Let’s now address it.

Experts disagree. Some, such as English mathematical physicist and philosopher of science Roger Penrose, argue there is a limit to what intelligent machines can do. Many others, however, including futurist Ray Kurzweil, argue that it will eventually be technologically feasible to copy the brain directly into an intelligent machine and that such a simulation will be identical to the original. The implication is that the intelligent machine will be a mind and be self-aware.

This raises one big question: “When will intelligent machines become self-aware?”

A generally accepted definition is that a person is conscious if that person is aware of his or her surroundings. If you are self-aware, it means you are self-conscious. In other words you are aware of yourself as an individual or of your own being, actions, and thoughts. To understand this concept, let us start by exploring how the human brain processes consciousness. To the best of our current understanding, no one part of the brain is responsible for consciousness. In fact neuroscience (the scientific study of the nervous system) hypothesizes that consciousness is the result of the interoperation of various parts of the brain called “neural correlates of consciousness” (NCC). This idea suggests that at this time we do not completely understand how the human brain processes consciousness or becomes self-aware.

Is it possible for a machine to be self-conscious? Obviously, since we do not completely understand how the human brain processes consciousness to become self-aware, it is difficult to definitively argue that a machine can become self-conscious or obtain what is termed “artificial consciousness” (AC). This is why AI experts differ on this subject. Some AI experts (proponents) argue it is possible to build a machine with AC that emulates the interoperation (i.e., it works like the human brain) of the NCC. Opponents argue that it is not possible because we do not fully understand the NCC. To my mind they are both correct. It is not possible today to build a machine with a level of AC that emulates the self-consciousness of the human brain. However, I believe that in the future we will understand the human brain’s NCC interoperation and build a machine that emulates it. Nevertheless this topic is hotly debated.

Opponents argue that many physical differences exist between natural, organic systems and artificially constructed (e.g., computer) systems that preclude AC. The most vocal critic who holds this view is American philosopher Ned Block (1942– ), who argues that a system with the same functional states as a human is not necessarily conscious.

The most vocal proponent who argues that AC is plausible is Australian philosopher David Chalmers (1966– ). In his unpublished 1993 manuscript “A Computational Foundation for the Study of Cognition,” Chalmers argues that it is possible for computers to perform the right kinds of computations that would result in a conscious mind. He reasons that computers perform computations that can capture other systems’ abstract causal organization. Mental properties are abstract causal organization. Therefore computers that run the right kind of computations will become conscious.

Source:  The Artificial Intelligence Revolution (2014), Louis A. Del Monte


When Will an Artificially Intelligent Machine Display and Feel Human Emotions? Part 1/2

Affective computing is a relatively new science. It is the science of programming computers to recognize, interpret, process, and simulate human affects. The word “affects” refers to the experience or display of feelings or emotions.

While AI has achieved superhuman status in playing chess and quiz-show games, it does not have the emotional equivalence of a four-year-old child. For example a four-year-old may love to play with toys. The child laughs with delight as the toy performs some function, such as a toy cat meowing when it is squeezed. If you take the toy away from the child, the child may become sad and cry. Computers are unable to achieve any emotional response similar to that of a four-year-old child. Computers do not exhibit joy or sadness.

Some researchers believe this is actually a good thing. The intelligent machine processes and acts on information without coloring it with emotions. When you go to an ATM, you will not have to argue with the ATM regarding whether you can afford to make a withdrawal, and a robotic assistant will not lose its temper if you do not thank it after it performs a service.

Highly meaningful human interactions with intelligent machines, however, will require that machines simulate human affects, such as empathy. In fact some researchers argue that machines should be able to interpret the emotional state of humans and adapt their behavior accordingly, giving appropriate responses for those emotions. For example if you are in a state of panic because your spouse is apparently having a heart attack, when you ask the machine to call for medical assistance, it should understand the urgency. In addition it will be impossible for an intelligent machine to be truly equal to a human brain without the machine possessing human affects. For example how could an artificial human brain write a romance novel without understanding love, hate, and jealousy?

Progress concerning the development of computers with human affects has been slow. In fact this particular computer science originated with Rosalind Picard’s 1995 paper on affective computing (“Affective Computing,” MIT Technical Report #321, abstract, 1995). The single greatest problem involved in developing and programming computers to emulate the emotions of the human brain is that we do not fully understand how emotions are processed in the human brain. We are unable to pinpoint a specific area of the brain and scientifically argue that it is responsible for specific human emotions, which has raised questions. Are human emotions byproducts of human intelligence? Are they the result of distributed functions within the human brain? Are they learned, or are we born with them? There is no universal agreement regarding the answers to these questions. Nonetheless work on studying human affects and developing affective computing is continuing.

There are two major focuses in affective computing.

1. Detecting and recognizing emotional information: How do intelligent machines detect and recognize emotional information? It starts with sensors, which capture data regarding a subject’s physical state or behavior. The information gathered is processed using several affective computing technologies, including speech recognition, natural-language processing, and facial-expression detection. Using sophisticated algorithms, the intelligent machine predicts the subject’s affective state. For example the subject may be predicted to be angry or sad.

2. Developing or simulating emotion in machines: While researchers continue to develop intelligent machines with innate emotional capability, the technology is not to the level where this goal is achievable. Current technology, however, is capable of simulating emotions. For example when you provide information to a computer that is routing your telephone call, it may simulate gratitude and say, “Thank you.” This has proved useful in facilitating satisfying interactivity between humans and machines. The simulation of human emotions, especially in computer-synthesized speech, is improving continually. For example you may have noticed when ordering a prescription by phone that the synthesized computer voice sounds more human as each year passes.
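The two focuses above can be caricatured in a few lines of code. The sketch below is a toy with an invented keyword lexicon and canned responses — real systems use speech recognition, natural-language processing, facial-expression analysis, and trained models, not word matching — but it shows the basic loop: detect an affective state from input, then select a simulated emotional response.

```python
# Toy affective-computing loop. The lexicon and responses are invented
# for illustration; production systems use trained models, not keyword lists.
LEXICON = {
    "anger":   {"angry", "furious", "outraged"},
    "sadness": {"sad", "crying", "heartbroken"},
    "urgency": {"emergency", "hurry", "help", "heart", "attack"},
}

RESPONSES = {
    "anger":   "I understand you're frustrated. Let me try to fix this.",
    "sadness": "I'm sorry to hear that.",
    "urgency": "Calling emergency services now.",
    "neutral": "How can I help you?",
}

def detect_emotion(text):
    # Focus 1 -- detecting and recognizing emotional information
    # (here reduced to crude keyword counting over the input text).
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = {emotion: sum(w in keywords for w in words)
              for emotion, keywords in LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def respond(text):
    # Focus 2 -- simulating emotion: the machine feels nothing; it merely
    # selects a canned response matched to the detected state.
    return RESPONSES[detect_emotion(text)]

print(respond("Help, I think he's having a heart attack!"))
print(respond("What time is it?"))
```

Note that even when the output sounds empathetic, it is an act: the program matches patterns and emits strings, which is precisely the point made below about behavior-based approaches.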

All current technologies to detect, recognize, and simulate human emotions are based on human behavior and not on how the human mind works. The main reason for this approach is that we do not completely understand how the human mind works when it comes to human emotions. This carries an important implication. Current technology can detect, recognize, simulate, and act accordingly based on human behavior, but the machine does not feel any emotion. No matter how convincing the conversation or interaction, it is an act. The machine feels nothing. However, intelligent machines using simulated human affects have found numerous applications in the fields of e-learning, psychological health services, robotics, and digital pets.

It is only natural to ask, “Will an intelligent machine ever feel human affects?” This question raises a broader question: “Will an intelligent machine ever be able to completely replicate a human mind?” We will address this question in part 2.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


“The Artificial Intelligence Revolution” Interview Featured On Blog Talk Radio

My interview on Johnny Tan’s program (From My Mama’s Kitchen®) is featured as one of “Today’s Best” on Blog Talk Radio’s home page. This is a great honor. Below is the player from our interview. It displays a slide show of my picture as well as the book cover while it plays the interview.
