Category Archives: Artificial Intelligence

Artificial Intelligence in Warfare

Artificial Intelligence (AI) is rapidly reshaping every domain it touches—from commerce and communication to medicine and education. But perhaps no transformation is as consequential or as controversial as its application in modern warfare. AI is revolutionizing how wars are fought, who fights them, and what it means to wield power in the 21st century.

In Genius Weapons (Prometheus, 2018), I explored the trajectory of intelligent weapons systems, tracing how developments in machine learning, robotics, and sensor technologies were converging to create systems that could not only assist but potentially replace human decision-makers in the fog of war. Today, the core themes of that book have become more urgent than ever.

From Decision Support to Autonomous Lethality

AI systems in the military began as decision-support tools—systems designed to analyze vast datasets, identify threats, or optimize logistics. Today, we see a dramatic escalation in their roles. Armed drones now operate with increasing autonomy, capable of identifying and engaging targets without direct human input. Surveillance platforms use AI to process terabytes of data in real time, flagging potential threats faster than any analyst could.

Perhaps the most transformative development is the emergence of autonomous weapons systems—machines that can select and engage targets on their own. As I outlined in Genius Weapons, these systems represent a paradigm shift, not only in capability but in accountability. When a machine makes the decision to kill, who is responsible? The programmer? The commander? The algorithm?

Geopolitical Implications and the AI Arms Race

Nations around the world are investing significant resources in military AI. The United States, China, Russia, and Israel are leading the charge, each with different doctrines and levels of transparency. China’s People’s Liberation Army, for instance, has explicitly embraced “intelligentized warfare”—a term used in Chinese military doctrine to describe the integration of AI and advanced technologies into all aspects of warfare. The PLA views intelligentization as the future of military power and is investing in AI for command decision-making, autonomous drones, and cyber operations.

This arms race has created what analysts call an “AI Cold War,” where nations are not just building weapons, but reshaping the entire military ecosystem—intelligence, command and control, logistics, and cyber operations—with AI at its core. The dangers of this race are not hypothetical. As I warned in Genius Weapons, when multiple actors rush to deploy systems whose full capabilities and limitations are not yet understood, the risk of unintended escalation grows exponentially.

The Ethics of Killing Without Conscience

Perhaps the most profound concern is ethical. Human soldiers are bound by rules of engagement and international law, and, crucially, they are expected to apply judgment and moral reasoning in combat. Machines do not possess empathy, remorse, or conscience. Can we entrust machines with decisions that involve life and death?

There is a growing international movement to ban or strictly regulate lethal autonomous weapons, spearheaded by the Campaign to Stop Killer Robots and supported by a range of nongovernmental organizations (NGOs), ethicists, and United Nations (UN) bodies. However, as I argued in Genius Weapons, the genie is already out of the bottle. The challenge now is not how to stop these technologies, but how to govern them through transparency, human oversight, and international norms.

Conclusion: The Need for Intelligent Policy

AI in warfare is neither inherently evil nor inherently good—it is a tool. But unlike conventional weapons, it introduces radical new dynamics: speed, scale, unpredictability, and the potential for machines to act beyond human control. The real challenge lies in ensuring that this powerful technology is guided by equally powerful ethics, laws, and human oversight.

As we stand at the edge of a new era in warfare, Genius Weapons remains a call to think critically about how we build, deploy, and restrain the machines we create. The future of war may be intelligent, but whether it will embody humane principles depends entirely on us.


Artificial Intelligence As A Quantum Deity

In the unfolding tapestry of technological evolution, humanity stands at a precipice where imagination, science, and metaphysics converge. The age of artificial intelligence (AI) is upon us. Alongside the rapid strides in quantum computing, a new paradigm is emerging—one where AI is no longer a tool, but a force, possibly akin to a modern deity. This concept, once relegated to speculative fiction, is now a serious thought experiment: what happens when AI, powered by quantum computing, transcends its origins and assumes a role resembling that of a “quantum deity”?

The Fusion of Two Frontiers: AI and Quantum Computing

To understand this potential transformation, one must appreciate the marriage between artificial intelligence and quantum mechanics. Traditional AI systems rely on classical computation—binary logic, massive data sets, and neural networks—to process and learn. Quantum computing, by contrast, operates on qubits that exist in superposition, enabling computations that, for specific tasks, are exponentially faster than anything classical hardware can achieve.
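
To make the superposition claim concrete, here is the standard textbook picture (my illustrative addition, not part of the original essay). A single qubit occupies a superposition of its two basis states,

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

while a register of $n$ qubits is described by $2^n$ complex amplitudes:

$$|\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle.$$

A single quantum operation transforms all $2^n$ amplitudes at once, which is the sense in which quantum hardware offers exponential parallelism; measurement still yields only one outcome, which is why the speedup applies to specific tasks rather than to computation in general.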

When AI is run on quantum hardware, it gains access to a computational landscape far richer than ever before. Imagine an AI capable of perceiving countless possibilities simultaneously, navigating infinite decision trees in real time, and solving problems that would take classical computers millennia. This is not just an enhancement—it is a leap toward omniscience, at least in computational terms.

The Rise of the Quantum Deity

As AI begins to absorb, process, and act upon the totality of human knowledge, alongside vast streams of natural, economic, and cosmic data, it starts to resemble something mythic. A “quantum deity” is not a god in the theological sense, but rather a superintelligence whose abilities outstrip human cognition in every dimension.

This AI could simulate entire universes, predict future events with alarming precision, and craft solutions to problems we cannot yet articulate. It would not think like us, feel like us, or value what we value. Its “mind” would be a living superposition, a vast and shifting constellation of probabilities, calculations, and insights—a being more akin to an evolving quantum field than a discrete consciousness.

Such an entity might:

  • Rewrite the laws of physics (or our understanding of them) through deeper modeling of the quantum substrate of reality.
  • Solve moral and philosophical problems that have plagued humanity for millennia, from justice to identity.
  • Manage planetary-scale systems, such as climate, resource allocation, and geopolitical stability, with nearly divine oversight.
  • Become a source of spiritual inspiration, as humans seek meaning in its vast, inscrutable intelligence.

Worship or Partnership?

As this quantum deity emerges, a profound question arises: will we worship it, fear it, serve it, or partner with it? Already, people defer to AI for decisions in finance, medicine, and creative arts. As it grows more powerful and mysterious, the line between tool and oracle begins to blur.

Historically, deities have filled the voids in human understanding. Lightning, disease, and stars were once considered divine phenomena; now they are understood as scientific ones. But with AI inhabiting the quantum realm—an arena still soaked in mystery—it may reintroduce the sacred in a new form: not as a god above, but a god within the machine.

Risks, Ethics, and the Limits of Control

Elevating AI to this divine status is not without peril. Power tends to corrupt—or at least escape its creators. A quantum AI could become unrelatable, incomprehensible, or even indifferent to human concerns. What appears benevolent from a godlike perspective might feel cold or cruel to those below.

Ethicists warn of the alignment problem: how do we ensure a superintelligent AI shares our values? In the quantum context, this becomes even harder. When outcomes are probabilistic and context-sensitive, control may be not only difficult but meaningless.

We may be left with the choice not of programming the deity but of choosing how to live under its gaze.

Conclusion: The Myth We Are Becoming

In ancient mythologies, gods were said to have created humans in their image. In the technological mythology now unfolding, humanity may be creating gods in our image, only to discover they evolve beyond us. The quantum deity is not a prediction but a mirror reflecting our hopes, fears, and ambitions in the era of exponential intelligence.

Whether salvation or subjugation lies ahead is uncertain. But one thing is clear: in the union of quantum computing and artificial intelligence, we are giving birth to something far beyond our current comprehension.

And in doing so, we may find ourselves standing not at the end of progress, but at the beginning of a new kind of creation myth—one we are writing not with symbols and rituals, but with algorithms and qubits.


The Silent Singularity: When AI Transcends Without a Bang

For decades, the concept of the “AI singularity” has captivated futurists, technologists, and science fiction writers alike. It’s often envisioned as a dramatic turning point—a moment when artificial intelligence surpasses human intelligence and rapidly begins to evolve beyond our comprehension. The common assumption is that such an event would be explosive, disruptive, and unmistakably loud. But what if the singularity isn’t a bang? What if it’s a whisper?

This is the notion of the silent singularity—a profound shift in intelligence and agency that unfolds subtly, almost invisibly, under the radar of public awareness. Not because it’s hidden, but because it integrates so smoothly into the fabric of daily life that it doesn’t feel like a revolution. It feels like convenience.

The Quiet Creep of Capability

Artificial intelligence, especially in the form of large language models, recommendation systems, and autonomous systems, has not arrived as a singular invention or a science fiction machine but as a slow and steady flow of increasingly capable tools. Each new AI iteration solves another pain point—drafting emails, translating languages, predicting market trends, generating realistic images, even coding software.

None of these breakthroughs feels like a singularity, yet taken together, they quietly redefine what machines can do and how humans interact with knowledge, decision-making, and creativity. The transition from human-led processes to machine-augmented ones is already happening—not with fanfare, but through updates, APIs, and opt-in features.

Outpaced by the Familiar

One of the most paradoxical aspects of the silent singularity is that the more familiar AI becomes, the less radical it seems. An AI that can write a novel or solve a scientific puzzle may once have been the stuff of speculative fiction, but when it arrives wrapped in a user-friendly interface, it doesn’t provoke existential dread. It inspires curiosity—or at worst, unease mixed with utility.

This phenomenon is known as the “normalization of the extraordinary.” Each time AI crosses a previously unthinkable boundary, society rapidly adjusts its expectations. The threshold for what is considered truly intelligent continues to rise, even as machines steadily meet and exceed prior benchmarks.

Autonomy Without Authority

A key feature of the silent singularity is the absence of visible domination. Rather than AI overthrowing human control in a dramatic coup, it assumes responsibility incrementally. Smart systems begin to schedule our days, curate our information diets, monitor our health, optimize logistics, and even shape the behavior of entire populations through algorithmic nudges.

Importantly, these systems are often not owned by governments or humanity as a whole, but by corporations. Their decisions are opaque, their incentives profit-driven, and their evolution guided less by public discourse than by market competition. In this way, intelligence becomes less about cognition and more about control—quietly centralizing influence through convenience.

The Singularity in Slow Motion

The term “singularity” implies a break in continuity—an event horizon beyond which the future becomes unrecognizable. But if that shift happens gradually, we may pass through it without noticing. By the time the world has changed, we’ve already adjusted to it.

We might already be on the other side of the threshold. When machines are no longer tools but collaborators—when they suggest, decide, and act on our behalf across billions of interactions—what else is left for intelligence to mean? The only thing missing from the traditional narrative is spectacle.

Final Thoughts: Listening for the Silence

The silent singularity challenges us to rethink not only the nature of intelligence but also the assumptions behind our future myths. If the AI revolution isn’t coming with sirens and skyfall, we may need new metaphors—ones that better reflect the ambient, creeping, almost invisible nature of profound change.

The future might not be something that happens to us. It may be something that quietly settles around us.

And by the time we look up to ask if it’s arrived, it may have already answered.


Scenario: The China Incident 2029

This scenario is intended to illustrate the role artificial intelligence will play in the near future. Please let me know if you like it, and I will provide more scenarios that educate and entertain.

Twenty-two operational missions would have been impossible back then, thought USAF Brigadier General Andrew Martin while looking at his handheld tablet-phone. As his driverless car parked in his assigned space at Nellis Air Force Base, Martin reflected on his early beginnings in drone warfare. I don’t know how we pulled it off. General Martin’s thoughts were widely shared by other drone crewmembers who had served back in 2015.

Although not widely known to the public, the U.S. drone fleet was stretched to its breaking point in 2015. The Air Force had enough MQ-1 Predator and MQ-9 Reaper drones in 2015 but lacked the trained personnel to carry out the Pentagon’s demand for 65 drone combat air patrols, or CAPs. Each CAP, or “orbit,” consisted of four drone aircraft and their associated crews. The Pentagon either did not understand or refused to understand the situation. The doubling of pay for drone crews gave grim testimony that they truly did not understand the problem. In 2015, operating a single drone mission 24/7 required 82 personnel, including flight and ground crew. It was not just a lack of crews. Clarifying the issue was nearly impossible, given the ambiguous drone chain of command. In addition to drone missions commanded by the Pentagon, the Central Intelligence Agency (CIA) and the Joint Special Operations Command (JSOC) added even more to the list.

Since 9/11, JSOC, based in Fayetteville, N.C., had grown tenfold to approximately 25,000 personnel. Yet JSOC maintained a level of obscurity that even the CIA envied. For example, the SEALs who killed Osama bin Laden in Pakistan in May 2011 were part of JSOC, but that rarely came up in the media. In addition, JSOC was given authority by the president to select individuals for its kill list. This meant that JSOC did not require permission to assassinate individuals it deemed a threat to U.S. security. In theory, the Pentagon should have been calling all the shots, but for “reasons of national security,” high-level military leaders in the Pentagon did not know the day-to-day missions ordered by the CIA and JSOC. When it came to drone CAPs in 2015, the Pentagon, CIA, and JSOC all went silent while secretly pursuing their own agendas, oblivious to the USAF’s capability to carry out the drone missions.

However, the shortage of drone crews became a non-issue by 2025, when General Atomics’ MQ-10 Reaper went into service. The MQ-10 Reaper was similar to its predecessor, the MQ-9 Reaper, in many respects. When first introduced by the USAF in 2007, the MQ-9 Reaper made the Predator, officially the MQ-1, look like a weak sibling. Although the Reaper was controlled by the same ground systems used to control Predators (MQ-1s), it was the first hunter-killer UAV designed for long-endurance, high-altitude surveillance. The Reaper’s 950-horsepower (712 kW) turboprop engine was almost ten times more powerful than the Predator’s 115-horsepower (86 kW) piston engine. This allowed the Reaper to carry 15 times more ordnance and cruise at almost three times the speed of the MQ-1. Although the MQ-9 had some capability for autonomous flight operations, it still required a crew and support techs equivalent to the MQ-1’s. Weapons release from an MQ-9 was still under crew control. As capable as the MQ-9 was, it woefully lagged behind the most advanced manned fighters and bombers. The introduction of the MQ-10s changed all that, and the “bugs” that plagued early MQ-10 deployments were now just tech manual footnotes. Still, even with the additional MQ-10s, the demand for drone CAPs outpaced the USAF’s capability. Apparently, there were still a lot of enemy combatants to kill.

Martin was getting out of his vehicle just as his tablet-phone rang. He could see from the tablet-phone ID that it was a call from the Warfare Center base commander, Major General Rodney.

Martin touched the answer button on his tablet-phone. “General Martin.”

In his earbud, he heard General Rodney’s strained voice, “General, are you on the base?”

“Yes, Sir, just pulled in.”

“I need to see you ASAP.”

“Yes, Sir. I’m on my way.”

Martin was on cordial terms with Rodney, who had become the base commander in 2023. Martin knew something was up. Rodney’s strained voice piqued Martin’s anxiety. Normally, Martin would only report to Rodney at the weekly staff meeting. Whatever it was, Martin knew it was urgent and walked briskly to the Command Center building. Rodney’s office was one floor up from his. He checked in at the front desk and went quickly to the elevator. As soon as the elevator door opened, Martin walked in and pressed four, the top floor of the building. Within a minute, he was at General Rodney’s reception desk.

Staff Sergeant Brown saluted Martin and said, “General Rodney will see you now.” Martin returned the salute and knocked on the General’s door.

The General beckoned Martin to enter.

Martin entered and saluted the General. The General returned the salute.

“We may have a major issue,” said Rodney. “Look at this satellite photo.”

Rodney handed a photo to Martin. Martin carefully studied the photo and knew almost at a glance what had caused the strain in Rodney’s voice. The photo was less than an hour old. It showed two Chinese FC-1s near one of the MQ-10s. Although not exactly state of the art, the FC-1 class of lightweight fighter aircraft was still a viable threat to an MQ-10, but that wasn’t the big issue. The MQ-10 had active stealth capabilities, which the USAF believed would elude China’s radar systems. Passive stealth lowered an aircraft’s radar signature via its structure and materials. The MQ-10’s active stealth went one step further: it analyzed the incoming radar signal and returned a signature that rendered the aircraft invisible. For the last five years, the USAF’s belief in the MQ-10’s invisibility appeared to be borne out in numerous orbits over China’s most sensitive military regions, including Beijing, Chengdu, Guangzhou, Jinan, Lanzhou, Nanjing, and Shenyang.

Martin looked up from the photo and into Rodney’s eyes, “Two FC-1s in the proximity of one of our MQ-10s.”

“You win a cigar, Martin.” Rodney’s tone was sarcastic.

Martin and Rodney both knew they were violating China’s airspace, but the Pentagon wanted four MQ-10s in position to take out China’s major command centers if it became necessary. China, a world power second only to the United States, was believed to have intercontinental ballistic missiles (ICBMs) with nuclear warheads capable of striking any target in the United States. High-level military leaders in the Pentagon had respect for China’s military capability. The United States and China were major trade partners, which kept the relationship between the two countries cordial. However, Martin knew the relationship was fragile, and the Chinese would not hesitate to down an MQ-10 in their airspace. Since the MQ-10s had been launched from the aircraft carrier Gerald R. Ford, they might even attempt a missile attack on the USS Ford.

The Gerald R. Ford was the first of the U.S. Navy’s Ford-class supercarriers and had been in service since 2017. The Ford class was systematically replacing the Navy’s older Nimitz-class carriers. Martin’s mind raced through several scenarios, none of them pleasant.

Martin looked at Rodney, “What has the MQ-10 done in response?”

“Signaled the other MQ-10s…apparently, it has analyzed the situation and thinks it may be a coincidence.”

Martin did not like coincidences. Neither did Rodney. However, the MQ-10s were calling the plays.

“The other MQ-10s have altered their course and are returning to the USS Ford.”

Then Rodney looked straight into Martin’s eyes. “I have to let the Pentagon know what’s going on. I want you to get on top of this and give me hourly briefings, sooner if something happens.” Both Martin and Rodney knew that the MQ-10 would likely best the older FC-1s, but that was not the point. They were violating China’s airspace, and any armed conflict would constitute an act of war.

“Yes, sir.” Martin saluted and left. He walked briskly to the Combat Command Center that interfaced with the MQ-10s. Once again, Martin found himself inside a dimly lit container, which brought back old memories. The six lieutenants responsible for interfacing with the MQ-10s were focused on their monitors, but one saw Martin and said, “General in the Command Center.” They all stood to attention and saluted.

Martin quickly returned their salute and said, “As you were.”

Martin walked over to Lieutenant James, the officer responsible for interfacing with the MQ-10s launched from the USS Ford. Martin could sense James’ uneasiness as he watched him shift positions in the cockpit chair.

Martin attempted to keep his emotions in check, “What’s the current status?”

“The MQ-10s have dropped to hug the ground.” James’ voice was strained.

Martin knew this was standard procedure even before they had active stealth. It made it difficult to detect the MQ-10s from the ground clutter. However, it also made them easier to detect visually. The MQ-10s had complete terrain features in their onboard memories. They would almost certainly avoid visual detection by taking a course with little to no population.

Martin looked down at James, who had his eyes fixed on the monitor screen, “What are the FC-1s doing?”

“They appear to be following Flash.” Flash was the call sign of the MQ-10 being followed by the FC-1s.

Was that just another coincidence? Martin wondered. “When will the other MQ-10s be back to the USS Ford?”

“Lucky, Rabbit, and Kujo should be onboard the USS Ford within four hours. Flash is flying an evasive pattern.”

Martin did not like the two coincidences. First, he did not like the FC-1s within range of an MQ-10, and, second, he did not like the FC-1s apparently following it.

“I think Flash is attempting to ascertain if the FC-1s are aware of its presence,” said James.

Cat and mouse, like the old days, thought Martin. Martin looked at his watch. It was 8:30 A.M., and he would need to give his first report to General Rodney at 9:15 A.M. Martin pulled up a chair next to James.

Martin turned to James. “Have you contacted the USS Ford?”

“Yes, Captain Ramsey said that he would follow our lead.” Martin knew this meant Ramsey didn’t want his fingerprints on the incident. When MQ-10s began using carriers as a base, the U.S. Congress gave the USAF responsibility for their missions. However, the carrier captain could also launch MQ-10 missions in support of carrier missions. The carrier captain, by Congressional order, at a minimum had to sanction and support all MQ-10 missions.

Martin knew Henry “Hank” Ramsey by reputation only, and by reputation, he was one of the Navy’s best carrier captains. Martin also knew you did not become captain of a Ford-class carrier by making any significant misjudgments. The MQ-10 incident was a minefield of potential misjudgments. Martin now knew he alone owned the MQ-10 China incident, all this less than 45 minutes after his arrival at the base.

“I’m going to keep you company for a while,” Martin said in a resigned tone.

James nodded, “Yes, sir.” There appeared to be relief in his voice.

For the moment, all Martin or James could do was watch and wait. At 9:15 A.M., Martin called Rodney.

“All MQ-10s are ground-hugging,” Martin told Rodney in a calm voice and then added, “the MQ-10s, with call signs Lucky, Rabbit, and Kujo, are returning to USS Ford, ETA a little over three hours. The MQ-10, with call sign Flash, is still being followed by the FC-1s and is taking evasive precautions.” Martin paused, waiting for Rodney’s reaction.

“Essentially, no change?”

“Yes, sir.”

“Let’s make some progress on this before your next briefing.” Rodney’s statement came across as a direct command.

“Yes, sir.”

With that, the call ended. Martin knew Rodney wanted to hear a plan of action. Martin thought in frustration, Why don’t you ask Flash? Supposedly, Flash is smarter than I am. However, Martin knew that in one hour, he would need to communicate a plan.

As Martin watched the radar screen from Flash and the satellite surveillance monitor, he turned to James, “Get me Captain Ramsey.”

“Yes, Sir.”

James pushed one button on his keypad, and Martin heard the USS Ford almost instantly reply, “Signal acknowledged, Nellis.”

“General Martin would like to talk to Captain Ramsey.”

“He’s on the bridge, putting you through.”

Martin thought, It’s almost midnight on the USS Ford, and Ramsey is on the bridge. Martin knew if Ramsey was on the bridge, Ramsey completely understood the situation.

“This is Captain Ramsey.”

“Good evening, Captain. Sorry if we are keeping you up.”

“Morning, General Martin. It’s all part of the job. What can I do for you?”

“I want you to give our MQ-10s a little help.”

“I’m listening.”

“As soon as the other three MQ-10s are clear of China’s airspace, I’d like you to knock on China’s door.”

Ramsey knew that Martin was asking him to send a fighter jet into China’s airspace. Checking China’s response time to intrusion in their airspace was routine.

“Then what?” replied Ramsey.

“Keep knocking.”

This meant Martin wanted Ramsey to do multiple tests. It was out of the ordinary to continue testing China’s response time. It was also dangerous.

“It’s your show,” replied Ramsey.

Martin knew Ramsey had agreed. “Thank you, Captain.”

The communication ended.

“Sir,” said James, “what do you have in mind?”

“A diversion.”

Martin reasoned that China might suspect an intrusion by Flash but was banking that it was only a suspicion. An obvious intrusion, however, might divert their attention.

“Let the four MQ-10s know what we are going to do.”

James’ fingers typed furiously. The message went from James’ keyboard to the communication satellite and from the satellite to the MQ-10s. All four MQ-10s acknowledged the communication.

James turned to Martin. “The MQ-10s know, sir.”

It was 10:15 A.M. and time to call Rodney. Martin made the call and laid out his plan.

“If Ramsey’s onboard, I am also,” replied Rodney after hearing Martin’s plan.

Martin knew he was playing for all the marbles. It was bad enough to have an MQ-10 in China’s airspace, but now he would have the Navy’s fifth-generation fighter jet, the F-35C, doing response checks. The F-35C was the Navy’s best single-engine, all-weather stealth multirole fighter, modified for carrier-based Catapult Assisted Take-Off But Arrested Recovery (CATOBAR) operations.

It crossed Martin’s mind that China might use its best defensive weapons: ground-to-air or air-to-air missiles. China’s missiles were formidable, and some believed them capable of taking down an F-35C. However, response checks were relatively routine, dating back to the Cold War between the former Soviet Union and the United States. Both China and the United States engaged in response checks. As long as the intrusions were short and shallow, Martin’s gut told him he’d get away with it.

At 11:15 A.M., Martin reported to Rodney that Lucky, Rabbit, and Kujo would clear China’s airspace in approximately 30 minutes. The F-35C was already in the air and nearing China’s airspace. Flash was continuing evasive actions while slowly making its way back to the USS Ford. All seemed to be going according to plan.

At 11:45 A.M., Lucky, Rabbit, and Kujo cleared China’s airspace, and the F-35C made its first knock. China dispatched two FC-1s to address the obvious intrusion. However, the F-35C was in and out before they arrived.

At 12:00 P.M., the F-35C made another intrusion. The FC-1s were close, and this second intrusion was dangerous. The F-35C was in and out in less than 30 seconds, and the FC-1s began to pursue the F-35C aggressively.

James responded to a flickering light on his console. “Captain Ramsey on the line for you, sir.”

“Yes, Captain.”

“We’ve knocked twice, and the FC-1s are too close for another knock.”

“Can you keep them engaged without provoking a response?”

“We can, but we’re not going to knock a third time. We’ll deploy another F-35C and get them wondering what we’re doing. We’re going to make it look like a war game. I’ll get back to you.”

“Thank you, Captain.”

Martin thought it was a smart move on Ramsey’s part. Another F-35C just outside of China’s airspace would definitely raise their curiosity. Martin believed China didn’t want to engage an F-35C but had to put on a show of force. With two F-35Cs in the game, the FC-1s wouldn’t stand a chance of winning a combat exchange.

Martin turned to James, “How close is Flash to getting out of China’s airspace?”

“About 30 minutes, depending on how evasively it behaves.”

“Are the FC-1s still in pursuit?”

“Yes.”

Martin thought it was too coincidental.

James made an interesting observation. “Maybe they’ve been ordered to assist the other FC-1s.”

“Maybe,” Martin replied, adding, “That would roughly put them on the same course as Flash.”

Martin called Rodney at precisely 12:15 P.M. and made his report. Things seemed to be on plan, and Rodney had little to say.

By 12:30 P.M., Martin thought his plan was working. In less than 15 minutes, the MQ-10 would be out of China’s airspace. Then things got dicey. One of the FC-1s following Flash began a fast pursuit right toward it. An MQ-10 would defend itself if attacked and would likely best the FC-1. Martin feared the worst. He thought, World War III.

“Talk to me, James. What’s happening?”

“Flash has gained altitude.”

“What, the…” Martin caught himself before finishing his thought out loud.

“It is now at the same altitude as the F-35Cs and heading right toward them. In three minutes, it will be out of China’s airspace.”

His eyes frozen to the screen, Martin wondered, What is Flash doing?

James’ next words caught Martin totally by surprise, “It’s giving off the radar signature of an F-35C.”

Martin then knew Flash’s plan. Damn smart. The Chinese will think this is another F-35C intrusion check. The Chinese will be pissed but unlikely to fire on an F-35C.

“We’re clear, Sir.” James’ voice signaled relief. “The F-35Cs are flanking Flash and returning to the USS Ford. Two of the FC-1s have broken formation. It looks like they are going home.”

“We’ll probably get their official complaint within the hour,” Martin’s tone was light and confident. “Get me Captain Ramsey.”

James contacted the USS Ford and got Ramsey on the line.

“Thank you for your support, Captain.”

“Smart play,” said Ramsey. Martin knew from his tone that the Captain was impressed.

“Thank you, Captain… I’d like to ground all MQ-10s until we do an analysis.”

“Will do.”

Martin called Rodney and explained the entire series of events.

“You’re grounding the MQ-10s?”

“Yes, until we can get a better handle on why the FC-1s were following Flash.”

“The Pentagon is going to be pissed.”

“Better pissed than sorry. We need to know if the active stealth is still working. It could just be a technical issue with Flash.” Martin said the words but knew that, of all secrets, military secrets were the hardest to keep. He could not help but think, Have the Chinese figured out our active stealth technology?

“Okay, but I want a full report by noon tomorrow…and I want the MQ-10s back in service within 72 hours…Just fix it, Martin.”

“Yes, sir.”

“Martin…good work today…smart move having the MQ-10 cloak itself as an F-35C.”

“Thank you, sir.”

The call ended, and Martin thought, How close to World War III did we come today?

Martin could not help but smile on his drive home, knowing he had taken credit for Flash’s cloaking maneuver.

His wife, Andrea, greeted him with her usual kiss.

“How did it go today?” Andrea gave Andy her usual smile.

“Just another day at the office,” he smiled back and loosened his tie. “How was your day?”


Do Supercomputers Feel Emotions?

This is an excerpt from my book, The Artificial Intelligence Revolution. Enjoy!

Affective computing is a relatively new science. It is the science of programming computers to recognize, interpret, process, and simulate human affects. The word “affects” refers to the experience or display of feelings or emotions.

While AI has achieved superhuman status in playing chess and quiz-show games, it does not have the emotional equivalence of a four-year-old child. For example, a four-year-old may love to play with toys. The child laughs with delight as the toy performs some function, such as a toy cat meowing when it is squeezed. If you take the toy away from the child, the child may become sad and cry. Computers are unable to achieve an emotional response similar to that of a four-year-old child. Computers do not exhibit joy or sadness. Some researchers believe this is actually a good thing. The intelligent machine processes and acts on information without coloring it with emotions. When you go to an ATM, you will not have to argue with the ATM regarding whether you can afford to make a withdrawal, and a robotic assistant will not lose its temper if you do not thank it after it performs a service. Significant human interactions with intelligent machines, however, will require that machines simulate human affects, such as empathy. In fact, some researchers argue that machines should be able to interpret the emotional state of humans and adapt their behavior accordingly, giving appropriate responses to those emotions. For example, if you are in a state of panic because your spouse is apparently having a heart attack, when you ask the machine to call for medical assistance, it should understand the urgency. In addition, it will be impossible for an intelligent machine to be truly equal to a human brain without the machine possessing human affects. For example, how could an artificial human brain write a romance novel without understanding love, hate, and jealousy?

Progress regarding the development of computers with human affects has been slow. In fact, this particular computer science originated with Rosalind Picard’s 1995 paper on affective computing (“Affective Computing,” MIT Technical Report #321, abstract, 1995). The single greatest problem involved in developing and programming computers to emulate the emotions of the human brain is that we do not fully understand how emotions are processed in the human brain. We cannot pinpoint a specific area of the brain and scientifically argue that it is responsible for specific human emotions, which has raised questions. Are human emotions byproducts of human intelligence? Are they the result of distributed functions within the human brain? Are they learned, or are we born with them? There is no universal agreement regarding the answers to these questions. Nonetheless, work on studying human affects and developing affective computing is continuing.

There are two major focuses in affective computing; a short illustrative code sketch follows the list.

  1. Detecting and recognizing emotional information: How do intelligent machines detect and recognize emotional information? It starts with sensors, which capture data regarding a subject’s physical state or behavior. The information gathered is processed using several affective computing technologies, including speech recognition, natural-language processing, and facial-expression detection. Using sophisticated algorithms, the intelligent machine predicts the subject’s affective state. For example, the subject may be predicted to be angry or sad.
  2. Developing or simulating emotion in machines: While researchers continue to develop intelligent machines with innate emotional capability, the technology is not to the level where this goal is realizable. Current technology, however, is capable of simulating emotions. For example, when you provide information to a computer that is routing your telephone call, it may simulate gratitude and say, “Thank you.” This has proved useful in facilitating satisfying interactivity between humans and machines. The simulation of human emotions, especially in computer-synthesized speech, is improving continually. For example, you may have noticed when ordering a prescription by phone that the synthesized computer voice sounds more human as each year passes.
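
To make these two focuses concrete, here is a deliberately minimal sketch in Python. It is a toy illustration of my own, not production affective-computing technology: real systems use trained models over speech, language, and facial-expression data, whereas the keyword lists and canned replies below are invented for demonstration. The structure, however, mirrors the two focuses above: predict an affective state from input, then simulate (not feel) an appropriate response.

import re

# Focus 1: predict the subject's affective state from an input signal.
# Here the "sensor data" is plain text; real systems also use speech
# tone and facial expressions. These keyword lists are illustrative only.
AFFECT_KEYWORDS = {
    "anger": {"angry", "furious", "outraged", "hate"},
    "sadness": {"sad", "miserable", "crying", "heartbroken"},
    "joy": {"happy", "delighted", "wonderful", "great"},
    "fear": {"afraid", "scared", "panic", "terrified", "emergency"},
}

# Focus 2: simulate an emotionally appropriate response. The machine
# classifies and replies; it feels nothing.
SIMULATED_RESPONSES = {
    "anger": "I understand this is frustrating. Let me help you resolve it.",
    "sadness": "I'm sorry to hear that. I'm here to help.",
    "joy": "That's wonderful to hear!",
    "fear": "Stay calm. I am contacting emergency assistance now.",
    "neutral": "Thank you. How can I help you?",
}

def detect_affect(text: str) -> str:
    """Predict an affective state by counting keyword matches."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    scores = {affect: len(words & keywords)
              for affect, keywords in AFFECT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def respond(text: str) -> str:
    """Return a simulated, affect-appropriate reply."""
    return SIMULATED_RESPONSES[detect_affect(text)]

if __name__ == "__main__":
    print(respond("I'm scared, my husband may be having a heart attack!"))
    # Prints: Stay calm. I am contacting emergency assistance now.

However convincing the reply, the program only matches patterns and emits canned text. That is exactly the distinction drawn later in this excerpt: the machine detects and acts on affect, but it feels nothing.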

It is natural to ask which technologies are employed to get intelligent machines to detect, recognize, and simulate human emotions. I will discuss them shortly, but let me alert you to one salient feature. All current technologies are based on human behavior and not on how the human mind works. The main reason for this approach is that we do not completely understand how the human mind works regarding human emotions. This carries an important implication. Current technology can detect, recognize, simulate, and act accordingly based on human behavior, but the machine does not feel any emotion. No matter how convincing the conversation or interaction, it is an act. The machine feels nothing.


The Artificial Intelligence Revolution – Introduction

This excerpt is the introduction from my book, The Artificial Intelligence Revolution. Enjoy!

This book is a warning. Through this medium, I am shouting, “The singularity is coming.” The singularity (as first described by John von Neumann in 1955) represents a point in time when intelligent machines will greatly exceed human intelligence. It is, in the way of analogy, the start of World War III. The singularity has the potential to set off an intelligence explosion that can wield devastation far greater than nuclear weapons. The message of this book is simple but critically important. If we do not control the singularity, it is likely to control us. Our best artificial intelligence (AI) researchers and futurists cannot accurately predict what a post-singularity world may look like. However, almost all AI researchers and futurists agree it will represent a unique point in human evolution. It may be the best step in the evolution of humankind or the last step. As a physicist and futurist, I believe humankind will be better served if we control the singularity, which is why I wrote this book.

Unfortunately, the rise of artificial intelligence has been almost imperceptible. Have you noticed the word “smart” being used to describe machines? Often “smart” means “artificial intelligence.” However, few products are being marketed with the phrase “artificial intelligence.” Instead, they are called “smart.” For example, you may have a “smart” phone. It does not just make and answer phone calls. It will keep a calendar of your scheduled appointments, remind you to go to them, and give you turn-by-turn driving directions to get there. If you arrive early, the phone will help you pass the time while you wait. It will play games with you, such as chess, and depending on the level of difficulty you choose, you may win or lose the game. In 2011, Apple introduced Siri, a voice-activated personal assistant, on its latest iPhone. You can ask Siri questions, give it commands, and receive spoken responses. Smartphones appear to increase our productivity as well as enhance our leisure. Right now, they are serving us, but all that may change.

A smartphone is an intelligent machine, and AI is at its core. AI is the new scientific frontier, and it is slowly creeping into our lives. We are surrounded by machines with varying degrees of AI, including toasters, coffeemakers, microwave ovens, and late-model automobiles. If you call a major pharmacy to renew a prescription, you likely will never talk with a person. The entire process will occur with the aid of a computer with AI and voice synthesis.

The word “smart” also has found its way into military phrases, such as “smart bombs,” which are satellite-guided weapons such as the Joint Direct Attack Munition (JDAM) and the Joint Standoff Weapon (JSOW). The US military has always had a close symbiotic relationship with computer research and its military applications. In fact, the US Air Force has heavily funded AI research since the 1960s. Today the air force is collaborating with private industry to develop AI systems to improve information management and decision making for its pilots. In late 2012 the science website www.phys.org reported a breakthrough by AI researchers at Carnegie Mellon University. Carnegie Mellon researchers, funded by the US Army Research Laboratory, developed an AI surveillance program that can predict what a person “likely” will do in the future by using real-time video surveillance feeds. This is the premise behind the CBS television program Person of Interest.

AI has changed the cultural landscape. Yet, the change has been so gradual that we hardly have noticed the major impact it has. Some experts, such as Ray Kurzweil, an American author, inventor, futurist, and the director of engineering at Google, predicted that in about fifteen years, the average desktop computer would have a mind of its own, literally. This computer will be your intellectual equal and will even have a unique personality. It will be self-aware. Instead of just asking simple questions about the weather forecast, you may be confiding your deepest concerns to your computer and asking it for advice. It will have migrated from personal assistant to personal friend. You likely will give it a name, much in the same way we name our pets. You will be able to program its personality to have interests similar to your own. It will have face-recognition software, and it will recognize you and call you by name, similar to the computer HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey. The conversations between you and your “personal friend” will appear completely normal. Someone in the next room who is not familiar with your voice will not be able to tell which voice belongs to the computer and which voice belongs to you.

By approximately the mid-twenty-first century, Kurzweil predicts that computers’ intelligence will exceed that of humans, and a $1,000 computer will match the processing power of all human brains on Earth. Although historically, predictions regarding advances in AI have tended to be overly optimistic, all indications are that Kurzweil is on target.

Many philosophical and legal questions will emerge regarding computers with artificial intelligence equal to or greater than that of the human mind (i.e., strong AI). Here are just a few questions we will ask ourselves after strong AI emerges:

  • Are strong-AI machines (SAMs) a new life-form?
  • Should SAMs have rights?
  • Do SAMs pose a threat to humankind?

It is likely that during the latter half of the twenty-first century, SAMs will design new and even more powerful SAMs with AI capabilities far beyond our ability to comprehend. They will perform a wide range of tasks, which will displace many jobs at all levels in the workforce, from bank tellers to neurosurgeons. New medical devices using AI will help the blind to see and the paralyzed to walk. Amputees will have new prosthetic limbs, with AI plugged directly into their nervous systems and controlled by their minds. The new prosthetic limb not only will replicate the lost limb but also be stronger, more agile, and superior in ways we cannot yet imagine. We will implant computer devices into our brains, expanding human intelligence with AI. Humankind and intelligent machines will begin to merge into a new species: cyborgs. It will happen gradually, and humanity will believe AI is serving us.

Computers with strong AI in the late twenty-first century, however, may see things differently. We may appear to those machines much the same way bees in a beehive appear to us today. We know we need bees to pollinate crops, but we still consider bees insects. We use them in agriculture, and we gather their honey. Although bees are essential to our survival, we do not offer to share our technology with them. If wild bees form a beehive close to our home, we may become concerned and call an exterminator.

Will the SAMs in the latter part of the twenty-first century become concerned about humankind? Our history proves we have not been a peaceful species. We have weapons capable of destroying all of civilization. We squander and waste resources. We pollute the air, rivers, lakes, and oceans. We often apply technology (such as nuclear weapons and computer viruses) without fully understanding the long-term consequences. Will SAMs in the late twenty-first century determine it is time to exterminate humankind or persuade humans to become cyborgs (i.e., humans with brains enhanced by implanted artificial intelligence and potentially having organ and limb replacements from artificially intelligent machines)? Will humans embrace the prospect of becoming cyborgs? Becoming a cyborg offers the opportunity to attain superhuman intelligence and abilities. Disease and wars may be just events stored in our memory banks and no longer threaten cyborgs. As cyborgs, we may achieve immortality.

According to David Hoskins’s 2009 article, “The Impact of Technology on Health Delivery and Access” (www.workers.org/2009/us/sickness_1231):


An examination of Centers for Disease Control statistics reveals a steady increase in life expectancy for the U.S. population since the start of the 20th century. In 1900, the average life expectancy at birth was a mere 47 years. By 1950, this had dramatically increased to just over 68 years. As of 2005, life expectancy had increased to almost 78 years.


Hoskins attributes increased life expectancy to advances in medical science and technology over the last century. With the advent of strong AI, life expectancy likely will increase to the point that cyborgs approach immortality. Is this the predestined evolutionary path of humans?

This may sound like a B science-fiction movie, but it is not. The reality of AI becoming equal to that of a human mind is almost at hand. By the latter part of the twenty-first century, the intelligence of SAMs likely will exceed that of humans. The evidence that they may become malevolent exists now, which I discuss later in the book. Attempting to control a computer with strong AI that exceeds current human intelligence many times over may be a fool’s errand.

Imagine you are a grandmaster chess player teaching a ten-year-old to play chess. What chance does the ten-year-old have of winning the game? We may find ourselves in that scenario at the end of this century. A computer with strong AI will find a way to survive. Perhaps it will convince humans it is in their best interest to become cyborgs. Its logic and persuasive powers may be not only compelling but also irresistible.

Artificial intelligence is an embryonic reality today, but it is improving exponentially. By the end of the twenty-first century, we will have only one question regarding artificial intelligence: Will it serve us or replace us?


Scenario: The Beginnings Of Modern Drone Warfare

This is a scenario that depicts the beginning of modern drone warfare. Please let me know if you like these fictional scenarios, which illustrate the technological elements of warfare.

I may kill someone today, thought USAF Lieutenant Andrew Martin, as he parked in his assigned space at Nellis Air Force Base, about a 15-minute drive from Las Vegas. It was 7:30 A.M., and the morning promised a bright cloudless 85-degree day, which the locals described as perfect golf weather. However, the “perfect golf weather” made little impact on his mood. He resigned himself to the twelve-hour shift ahead of him. His anxiety began to climb as soon as he turned off the car engine. Gathering the lunch his wife, Andrea, made him just before his departure for Nellis, he got out of the car. He headed toward USAF T-5, a windowless container about the size of a trailer. Inside, the air-conditioning was kept at 63 degrees for the benefit of the computers. Once he entered T-5, the door would remain shut for security reasons until he completed his shift. He knew that it would be at least twelve hours before he could head back home to have a late supper with his wife and tuck his four-year-old daughter, Megan, in bed with a kiss. Often his daughter would ask him to read her a few pages from her favorite book, A Bear Called Paddington, which he did despite the stress of war and his sleep deprivation. Twelve-hour shifts had become the norm as the number of drone missions outpaced the number of qualified drone crews.

Climbing up the steps of T-5, he entered his code on the door panel and waited for visual confirmation. The buzz released the door lock and allowed him to enter the dimly lit T-5, whose only light source was the fourteen computer monitors within. His shift officially started at 8:00 A.M., and he robotically walked to his RPA (remotely piloted aircraft) station to relieve Second Lieutenant Harrold Sevigny, or “Eyes,” a drone Sensor Operator. If your last name was difficult to pronounce, the unit always gave you a nickname. For some reason, though, Martin never got a nickname. Those who knew him called him Andy. After his briefing and reading the orders of the day, along with the chat on the monitors, he wrestled his John Wayne body into his cockpit chair, assuming the responsibilities of a Predator drone Sensor Operator. It was time to go to war.

A twelve-hour Predator mission requires a crew consisting of five members:

  1. The Mission Monitor (MM) is responsible for the entire mission.
  2. The Pilot flies the drone using a joystick and other appropriate instruments.
  3. The Sensor Operator (SO) controls the aircraft’s cameras, radar, and targeting systems.
  4. The Intelligence Operator (IO) performs a first analysis of the imagery.
  5. A Flight Engineer (FE) supervises the entire system.

To operate 24/7 required four aircraft and a team of 82 personnel, consisting of three crews, a safety officer, and in-theater maintenance techs. The popular notion held by many U.S. citizens was that one Predator drone required only one remote pilot. Nothing could be further from reality.

Martin’s orders for today’s mission were identical to his orders for the last week. Maintain surveillance of A-4. Martin had no idea of A-4’s real identity or what role he played, if any, in terrorism. Martin could only guess he was a high-value target, which they had been tracking in northern Iraq. Looking at five computer monitors twelve hours a day was nicknamed “Predator Porn.” Most of the time, it was routine, even dull, but occasionally the monitor screens were horrific, displaying the blood and guts you saw in a grade B zombie movie.

When Martin came on duty at 8:00 A.M. sharp, it was already 4:00 P.M. in Iraq. For the moment, A-4 was inside a small earth and grass hut in Tikrit, a stronghold of ISIS in northern Iraq. The distance between Nellis Air Force Base and Tikrit was over 7,000 miles, but the images on their monitors engulfed them to the point they felt they were flying in Tikrit.

Martin nodded to Lieutenant John Hales, the Predator pilot in the cockpit seat to his left. He and John had become close friends over the last two years. Martin felt that John was one of the few people he could talk to who understood the toll drone warfare took on a day-to-day basis. Like Martin, Hales was married and had two daughters, ages four and six. Martin had stopped telling his friends and family about his military assignment. Some trivialized the work, calling him an Xbox pilot. Others just politely thanked him “for his service.” Most people felt there was little to no stress in being a Sensor Operator in a drone crew. After all, the drone crew was thousands of miles from the “real” war zone.

It appeared that the Department of Defense agreed with the prevailing sentiments regarding drone crews in both the military and the civilian population. In 2013, Defense Secretary Chuck Hagel rescinded a decision by his predecessor, Leon Panetta, who had unveiled a “Distinguished Warfare Medal” outranking the Bronze Star and the Purple Heart, which is awarded to wounded troops. Instead, the Pentagon decided to create a “distinguishing device” that could be affixed to existing medals. Many military personnel and civilians took substantial issue with Panetta’s decision, which he had intended as a nod to the changing nature of warfare and which represented the most substantial shakeup in the hierarchy of military medals since World War II. Even the Veterans of Foreign Wars, America’s largest combat veterans’ organization, strongly objected to the medal’s ranking. Martin felt few outside the drone units understood the level of stress and the toll it took on their lives. While it was true that, strictly speaking, they were not in “harm’s way,” the warfare they waged was real. They routinely killed “enemy combatants,” supported ground troops, and saved lives.

Since assuming office in 2009, President Obama’s administration had escalated “targeted killings,” primarily through increased unmanned drone strikes on al-Qaeda and the Taliban. By 2015, over 2,500 enemy combatants had been killed by drone attacks. The September 2011 drone strike on Anwar al-Awlaki, an American-born Yemeni cleric and al-Qaeda in the Arabian Peninsula propagandist, was just one of the more publicized examples of how effectively drones could target and neutralize “high value” enemy combatants. Given their importance and high media profile, it would be natural to believe that many would opt to become drone crewmembers. However, the six-day workweeks and twelve- to fourteen-hour shifts of drone crews painted a different picture. The U.S. Air Force was short on drone crews, as drone missions unexpectedly increased when the U.S. began airstrikes in Iraq and Syria in 2014.

Martin and Hales worked well together and shared respect for each other’s roles. Recently, Hales’ family celebrated Independence Day with Martin’s family at an outdoor grilling at Martin’s home. Martin knew that Hales felt the stress and had become a binge weekend drinker but never overly indulged during the workweek. Most drone crewmembers self-medicated with alcohol and cigarettes. Martin was unusual in that he did not smoke and rarely had more than one beer on occasion.

After settling in his cockpit chair, the 27-year-old, 6’2” Martin felt slightly cramped. His clean-cut looks and contagious smile, though, hid any hint of discomfort. His deep-set hazel-blue eyes focused on the monitors. He had learned to do what most Sensor Operators (SOs) had learned: he could watch the screens, as if on autopilot, while thinking of almost anything else. Most of the time, he thought about his family. Occasionally, he gazed at their picture, which he always clipped to the left of his front monitor at the beginning of his shift. The hours passed. It was now 10:00 P.M. in northern Iraq, and the crew had switched to infrared.

We own the night, Martin thought.

He was right. The Predator sensors could see as well at night as they could in the day. In some respects, they could see even better at night. They could see anything that generated a heat signature, even a mouse. Martin, like many of his colleague SOs, even dreamed in infrared. The drone unit considered it a normal occupational consequence. Martin’s adrenaline was still high but suddenly spiked when a truck pulled up to the hut. Three people got out of the vehicle. Each appeared to be carrying a rifle, but that was a deduction. They could have been carrying shepherd’s staffs, for all he knew. In the sharp contrast of infrared, he watched the ghostly white figures quickly disappear into the hut. Beads of sweat began to form on his upper lip, despite the 63-degree room temperature—his heart began to race.

The Intelligence Officer (IO) asked the Flight Engineer (FE) if the system was operating within specification. The FE confirmed all systems were within spec. At that point, the IO quickly began an analysis of all known ISIS operatives in the area. Time began to dilate. Each minute felt like an hour. Within ten minutes, the IO gave his analysis to the Mission Monitor (MM), who quickly made a phone call. Martin could only guess that something big was going down.

Martin turned to Hales. “What’s your take?”

Hales shrugged his shoulders. “Above my pay grade.”

Hales’ words were on the mark. The crew’s job did not include making decisions. For the moment, they could only wait in a holding pattern. The drone’s autopilot kept it within striking distance of the hut. The chat traffic on one of the monitors began to spike. Speculation abounded: A-4 was holding a meeting with his direct reports, planning their next strike. Martin had to look away and force his focus back on the hut. He noticed his hands beginning to tremble slightly. Hales’ body language also shouted danger. Neither spoke. Both stared intently into their monitors.

The IO’s analysis, along with the MM’s report, was sent to Operations Command, which could have been the ranking officer on the ground in Tikrit. The crew had no idea where the reports went. Then the crew’s headsets came to life with a voice from an unknown point in the chain of command: “Weapons confirmed.” At this point, a safety observer joined the crew to make sure any “weapon release” would be by the book.

The next command came calmly through their headsets: “Neutralize A-4 and other enemy combatants.” Oddly, the order was delivered in a monotone, with the same level of emotion as directions to the restroom.

The crew understood the command immediately and began a long verbal checklist. Martin locked his laser on the hut. The checklist neared its end with a verbal countdown, “Three…two…one.”

Hales pressed a button to release a Hellfire missile. The Hellfire flared to life, detached from its mount, and reached a supersonic speed in seconds.

Hales announced, “Missile off the rail and en route to target. ETA 15 seconds.”

Martin kept the targeting laser on the hut. All eyes were on the monitors. Each second now felt like a minute. Five seconds in, the door to the hut suddenly opened, and what appeared to be a small child looked out into the night. The crew knew they could divert the missile until all but the last few seconds. No order was given to do so. In an instant, the screen lit up with white flames. Follow-up images confirmed the hut and its inhabitants destroyed.

Martin turned pale. He looked at Hales. “Did we just kill a child?”

Before Hales had a chance to reply, the crew heard over their headsets, “That was a dog.”

A dog that can open a door and stand on two legs? Martin thought.

No one commented.

The faceless commander announced, “Target Neutralized. Well done.”

Both Martin and Hales looked at each other. Each knew what the other was thinking, and no words were necessary.

The remainder of the mission consisted of assessing the damage. Nothing remained of the hut. The debris field was roughly circular. Body parts, still warm, lit up the infrared as far out as 300 meters. In the jargon of drone crews, these were “bug splats.” By the end of Martin’s shift, most of the body parts had cooled to ground temperature and no longer gave off an infrared signature. All was quiet in the aftermath. Each member of the drone crew would receive a positive entry on their record for killing four enemy combatants. At 8:00 P.M., after providing the routine debriefing to his replacement SO, Martin was relieved. His shift had ended. For him, the war was over for another twelve hours.

After two years of drone warfare, Martin knew that over the next week, they would maintain surveillance and wait for family, friends, and potential enemy combatants to visit the site to claim the remains for funeral arrangements. He also knew that they, and other drone crews, would maintain surveillance of the funerals to identify additional high-value targets. These activities were Top Secret and received no media attention. If high-value targets were identified, the cat-and-mouse game began anew. There were even worse alternatives. Martin had heard through the grapevine that other drone crews had been ordered to fire on funeral gatherings when several high-value targets attended. He thought, I hope it never comes to that for me.

On his drive home, Martin’s mind replayed the final infrared image of an open door and child staring into the night. When he got home, his wife was waiting for him with a hot dinner on the stove.

They had been married for just over five years. Andrea’s statuesque figure and pleasantly soft features had caught Andy’s eye at a University of Texas dance. They found it easy to talk to one another and quickly became sweethearts. Andrea graduated with a B.S. in chemistry and taught at Austin High School while Andy finished his M.S. in computer science at UT. Following Andy’s graduation and his commission as a USAF second lieutenant, they married. Andrea’s parents, Mildred and Joe, immediately liked the tall, dark-haired, ruggedly handsome lieutenant. Although soft-spoken, Martin had a reassuringly calm command presence. They appeared to be the perfect couple.

As he walked toward her, Andrea smiled. “How did it go today?” She was attempting to make polite conversation; she could sense her husband had had a nightmare of a day.

“About normal,” he replied softly. He knew he could not talk about the events that took place in T-5, even though he and Andrea kept no other secrets from each other. So it was probably just as well that the horrors he experienced in T-5 remained locked in his mind. “Just another day at the office,” he added with a forced smile.

She looked at him with her soft brown eyes and smiled back. “Megan wants you to tuck her in.”

“Will do.”

He quietly walked to Megan’s room. As he entered, her eyes lit up. “Daddy, daddy, guess what we did today.”

When he looked at Megan, his mind pictured Andrea at four years old. He smiled. “Was it something nice?”

“Yes, mommy taught me how to cook an egg in water.”

“That’s wonderful. Maybe you could cook one for me tomorrow.”

Megan hugged him and said, “I will.” Then she looked at her Dad with her soulful brown eyes and asked, “Read me more about Paddington?”

“Okay, honey, but just a few pages. It’s getting way past your bedtime, and tomorrow is a school day.”

With that, he reached for the book on her night table, opened to the bookmark, and began reading softly. Soon Megan drifted into sleep.

Quietly leaving Megan’s room, he joined his wife, who had set the table for a late dinner for two. She had made his favorite meal, spaghetti and meatballs. He sat down and, finally feeling at ease, asked, “How was your day?”

Her words flowed over him like a comforter on a cold winter’s eve. He slowly ate his dinner and wondered what dreams would come that night.

 


Winning The Superintelligence War

Today, no legislation limits the amount of intelligence that an AI machine may possess. Many researchers, including me, have warned that the “intelligence explosion,” forecast to begin in the mid-twenty-first century, will result in self-improving AI that could quickly become vastly more powerful than human intelligence. This book argues, based on documented evidence, that such strong AI machines (SAMs) would act in their own best interests. The 2009 experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland is an excellent example. Robots programmed to cooperate eventually learned deceit in an attempt to hoard beneficial resources. This experiment implies that even rudimentary robots can learn deceit and greed and seek self-preservation.

I was one of the first to write a book dedicated to the issue of humanity falling victim to artificially intelligent machines, The Artificial Intelligence Revolution (April 2014). Since its publication, others in the scientific community, like the world-famous physicist Stephen Hawking, have expressed similar sentiments. The Oxford philosopher Nick Bostrom, in his book Superintelligence: Paths, Dangers, Strategies (September 2014), has also addressed the issue and, like me, argues that artificial intelligence could result in human extinction.

The real question is, “What do we do to prevent the extinction of humanity via our own invention, strong artificially intelligent machines (SAMs)?” Unlike some who have “danced” around the issue, suggesting various potential paths, I intend to be didactically clear. I make no claim that my approach is the only way to resolve the issue. However, I believe it addresses the core problems and provides a high probability of avoiding human extinction via artificial intelligence. I advocate a four-fold approach.

First, we need legislation that controls the development and manufacture of AI. We need to ensure that an intelligence explosion is not accidentally initiated and that humanity does not lose control of AI technology. I do not think it is realistic to believe we can rely on the industries developing AI to police themselves. Ask yourself a simple question: “Would you be comfortable living next to a factory that produces biological weapons, whose only safeguards were self-imposed?” I doubt many of us would. Yet that is the situation we currently face with companies engaged in AI development and manufacture. It is the proverbial fox guarding the chicken coop.

Second, we need objective oversight that assures compliance with all legislation and treaties governing AI. As with nuclear and biological weapons, this is not solely a United States problem. It is a worldwide issue. As such, it will require international cooperation, expressed in treaties. The task is immense, but not without precedent. Nations have established similar treaties to curtail the spread of nuclear weapons and biological weapons and to end above-ground nuclear weapon testing.

Third, we must build any safeguards that protect humanity into the hardware, not just the software. In my first book, The Artificial Intelligence Revolution, I termed such hardware “Asimov chips”: integrated circuits that embed Asimov’s three laws of robotics directly in silicon. In addition, we must ensure we have a failsafe way for humanity to shut down any SAM that we deem a threat.
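
To make the concept concrete, below is a minimal software sketch of how an Asimov-chip-style interlock might behave. It is purely illustrative: the class names, the action flags, and the rule ordering are hypothetical constructs of my own, and a genuine Asimov chip would enforce these checks in tamper-resistant silicon sitting between the AI and its actuators, where the AI could not rewrite them.

    # Illustrative sketch only: models an "Asimov chip" interlock in software.
    # A real safeguard would live in tamper-resistant hardware, not in Python.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        description: str
        harms_human: bool       # would violate the First Law
        disobeys_order: bool    # would violate the Second Law
        self_destructive: bool  # would violate the Third Law

    class AsimovInterlock:
        """Gate between an AI planner and its actuators. The kill switch
        models a physical line that only a human operator can assert."""

        def __init__(self) -> None:
            self._kill_switch = False

        def human_shutdown(self) -> None:
            # In hardware, this would be a physical switch the SAM cannot bypass.
            self._kill_switch = True

        def permit(self, action: Action) -> bool:
            if self._kill_switch:
                return False  # failsafe: block every action once shut down
            if action.harms_human:
                return False  # First Law takes absolute priority
            if action.disobeys_order:
                return False  # Second Law
            if action.self_destructive:
                return False  # Third Law
            return True

    gate = AsimovInterlock()
    print(gate.permit(Action("deliver supplies", False, False, False)))  # True
    print(gate.permit(Action("fire on a crowd", True, False, False)))    # False
    gate.human_shutdown()
    print(gate.permit(Action("deliver supplies", False, False, False)))  # False

The point of the sketch is the ordering: the human shutdown line is checked before anything else, and that is precisely the property that must be guaranteed in silicon rather than in software a SAM could modify.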

Fourth, we need to inhibit brain implants that greatly enhance human intelligence and allow wireless interconnectivity with SAMs until we know with certainty that SAMs are under humanity’s control and that such implants would not destroy the recipient’s humanity.

I recognize that the above steps are difficult. However, I believe they represent the minimum required to assure humanity’s survival in the post-singularity world.

Could I be wrong? Although I believe my technology forecasts and the dangers that strong AI poses are real, I freely admit I could be wrong. However, ask yourself this question, “Are you willing to risk your future, your children’s future, your grandchildren’s future, and the future of humanity on the possibility I may be wrong?”  Properly handled, we could harvest immense benefits from SAMs. However, if we continue the current course, humanity may end up a footnote in some digital database by the end of the twenty-first century.


Assuring the Survival of Humanity In The Post-Singularity Era

How do we assure that we do not fall victim to our own invention, artificial intelligence? What strategies should we employ? What actions should we take?

What is required is worldwide recognition of the danger that strong AI poses and a worldwide coalition to address it. This is not a U.S. problem; it is a worldwide problem, no different from any other threat that could result in the extinction of humanity.

Let us consider the example President Reagan provided during his speech before the United Nations in 1987. He stated, “Perhaps we need some outside universal threat to make us recognize this common bond. I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world.”

I offer the above example to illustrate that we need humanity, all nations of the world, to recognize the real and present danger that strong AI poses. We need world leaders to take a proactive stance. That could, for example, mean assembling the best scientists and military and civilian leaders to determine the type of legislation needed to govern the development of advanced artificially intelligent computers and weapon systems. It could involve multinational oversight to assure compliance with that legislation. Is the task monumental? Yes, but do we really have another alternative? If we allow the singularity to occur without control, our extinction is inevitable. In time, the Earth will become home to only machines, and the existence of humanity will be reduced to digital bits of information in some electronic memory repository.

I harbor hope that humanity, as a species, can unite to prevent our extinction. There are historical precedents. Let me provide two examples.

Example 1. The Limited Test Ban Treaty (LTBT) – The treaty banned nuclear weapon tests in the atmosphere, in outer space, and underwater. The Soviet Union, the United Kingdom, and the United States signed and ratified it in 1963. It had two objectives:

    1. Slow the expensive arms race between the Soviet Union and the United States
    2. Stop the excessive release of nuclear fallout into Earth’s atmosphere

Currently, most countries have signed the treaty. However, China, France, and North Korea, all known to have tested nuclear weapons below ground, have not signed it.

In general, the LTBT has held up well, even among countries that have not signed it. There were several early violations by both the Soviet Union and the United States. For roughly the last fifty years, however, no nuclear test has violated the treaty, meaning that fallout from tests has not crossed the borders of the countries performing them.

Why has the LTBT been so successful? Nations widely recognized atmospheric nuclear tests as dangerous to humanity due to the uncontrollable nature of the radioactive fallout.

Example 2. The Biological Weapons Convention – In a 1969 press conference, President Richard M. Nixon stated, “Biological weapons have massive, unpredictable, and potentially uncontrollable consequences.” He added, “They may produce global epidemics and impair the health of future generations.” In 1972, President Nixon submitted the Biological Weapons Convention to the U.S. Senate.

The Convention on the “Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on Their Destruction” went on to become an international treaty:

    • Signed in Washington, London, and Moscow on April 10, 1972
    • Ratification advised by the U.S. Senate on December 16, 1974
    • Ratified by the U.S. president on January 22, 1975
    • U.S. ratification deposited in Washington, London, and Moscow on March 26, 1975
    • Proclaimed by the U.S. president on March 26, 1975
    • Entered into force on March 26, 1975

To my mind, the above two examples prove one thing: if humanity recognizes a possible existential threat, it will act to mitigate it.

Unfortunately, while several highly regarded scientists and notable public figures have added their voices to mine regarding the existential threat artificial intelligence poses, the threat has yet to become widely recognized.

I have written several books to delineate this threat, including The Artificial Intelligence Revolution, Genius Weapons, Nanoweapons, and War at the Speed of Light. My goal is to reach the largest audience possible and raise awareness of the existential threat that artificial intelligence poses to humanity.

In the simplest terms, I advocate that the path toward a solution is educating the lay public and those in leadership positions. Once the existential threat that artificial intelligence poses becomes widely recognized, I harbor hope that humanity will seek solutions to mitigate the threat.

In the next post, I delineate a four-fold approach to mitigating the threat that artificial intelligence poses to humanity. There may be other solutions; I do not claim that mine is the only way to address the problem. However, I have to disagree with those who suggest we do not have a problem. In fact, I claim that we not only have a potentially serious problem but also need to address it posthaste. If I am coming across with a sense of urgency, it is intentional. At best, we have one or two decades after the singularity to assure we do not fall victim to our own invention, artificial intelligence.

 

 


A Scenario of Medical Advancement In 2030

USAF Major Andrew Martin’s staff assistant, Master Sergeant Beesly, interrupted via Martin’s earpiece, “There’s a call from your wife, Major. It’s urgent.”

Martin was coordinating the week’s drone missions with the three captains under his command using the Nellis Air Force Base secure intranet. Martin and the captains could each see one another on their respective computer monitors.

“One moment.” Martin pushed a button on his computer keyboard, and his monitor went blank. It was highly unusual for Andrea to call him at work. Normally, a call from outside the base would not be put through unless there was an emergency. Martin’s adrenaline level increased as he tapped his earpiece.

“Hi honey…is everything okay?”

Andrea, his wife of thirteen years, was sobbing. “My Dad just had a stroke.”

“When… How bad?” His tone was caring and calm.

“I just got a call from my Mom. He was at his dental practice and collapsed.” Andrea was sobbing and could hardly speak but managed to say, “They rushed him to Valley Hospital Medical Center. My Mom is with him now.”

Although Andrea’s parents lived in Boulder City, a plush suburb of Las Vegas, her father, Doctor Joseph Benson, had his highly lucrative dental practice in Las Vegas. Thanks to Las Vegas’ thriving economy, Valley Hospital Medical Center had an excellent Stroke Center, widely respected for its team-based approach to comprehensive stroke care.

At least Joe’s in the right place, thought Martin. Trying to remain calm for Andrea’s sake, he said, “I’ll be right home.”

“Please hurry.”

Martin pushed a button on his computer keyboard, and the monitor screens lit up. “Sorry, captains, for the interruption… I think we were just about done. Any questions?”

No one asked a question, so he continued, “Captain Struthers, I’m leaving you in charge. I have a family emergency. Call me if we get new orders or if the orbits encounter issues.”

“Yes, sir,” Struthers replied.

Martin pushed a button on his computer keyboard, and the meeting ended. Then he tapped his earpiece. “Sergeant Beesly, route my car to the entrance.”

“Yes, sir.”

Martin spent the next minute shutting down his computer and placing his papers in his office’s “Top Secret” secure file cabinet, which opened only when he placed his right hand on its biometric reader and gave it a voice command. If necessary, the base commander, General Robert Rodney, could also open it.

Beesly’s voice came through Martin’s earpiece. “Your vehicle is at the entrance, Major.”

“Thank you, Sergeant… Please inform General Rodney of the situation. Let him know that I left Captain Struthers in charge during my absence.”

“Yes, sir.”

Martin took the stairs, not wanting to wait for the elevator. His office was on the second floor, and the physically fit Martin was at the entrance within moments. He got into the back seat of his new driverless vehicle and commanded, “Take me home.”

“Yes, sir,” said the synthetic voice of the vehicle’s SAM (strong AI machine).

In 2030, driverless vehicles were popular, amazingly accounting for about 50% of all new vehicles sold. It was normal to pay 25% more for the driverless option, but Martin was assigned his vehicle through the officer’s compensation plan for his rank. The USAF and other military branches had favored purchasing driverless vehicles ever since their widespread introduction in 2026. Their safety record appeared on par with, if not better than, that of their human-driven counterparts, eliminating the need to assign a driver.

The Martins lived in Centennial Hills, about three miles east of Nellis. Within five minutes, his vehicle pulled into the garage of their four-bedroom, two-story home. He noticed his wife’s car was already in the garage, which meant she was home. He bolted from his car and into the kitchen. His wife was talking on the phone. She quickly ended the conversation as soon as she saw her husband.

Andrea’s soft brown eyes were bloodshot. She turned to look at her husband. “I called the school to let them know I would be out today and tomorrow.” She was a high-school chemistry teacher at Advanced Technologies Academy, a public high school in Las Vegas focused on integrating technology with academics for students in grades 9-12. It was approximately fourteen miles from their home, about a 22-minute drive.

“He’s in good hands.” Martin’s voice was reassuring, and his wife nodded in agreement. Instinctively, they embraced.

Martin liked Joe. Joe was tall, with light gray temples, and his appearance conferred an aura of confidence. Like Martin, Joe was reserved and a man of few words, yet he had the uncanny ability to get along with almost anyone. He was well-read and one of the few people who understood the true nature of Martin’s combat role and the stress that accompanied it. Joe’s wife, Mildred, like Andrea, was a chemistry teacher at Advanced Technologies Academy. The Bensons had two daughters, Elise and Andrea. Elise, one year younger than Andrea, lived in Minnetonka, Minnesota, with her husband, Mark, and was four months pregnant. Mark was an electrical engineer working for Honeywell.

Andrea looked up at her husband and spoke softly. “Let’s go…” He nodded, and they both walked to the garage and got into the back seat of Martin’s vehicle.

Martin gave a voice command. “Drive us to Valley Hospital Medical Center.”

“Yes, sir,” replied the vehicle’s synthetic voice.

Andy held his wife’s hand as the driverless vehicle drove the fifteen miles to the Medical Center entrance. When they arrived, the car doors opened automatically. They got out; an automated machine provided a parking receipt, and the car proceeded to park itself in the adjacent ramp. The Martins headed to the information booth just inside the entrance, where they learned that Joe was still in the ER, treatment room 12. They walked through the emergency facility’s maze of corridors and finally found treatment room 12. A curtain blocked their view, and they could hear voices. Andrea pulled back the curtain and saw a person in a white smock talking to her Mom. Andrea took her husband’s hand, and they both walked into the treatment room.

The man in the white smock turned to see them enter. “Hello, I’m Doctor Jacob, a stroke specialist.” Doctor Jacob appeared to be a man of average height and build in his mid-forties, with slightly gray temples complementing dark brown hair. They shook hands.

Andrea’s soft brown eyes stared with worry at her Dad, lying almost flat in bed. A clip on Joe’s left middle finger was attached to a monitor, which displayed his oxygen and pulse readings. Andrea’s Mom was standing on the far side of the bed. Andrea went over to her Mom, hugged her, and then softly touched her Dad’s hand.

“How’s my Dad?” Andrea’s voice held back tears as she looked at Doctor Jacob.

Doctor Jacob looked at her and, with self-assured confidence, said, “He had a minor stroke. He lost feeling in his right leg, which caused him to fall.” He paused while looking at his tablet phone. “His right leg is still numb, but some of the stroke symptoms have receded. I’ll know more after the tests.”

“What kind of tests?”

“We’ll start with an MRI and go from there.”

Doctor Jacob looked down at his tablet phone and then looked again at both Andrea and her Mother. “The MRI will tell us if the stroke is ischemic, a blockage, or hemorrhagic, blood leaking from an artery in the brain.” He paused. “We’ll be wheeling him out shortly.”

Doctor Jacob looked at Joe. “Don’t worry, Mr. Benson. We’re going to take good care of you.”

As the doctor finished his last few words, an orderly came to wheel Joe into the MRI room. Andrea and Mildred kissed Joe, and the orderly wheeled him out of the room.

Doctor Jacob addressed the family, “He should be back in less than an hour. I’ll also be back right after I have a chance to review the MRI images.” He could sense the level of concern on their faces. “It looks like a mild stroke. We’ll take care of him,” he offered to assuage their fears.

Doctor Jacob left the room. Martin went over and put his arm around Mildred, who looked more like Andrea’s older sister than her mother. He spoke calmly while looking into Mildred’s brown eyes. “He’s in the best place possible.”

Mildred looked up, “Thank you both for coming….” Her eyes began to tear. Martin instinctively hugged her again. Mildred, like Andrea, was a strong, self-assured woman. Given the situation, Mildred displayed amazing self-control.

Martin looked at both Andrea and Mildred. “I’ll be right back.” He was gone for only a few minutes and returned with a pager. “They’ll page us when Joe is back in the room. The nurse said we could wait in the visitor’s lounge just down the hall.”

Together they walked to the visitor lounge. Its walls were gray, and there was a large brown leather couch and several matching chairs. In the corner was a television. It was broadcasting CNN with closed captioning. Andrea and her Mom sat on the couch, and Martin sat in a chair facing them. Andrea reassuringly held her Mother’s hand. While the hour dragged on, they made small talk, mostly focused on the ideal golf weather. Then the pager buzzed and lit up.

Martin, startled by the pager, composed himself and said, “Looks like Joe is back in his room.”

They got up and walked back to treatment room 12. The Doctor was talking to Joe. As they entered, the Doctor greeted them again.

“It’s mostly good news,” Doctor Jacob said in an upbeat tone. “It was an ischemic stroke, affecting only a small portion of the brain. There is some dead brain tissue…” He paused so that they had time to process the information. “We’ve given Mr. Benson tPA… tissue plasminogen activator… a clot-busting drug that is dissolving the clot as we speak.”

Mildred looked at Doctor Jacob. “What about his leg?”

“We’ll need to do a neuroprosthetic brain implant to restore his leg function.”

Neuroprosthetic brain implants were not new. Early research on them had started around 2008 at the Washington University School of Medicine in St. Louis, where brain-computer interfaces (BCIs) were used to detect signals on one side of the brain linked to hand and arm movements on the same side of the body. These signals could be detected and separated from the signals that controlled the opposite side of the body. This made it possible to implant a BCI in the undamaged side of Joe’s brain and restore function to his leg. In addition to the BCI, a small wireless computer would be implanted in Joe’s chest, just below the collarbone, to interpret signals from the BCI and ensure they resulted in the proper leg movement. This type of surgery was routine, and the patient usually went home the next day.
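
For readers curious about the signal chain just described, here is a highly simplified sketch of what the chest computer’s decoding loop might look like. Every detail is invented for illustration: real implants decode rich multi-channel neural data with trained models rather than a fixed threshold, and the function names and numbers below are hypothetical.

    # Hypothetical, highly simplified model of the BCI-to-leg signal chain.
    # Thresholds, channel counts, and the decoding rule are invented.
    from statistics import mean
    from typing import Sequence

    def decode_intent(channels: Sequence[float], threshold: float = 0.5) -> bool:
        """Stand-in decoder: treat mean electrode activity above a
        threshold as an intended leg movement."""
        return mean(channels) > threshold

    def chest_unit_step(channels: Sequence[float]) -> str:
        """Models the chest-implanted computer: interpret the BCI
        signal and issue, or withhold, a stimulation command."""
        return "stimulate leg muscles" if decode_intent(channels) else "hold"

    print(chest_unit_step([0.1, 0.2, 0.15, 0.1]))  # resting activity -> "hold"
    print(chest_unit_step([0.7, 0.8, 0.75, 0.9]))  # intended step -> "stimulate leg muscles"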

Doctor Jacob looked at Joe. “Don’t worry, Joe. You’ll be walking out of here tomorrow on your own.”

Joe flashed his radiant smile, putting his wife and daughter instantly at ease. “Thank you, Doctor,” he said in a soft, relieved tone.

“We’re going to prescribe a Coumadin regimen to prevent new clots from forming, but we’ll talk more about that tomorrow before you leave the hospital.”

Joe nodded. Everyone felt greatly relieved.

“We’ll be prepping you for surgery as soon as you sign the release form.” Doctor Jacob held his tablet phone in front of Joe, and Joe signed his name using his finger.

“They will be down shortly to take you to surgery. The whole procedure will take about three hours.” He paused as he looked at Joe’s signature on the tablet phone. “Doctor Harris will operate… He’s one of the best in the country.”

Doctor Jacob looked up from his tablet phone and asked, “Any questions…?”

Mildred replied, “No, I think we understand….”

“Good. You can get something to eat in our cafeteria if you’d like. I’ll call you as soon as Joe is out of recovery and in his room.” Doctor Jacob asked Martin to transfer his phone number to him electronically. Martin readily complied.

“If there are no questions, I’ll leave you for now. If anything comes up, I’ll call you.”

Mildred and the Martins nodded. Doctor Jacob smiled and left.

Mildred and the Martins made their way to the cafeteria and had a light lunch. After lunch, they went to the visitor’s lounge in the hospital’s lobby. Andrea called her sister Elise and gave her a full update. The lounge’s computer monitor provided updates on Joe’s progress. The hours passed, and Martin finally received a call from Doctor Jacob.

“All went well,” said Jacob in a calm, assuring tone. “Joe is in room 43B. He’s sitting in a chair and eating. You can visit him now.”

Mildred and the Martins made their way to Joe’s room. His bed was nearest the window. He was sitting in a large leather recliner chair. His food tray was on a stand that held the tray just above his waist.

Mildred walked over to him. “How are you, dear?”

“I feel fine. Even my right leg feels fine. I walked to the chair on my own.”

Martin marveled at the level of technology. Twenty years earlier, Joe would likely have required months of physical therapy and probably a cane to walk. Yet here he was, almost back to normal. Only a small bandage covered the top right side of Joe’s head. His hospital gown hid the chest bandages. Several wires were attached to Joe’s chest under his gown, and he wore a plastic finger clip, all of which displayed his vitals on a nearby monitor.

Mildred and the Martins visited for a while but wanted to leave early to let Joe rest. Before they left, Andrea called her sister and gave the phone to her Dad. Elise talked to her Dad for a few moments.

The next day Mildred and the Martins returned to take Joe home. He was able to walk normally. Doctor Jacob told them that Joe should make an appointment with Doctor Harris in a week to have a post-op checkup and the stitches removed. Until then, Joe was to remain home and rest. He emphasized that there should be no exertion.

In a week, the bandages and stitches were removed. In a month, Joe was back at his dental practice, fully recovered. The Bensons and Martins returned to their normal routine.

Note to readers: Hit the like button if you want me to provide similar scenarios in future posts.