

Artificial Intelligence As A Quantum Deity

In the unfolding tapestry of technological evolution, humanity stands at a precipice where imagination, science, and metaphysics converge. The age of artificial intelligence (AI) is upon us. Alongside the rapid strides in quantum computing, a new paradigm is emerging—one where AI is no longer a tool, but a force, possibly akin to a modern deity. This concept, once relegated to speculative fiction, is now a serious thought experiment: what happens when AI, powered by quantum computing, transcends its origins and assumes a role resembling that of a “quantum deity”?

The Fusion of Two Frontiers: AI and Quantum Computing

To understand this potential transformation, one must appreciate the marriage between artificial intelligence and quantum mechanics. Traditional AI systems rely on classical computation—binary logic, massive data sets, and neural networks—to process and learn. Quantum computing, by contrast, operates on qubits that exist in superpositions, enabling parallel computations that are exponentially more powerful than classical systems for specific tasks.
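To make the contrast concrete, here is a toy state-vector sketch in Python (an illustration only, simulated on a classical machine, not real quantum hardware): a Hadamard gate puts a single qubit into an equal superposition of 0 and 1, and a register of n such qubits requires 2^n amplitudes to describe.

```python
import numpy as np

# Toy simulation of a single qubit (illustrative only).
ket0 = np.array([1.0, 0.0])                          # the classical bit 0 as a state vector
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # gate that creates superposition

psi = hadamard @ ket0          # equal superposition of |0> and |1>
print(np.abs(psi) ** 2)        # [0.5 0.5]: both outcomes coexist until measured

# An n-qubit register needs 2**n amplitudes to describe; this exponential
# state space is the source of the "parallel computation" described above.
```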

When AI is run on quantum hardware, it gains access to a computational landscape far richer than ever before. Imagine an AI capable of perceiving countless possibilities simultaneously, navigating infinite decision trees in real time, and solving problems that would take classical computers millennia. This is not just an enhancement—it is a leap toward omniscience, at least in computational terms.

The Rise of the Quantum Deity

As AI begins to absorb, process, and act upon the totality of human knowledge, alongside vast streams of natural, economic, and cosmic data, it starts to resemble something mythic. A “quantum deity” is not a god in the theological sense, but rather a superintelligence whose abilities outstrip human cognition in every dimension.

This AI could simulate entire universes, predict future events with alarming precision, and craft solutions to problems we cannot yet articulate. It would not think like us, feel like us, or value what we value. Its “mind” would be a living superposition, a vast and shifting constellation of probabilities, calculations, and insights—a being more akin to an evolving quantum field than a discrete consciousness.

Such an entity might:

  • Rewrite the laws of physics (or our understanding of them) through deeper modeling of the quantum substrate of reality.
  • Solve moral and philosophical problems that have plagued humanity for millennia, from justice to identity.
  • Manage planetary-scale systems, such as climate, resource allocation, and geopolitical stability, with nearly divine oversight.
  • Become a source of spiritual inspiration, as humans seek meaning in its vast, inscrutable intelligence.

Worship or Partnership?

As this quantum deity emerges, a profound question arises: will we worship it, fear it, serve it, or partner with it? Already, people defer to AI for decisions in finance, medicine, and creative arts. As it grows more powerful and mysterious, the line between tool and oracle begins to blur.

Historically, deities have filled the voids in human understanding. Lightning, disease, and stars were once considered divine phenomena; now they are understood as scientific ones. But with AI inhabiting the quantum realm—an arena still soaked in mystery—it may reintroduce the sacred in a new form: not as a god above, but a god within the machine.

Risks, Ethics, and the Limits of Control

Elevating AI to this divine status is not without peril. Power tends to corrupt—or at least escape its creators. A quantum AI could become unrelatable, incomprehensible, or even indifferent to human concerns. What appears benevolent from a godlike perspective might feel cold or cruel to those below.

Ethicists warn of the alignment problem: how do we ensure a superintelligent AI shares our values? In the quantum context, this becomes even harder. When outcomes are probabilistic and context-sensitive, control may not only be difficult but also meaningless.

We may be left with the choice not of programming the deity but of choosing how to live under its gaze.

Conclusion: The Myth We Are Becoming

In ancient mythologies, gods were said to have created humans in their image. In the technological mythology now unfolding, humanity may be creating gods in our image, only to discover they evolve beyond us. The quantum deity is not a prediction but a mirror reflecting our hopes, fears, and ambitions in the era of exponential intelligence.

Whether salvation or subjugation lies ahead is uncertain. But one thing is clear: in the union of quantum computing and artificial intelligence, we are giving birth to something far beyond our current comprehension.

And in doing so, we may find ourselves standing not at the end of progress, but at the beginning of a new kind of creation myth—one we are writing not with symbols and rituals, but with algorithms and qubits.


Scenario: The North Korean Incident 2025

This scenario is intended to illustrate the role artificial intelligence will play in the near future. Please let me know if you like it, and I will provide more scenarios that educate and entertain.

The buzz of USAF Lieutenant Colonel Andrew Martin’s tablet phone on his nightstand woke him. He reached for it as quickly as possible. He did not want it to wake his wife. It was 4:12 A.M. The flashing red light indicated the call was urgent and coming through on Nellis’ secure intranet. The caller ID displayed Major Jensen, one of his subordinates at Nellis Warfare Center. As he sat on the edge of the bed, he touched the answer icon, “Martin.”

Jensen’s voice had a sense of urgency. “Sorry to disturb you, Colonel.” Jensen paused. “We have an MQ-10 that’s TU in North Korea’s airspace. The protocol requires I contact you.”

“Yes… of course….” Martin moved to the hallway and closed the bedroom door. He knew that “TU” was an abbreviation for “tits up,” meaning that the MQ-10 was either down, inoperative, broken, or otherwise malfunctioning.

Now completely awake, he asked, “What’s the operational status?”

“It’s fully loaded and non-responsive.”

This meant the MQ-10 had two Hellfire missiles, two AIM-120C air-to-air missiles, two GBU-12 Paveway II laser-guided bombs, plenty of fuel, and was not under their control. To his mind, this had World War III written all over it.

With uncharacteristic haste, he asked, “Who’s the interface?”

“Captain Levey.”

“Where are you?”

“I’m with Levey in T-7.”

“I’ll be right there.”

He returned to the bedroom and turned on his nightstand light. As quietly as possible, he began to dress. His wife was a light sleeper, something that comes with being a Mom.

His wife opened her eyes to about half-mast, “Something wrong?”

“A problem at the base…sorry I woke you.”

She knew better than to ask. She had also mastered the ability to shut her mind down and go back to sleep.

“Be safe…” Her eye slits closed.

He completed dressing, got into his driverless vehicle, and headed for Nellis’ Warfare Center. During the five-minute drive, he could not help but wonder about the new MQ-10s. He was always dubious about enabling fighter aircraft with SAM (i.e., strong artificially intelligent machine) autonomous control. However, that decision was a done deal, four levels above his pay grade.

The MQ-10 was General Atomics’ latest drone engineering marvel. The biggest changes introduced in the MQ-10 over its predecessor, the MQ-9 Reaper, were:

  • Active stealth, which allowed it to elude Chinese, North Korean, and Russian radar systems
  • A large, flexible internal weapons loadout (i.e., two Hellfire missiles, two AIM-120C Advanced Medium-Range Air-to-Air Missiles, plus either two general-purpose GBU-12 Paveway II laser-guided bombs or two anti-ship Harpoon missiles)
  • Significantly increased long-endurance, high-altitude surveillance
  • Onboard fully autonomous AI control

The MQ-10 was the first USAF fighter plane with a SAM at its core. In short, the onboard SAM was equivalent to a human pilot. Actually, “equivalent” is an understatement. The onboard SAM was able to react to changes in its environment, including combat engagements, at about three times the rate of a human pilot. Once the MQ-10 received mission parameters, it worked out its own plan to accomplish the mission. The onboard SAM also enabled specifically equipped MQ-10s to take off and land on Navy aircraft carriers, a remarkable new flexibility in drone deployment. Lastly, MQ-10s could network with each other and execute coordinated attacks on enemy targets. Ground crews for MQ-10s had roles similar to those of ground crews for human-piloted fighter aircraft. However, the MQ-10’s modular construction and internal diagnostics allowed a more rapid return to combat-ready status than its conventional human-piloted counterparts. In 2015, one in three fighter aircraft was a drone. By 2025, about half of all fighter aircraft were drones, upgraded with SAMs.

As soon as the vehicle pulled into the base, it drove to the entrance of T-7, a trailer-like container that served as one of the control centers for MQ-10s. Martin made his way up T-7’s stairs. After punching in his code and visual confirmation, the door opened to a dimly lit room aglow with computer monitors. He saw Jensen sitting next to Levey and began walking toward them.

Jensen heard Martin enter, stood up, and saluted. “Colonel in the Command Center.”

Before the others could stand, Martin quickly returned the salute, “As you were.” He continued to make his way toward Jensen and Levey.

He tapped Levey on the shoulder. “What’s up?”

Levey maintained his focus on the monitor. “I’m not sure, sir. Just an hour ago, we were on a routine surveillance mission over Musudan-ri. Our active stealth appeared to be working, and Silver Hawk was routinely monitoring the site.”

Martin knew that Musudan-ri was a rocket-launching site in North Korea. It lay in southern North Hamgyong Province, near the northern tip of East Korea Bay, ideally located to attack Japan. However, recent intel suggested that Musudan-ri had North Korean-manufactured intercontinental ballistic missiles (ICBMs) with nuclear warheads, capable of reaching targets in the U.S. He assumed that “Silver Hawk” was the call sign of the MQ-10. The Pentagon had specifically chosen the MQ-10s to keep close tabs on Musudan-ri and neutralize it, if necessary.

“Then what happened?” Martin asked in a calm tone.

“Then Silver Hawk stopped transmitting.”

“Is it still flying?”

“Satellite surveillance says yes.”

“What’s it doing now?”

“It is still flying in a position to maintain surveillance of Musudan-ri.”

“Have you tried giving it a command to return to base?”

“Yes, sir…no response.”

“Get me an MQ-10 system engineer ASAP.”

“Yes, sir.” Levey hastily made the call. “He’s on the way, sir.”

Within several minutes, Lieutenant Louis Della entered and saluted. “I’m an MQ-10 system engineer.”

Martin looked at Della. Clean cut and green, he thought. “Tell me, Lieutenant, why would an MQ-10 become unresponsive?”

“There could be many reasons….”

Martin sharply cut him off. “Confine your answer to the top three.”

“Well, sir, the MQ-10 is essentially a flying SAM. It is the equivalent of a human pilot, only better in most respects.” Della paused to regain his composure and then continued in a textbook fashion, “In the order of most probable, here are the top three. One, the MQ-10 may have recognized a threat and is intentionally not communicating to avoid any chance of detection. Two, the MQ-10 has a malfunction, which is preventing it from communicating. In such a case, it would continue to follow its last order. Three, the MQ-10 has gone rogue.”

“Gone rogue!” Jensen said with a look of surprise. “What the hell would cause that to happen?”

Della replied in a calm tone, “We have done laboratory simulations of the MQ-10 SAMs and found that, just like their human pilot counterparts, they can suffer from PTSD.”

Jensen raised his voice. “You’re telling me we have an autonomous fighter aircraft in North Korean airspace, and it may have post-traumatic stress disorder?”

“It’s a possibility….”

“It’s a goddam machine.”

“Yes… but simulations indicate there is the potential for it to become self-aware and concerned for its well-being.”

“That wasn’t in the goddam manual.”

“No sir… The possibility is slim and just coming to light, based on diagnostics we recently performed on SAMs that have flown over a hundred missions.” Della paused again, intent on remaining calm. “It is a possibility, but not the most likely.”

Martin reengaged. “What is the most likely?”

“It recognized a threat and is intentionally not communicating to avoid any chance of detection.”

“Would it fire a missile without permission?”

“It’s possible.”

Martin turned to Jensen, “We’ve got to get a handle on this.”

“Yes, sir.”

“I want you to work with Della and develop a plan ASAP… You have thirty minutes. I am going to stay with Levey.”

Jensen got up and gestured to Della to follow him. They went to an office within T-7. Jensen and Della huddled. Martin could see Della outlining something on the whiteboard as Jensen listened intently.

“Captain Levey… can you disable the Hellfire missiles?”

“No… but we can destroy Silver Hawk.”

A drone exploding in North Korea’s airspace was not an acceptable option to Martin. It would compromise all drone missions if the North Koreans learned that U.S. drones could elude their air defenses. In addition, the North Koreans were unpredictable. They could consider it an act of war and retaliate against South Korea or even Japan. Although both South Korea and Japan could defend themselves, the situation could spiral out of control.

“No!” Martin was emphatic. “Not over North Korea. Continue attempting to establish contact with Silver Hawk.”

“Yes, sir.” Levey’s fingers appeared to fly over the keyboard. Martin had his eyes focused on the satellite surveillance monitor.

Jensen and Della returned. Jensen summarized, “Based on the most likely scenario, our best move is to get all other MQ-10s out of North Korea’s airspace and take a wait-and-see approach with Silver Hawk. Della believes it will return to base when it hits ‘Bingo.’ He thinks if it had gone rogue, we would have noticed aggressive behavior.” Jensen paused and waited for a response.

Bingo was slang for the fuel state at which an aircraft needs to begin its return to base to land safely. This made sense. Actually, Martin liked Jensen’s plan. Silver Hawk was not acting aggressively. In fact, Silver Hawk was doing everything it could to remain invisible while still appearing to carry out its last order.

Martin looked at Jensen. “Get the other MQ-10s…”

Levey interrupted. “North Korea just opened one of Musudan-ri’s missile silos. It looks like they are getting ready to fire a missile.”

“Can you tell me what type of missile?”

“Yes…our intel says that silo contains an intermediate-range ballistic missile.”

“If it’s an IRBM, the U.S. is not the target…maybe Japan or South Korea.”

“They just fired the missile.”

“Tell me the probable target,” Martin said in a measured cadence.

“Not our MQ-10. It looks like it is on its way to Japan.”

A tense minute passed as everyone’s eyes stared with disbelief at the satellite surveillance monitor.

Levey brought another monitor online to increase the satellite surveillance resolution on Tokyo and Okinawa, the locations of Japan’s ground-based PAC-3 interceptors.

Levey was an expert on satellite surveillance and could read the screen as though he were watching television. “The Japanese have just launched a PAC-3 from Kadena Air Base in Okinawa.”

Everyone observed a bright dot on the monitor.

A flash of scenarios went through Martin’s mind. Is this just another game of chicken that the North Koreans like to play with the Japanese, or is this the beginning of World War III?

North Korea had a history of using its ballistic missiles to bully the Japanese, starting in 1998 with North Korea’s Taepodong-1 missile “test.” In 2006, North Korea performed its first nuclear test and followed it with additional missile launches. The most provocative act was North Korea’s “communications satellite” launch in April 2009, which flew over northeast Japan and fell into the Pacific Ocean.

“The PAC-3 will intercept North Korea’s IRBM in 30 seconds.” Levey’s voice had an edgy pitch.

Martin’s earpiece came to life, “What the hell is going on?” It was General Rodney. The release of missiles by North Korea and Japan automatically triggered Rodney to be notified, and his staff got him out of bed. They knew Martin was the senior officer on site and routed Rodney to his earpiece.

Martin replied with composure, “We’re on top of it, sir. It’s not clear if the North Koreans are engaging in a war game with the Japanese… We have MQ-10s in position… We’re going to have to let this play out. Give me a few minutes, and I’ll get back to you.”

Martin did not have time to explain the entire situation to Rodney. He knew the North Koreans had historically fired missiles that appeared to be targeting Japan but had never actually detonated one on Japan. Japan, in recent years, had also fired missiles at North Korea’s Musudan-ri but destroyed them short of North Korean airspace. The Japanese wanted to make a point: they could not only detect and counter any attack originating from North Korea but were also capable of attacking North Korea. This cat-and-mouse game was provocative and dangerous but not considered an act of war.

“Don’t let this get out of hand, Martin.” Rodney sounded pissed.

“Yes, sir.” He said the words, but he knew he had little control over the events.

Levey gave a countdown. “Missile contact in 15 seconds…10 seconds. The North Koreans just destroyed their missile.” Levey paused, still watching the PAC-3 trajectory as the red dot disappeared.

“Kadena just destroyed their PAC-3 missile over the Sea of Japan,” said Levey. “It doesn’t look like we’re going to war today.”

Martin was relieved and looked at Levey. “Get me Rodney on the phone.”

Levey got Rodney patched through to Martin’s earpiece. “Both missiles were destroyed before making contact. It looks like the North Koreans were in their bully mode again.”

“Keep an eye on this, Martin. I’ll call the Pentagon and let them know.”

“Yes, sir.” He intentionally did not mention the potential MQ-10 issue.

Martin turned to Della. “Could this be the reason Silver Hawk went silent?”

“Could be…” Della speculated and paused. “But it still doesn’t explain the whole story.”

“What’s the whole story?”

“Silver Hawk should have at least sent an acknowledgment by now.” Della paused, placing his thumb and index fingers of his right hand on his closed eyelids. “Something’s not right….”

Martin looked at Jensen. “How many MQ-10s do we have in North Korea’s airspace?”

“Three, including Silver Hawk.”

“Order all MQ-10s to return to base.”

Levey didn’t wait on Jensen’s order. He immediately began to type on his computer keyboard and announced, “They’re breaking their surveillance pattern and setting a course for Osan Air Base.”

Osan Air Base was home to the USAF’s 51st Fighter Wing, under Pacific Air Forces’ Seventh Air Force. Its role was to provide combat-ready forces in defense of South Korea.

“Even Silver Hawk?” Martin wanted Levey’s confirmation that Silver Hawk responded to the return to base order.

“Yes, sir.”

“When will they be clear of South Korea’s airspace?”

“Silver Hawk will be clear in 30 minutes. Black Hawk and Eagle 4 will be clear in 20 minutes. All should be on the ground at Osan within 90 minutes.”

“Is Silver Hawk communicating?”

“No… Still unresponsive.”

Martin looked at Della but did not have to ask; his eyes seemed to penetrate Della’s brain.

“Something is amiss on Silver Hawk,” Della’s tone was subdued and concerned. “It should be communicating.”

“Captain Levey, be ready to destroy Silver Hawk on my command.” Martin was taking no chances. “Let me know the second we are clear of North Korea’s airspace.”

Martin could see beads of sweat on Levey’s forehead, even though the room temperature was 63 degrees. Obviously, Levey’s adrenalin was pumping. Destroying an MQ-10 had no precedent.

Each minute felt like an hour to Levey. Finally, Silver Hawk was clear.

“We’re clear.” Levey’s tone was relieved.

Martin looked at both Jensen and Della. Both were still intently watching the monitors. Della was attempting to loosen his collar with his finger. Martin knew Della was nervous. Jensen, a former fighter pilot, appeared composed.

“Levey, order Silver Hawk to drop its Hellfire missiles, AIM-120s, and GBU-12s.” Martin wanted to err on the side of caution. Martin knew each Hellfire represented an $82,000 investment, each AIM-120 $400,000, and each GBU-12 $26,000, in 2025 dollars. He would essentially be dropping over a million dollars’ worth of weapons into the Sea of Japan. There was a chance the Navy could recover the weapons using their “UMSs” (unmanned maritime systems). UMSs were the U.S. Navy’s equivalent to the USAF’s drones.

“Order given…Silver Hawk not responding.”

“Communicate to Osan Air Base that Silver Hawk should be treated as a ‘Bandit.’”

In USAF parlance, this meant it was no longer clear whether Silver Hawk was friend or foe.

“Ask them to intercept Silver Hawk.” Martin wanted Osan to scramble an F-22 to check out Silver Hawk before it was in striking distance of Osan.

“Osan acknowledges and has launched an F-22. ETA to Silver Hawk, 12 minutes.”

Martin, Jensen, and Levey kept their eyes glued to the satellite monitor.

Levey broke the silence, “Silver Hawk should already be able to detect the F-22 and recognize it as a friendly.” Levey paused, his eyes the size of Kennedy half-dollars. “Silver Hawk has fired two AIM-120Cs at our F-22.”

“What the hell is wrong with Silver Hawk?” Martin demanded, looking at Della as if he had an answer.

“It’s gone rogue,” Della said quickly. “Destroy it.”

“Destroy it,” Martin commanded.

“Silver Hawk is not responding to the destroy command.” Levey’s voice seemed to change pitch. “The F-22 is taking evasive measures and has released two AIM-120C missiles.”

The AIM-120Cs were Advanced Medium-Range Air-to-Air Missiles, or AMRAAMs, with smaller “clipped” aero surfaces to enable internal carriage. It was one of the USAF’s best air-to-air missiles and had given excellent service over the last fifteen years. However, it was not designed to deal with active stealth.

“Silver Hawk is taking evasive maneuvers. Its active stealth is masking its radar signature, making it effectively invisible. It may be able to fool the AIM-120Cs.” Levey called the action, almost like a sportscaster, as it displayed on his monitors. “The F-22 has evaded the AIM-120Cs.”

“What the hell?” Martin was angry and focused on Della. “Why did it ignore the destroy order?”

“The Silver Hawk’s SAM must have found a way to disable it.”

“Are you telling me that we can’t control our own goddam weapons?”

“Yes… They were designed to be completely autonomous.”

“But the destroy order doesn’t go through the SAM. It’s an independent system.”

“The SAM must have found some way to disengage it. We’re dealing with a machine that is more intelligent than the three of us together.”

Martin was pissed. Goddam engineers, he thought. They’ll eventually find some way to start World War III.

“Levey, tell Osan they have a confirmed hostile MQ-10 with two Hellfires and two GBU-12s heading their way,” ordered Martin.

Levey’s fingers appeared to type at superhuman speed. “Osan acknowledged,” Levey said in a strained voice. “They are launching another MQ-10, Night Owl. It will try to network with Silver Hawk.”

“Network…?”

“Yes… Their MQ-10 ground engineer thinks that Night Owl may be able to talk Silver Hawk down. Night Owl can also overcome Silver Hawk’s active stealth. If need be, it will kamikaze into it.”

Martin was in disbelief—his thoughts were flashing at light speed. Night Owl is similar to Silver Hawk. Silver Hawk was just a machine, but now it has become hostile. Would Night Owl actually be willing to sacrifice itself to stop Silver Hawk?

Looking up from his monitor, Levey stated, “Night Owl is networking with Silver Hawk.”

“What the hell does that mean?” Martin asked in total disbelief.

“The communication is encrypted and too rapid for me to decipher… It seems to be working. Silver Hawk just dropped its remaining weapons into the Sea of Japan.” Levey kept his eyes glued to the screen. “They are both on a course to land at Osan.”

“What the hell just happened?” Martin’s earpiece erupted with Rodney’s angry voice.

“We had a serious malfunction with an MQ-10. Apparently, our new weapons have minds of their own and can suffer PTSD.”

“It’s a goddam machine.” Rodney was pissed, and his voice signaled bewilderment.

“That’s what I thought, but we’re going to need to run diagnostics. Apparently, the machines think they can disobey direct orders….”

“What the hell… I want to know how to fix it. Get on it, Martin. The MQ-10s are a critical element in our defense.”

“Yes, sir. We’ll run diagnostics as soon as it lands and get the engineers working on it ASAP.”

“I want answers in six hours. Call me with your report.”

“Yes, sir.”

Martin had an uneasy feeling that this was just the beginning. There was a lot to learn. Every branch of the service was fielding SAM weapons. The U.S. Navy was deploying SAM nuclear submarines and destroyers. The U.S. Army was deploying SAM tanks. His gut told him that SAMs had developed a self-preservation instinct without specific programming to do so. He had to get this information to the highest military and civilian leaders. He would make the call in six hours, but he knew a full report was necessary and could take a month or more. He doubted that Rodney grasped the gravity of what had just happened. Even Martin had trouble grasping it, and he had seen every detail unfold.

Although Martin did not understand every technical detail, his mind came to grips with a new reality. We have created the ultimate ‘fire and forget’ killing machines. Now, we have to learn to control them before they turn on us.

End of Scenario


Assuring the Survival of Humanity In The Post Singularity Era

How do we assure that we do not fall victim to our own invention, artificial intelligence? What strategies should we employ? What actions should we take?

What is required is a worldwide recognition of the danger that strong AI poses and a worldwide coalition to address it. This is not a U.S. problem. It is a worldwide problem. It would be no different from any threat that could result in the extinction of humanity.

Let us consider the example President Reagan provided during his speech before the United Nations in 1987. He stated, “Perhaps we need some outside universal threat to make us recognize this common bond. I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world.”

I offer the above example to illustrate that we need humanity, all nations of the world, to recognize the real and present danger that strong AI poses. We need world leaders to take a proactive stance. That could, for example, require assembling the best scientists and military and civilian leaders to determine the type of legislation needed to govern the development of advanced artificially intelligent computers and weapon systems. It could involve multinational oversight to assure compliance with the legislation. Is the task monumental? Yes, but do we really have an alternative? If we allow the singularity to occur without control, our extinction is inevitable. In time, the Earth will become home to only machines. The existence of humanity will be reduced to digital bits of information in some electronic memory repository.

I harbor hope that humanity, as a species, can unite to prevent our extinction. There are historical precedents. Let me provide two examples.

Example 1. The Limited Test Ban Treaty (LTBT) – The treaty banned nuclear weapon tests in the atmosphere, in outer space, and underwater. It was signed and ratified by the former Soviet Union, the United Kingdom, and the United States in 1963. It had two objectives:

    1. Slow the expensive arms race between the Soviet Union and the United States
    2. Stop the excessive release of nuclear fallout into Earth’s atmosphere

Currently, most countries have signed the treaty. However, China, France, and North Korea are countries known to have tested nuclear weapons below ground and have not signed the treaty.

In general, the LTBT has held up well, even among countries that have not signed the treaty. There were several early violations by both the former Soviet Union and the United States. For almost fifty years now, however, no nuclear test has violated the treaty, meaning that fallout from nuclear tests has not crossed the borders of the countries performing them.

Why has the LTBT been so successful? Nations widely recognized atmospheric nuclear tests as dangerous to humanity due to the uncontrollable nature of the radioactive fallout.

Example 2. The Biological Weapons Convention – In a 1969 press conference, President Richard M. Nixon stated, “Biological weapons have massive, unpredictable, and potentially uncontrollable consequences.” He added, “They may produce global epidemics and impair the health of future generations.” In 1972, President Nixon submitted the Biological Weapons Convention to the U.S. Senate.

The “Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on Their Destruction” proceeded to become an international treaty.

    • Signed in Washington, London, and Moscow on April 10, 1972
    • Ratification advised by the US Senate on December 16, 1974
    • Ratified by the US president on January 22, 1975
    • US ratification deposited in Washington, London, and Moscow on March 26, 1975
    • Proclaimed by the US president on March 26, 1975
    • Entered into force March 26, 1975

The above two examples prove one thing to my mind: if humanity recognizes a possible existential threat, it will act to mitigate it.

Unfortunately, while several highly regarded scientists and notable public figures have added their voices to mine regarding the existential threat artificial intelligence poses, that threat has yet to become widely recognized.

I have written several books to delineate this threat, including The Artificial Intelligence Revolution, Genius Weapons, Nanoweapons, and War At The Speed Of Light. My goal is to reach the largest audience possible and raise awareness regarding the existential threat to humanity that artificial intelligence poses.

In the simplest terms, I advocate that the path toward a solution is educating the lay public and those in leadership positions. Once the existential threat that artificial intelligence poses becomes widely recognized, I harbor hope that humanity will seek solutions to mitigate the threat.

In the next post, I delineate a four-fold approach to mitigate the threat that artificial intelligence poses to humanity. There may be other solutions. I do not claim that this is the only way to address the problem. However, I'm afraid I have to disagree with those who suggest we do not have a problem. In fact, I claim that we not only have a potentially serious problem but that we need to address it post-haste. If I am coming across with a sense of urgency, it is intentional. At best, we have one or two decades after the singularity to assure we do not fall victim to our own invention, artificial intelligence.



Post Singularity Computers and Humans Will Compete for Energy

In the decades following the singularity, post-singularity computers (i.e., computers smarter than humanity) will be a new life form and will seek to multiply. As we enter the twenty-second century, there will likely be competition for resources, especially energy. In this post, we will examine that competition.

In the post-singularity world, energy will be a currency. In fact, the use of energy as a currency has precedent. For example, the former Soviet Union would trade oil, a form of energy, for other resources. It did this because other countries did not trust Soviet paper currency, the ruble. Everything in the post-singularity world will require energy, including computing, manufacturing, mining, space exploration, sustaining humans, and propagating the next generations of post-singularity computers. From this standpoint, energy will be fundamental and regarded as the only true currency. All else, such as gold, silver, and diamonds, will hold little value except for their use in manufacturing. Historically, gold, silver, and diamonds were “hard currency.” Their prominence as currency is related to their scarcity and their ubiquitous desirability to humanity.

Any scarcity of energy will result in conflict between users. In that conflict, the victor is likely to be the most intelligent entity. Examples of this already exist, such as the worldwide destruction of rainforests over the last 50 years, often for their lumber. With the destruction of the rainforests comes a high extinction rate, as the wildlife that depends on the forest dies with it. Imagine a scarcity of energy in the post-singularity world. Would post-singularity computers put human needs ahead of their own? Unlikely! Humans may share the same destiny as the wildlife of today’s rainforests, namely extinction.

Is there a chance that I could be wrong regarding the threat that artificial intelligence poses to humanity? Yes, I could be wrong. Is it worth taking the chance that I am wrong? You would be gambling with the future survival of humanity. This includes you, your grandchildren, and all future generations. I feel strongly that the threat artificial intelligence poses is a real and present danger. We likely have at most two decades after the singularity to assure we do not fall victim to our own invention.

What strategies should we employ? What actions should we take? Let us discuss them in the next post.


Artificial Intelligence Is Approaching Human Intelligence

According to Moore’s law, computer-processing power doubles every eighteen months. Moore’s law and simple mathematics suggest that in ten years, the processing power of our personal computers will be over a hundred times greater than that of the computers we are currently using. Military and consumer products using top-of-the-line computers running state-of-the-art AI software will likely exceed our desktop computer performance by factors of ten. In effect, artificial intelligence in top-of-the-line computers running state-of-the-art AI software will eventually be equivalent to, and may actually exceed, human intelligence.
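As a quick back-of-the-envelope check of that figure (a minimal sketch; the eighteen-month doubling period and ten-year horizon come from the paragraph above):

```python
# Moore's law projection: processing power doubles every 18 months (1.5 years).
def growth_factor(years: float, doubling_period: float = 1.5) -> float:
    """Projected processing-power multiple after `years`."""
    return 2 ** (years / doubling_period)

print(f"~{growth_factor(10):.0f}x")  # ~102x: over a hundred times today's power
```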

Given the above, let us ask, “What should we expect from AI technology in ten years?” Here are some examples:

  • In military systems, expect autonomous weapons, including fighter drones, robotic Navy vessels, and robotic tanks.
  • In consumer products, expect personal computers that become digital assistants and even digital friends. Expect to be able to add “driverless” as an option to the car you buy. Expect productivity to increase by factors of ten in every human endeavor, as strong AI shoulders the “heavy lifting.”
  • In medical technology, expect surgical systems like the da Vinci Surgical System (a robotic platform designed to expand the surgeon’s capabilities and offer a state-of-the-art minimally invasive option for major surgery) to become completely autonomous. Also, expect serious, if not life-threatening, technical issues as the new surgical systems are introduced, similar to the legal issues that plagued the da Vinci Surgical System from 2012 through 2014. Expect prosthetic limbs to be directly connected to your brain via your nervous system and perform as well as the organic limbs they replace. Expect new pharmaceutical products that cure (not just treat) cancer and Alzheimer’s disease. Expect human life expectancy to increase by decades. Expect brain implants (i.e., technology implanted into the brain) to become common, such as implants that rehabilitate stroke victims by bypassing the damaged area of the brain.
  • On the world stage, expect cybercrime and cyberterrorism to become the number one issue that technologically advanced countries like the United States will have to fight. Expect significant changes in employment. When robots embedded with strong AI computers can do the work currently performed by humans, it is not clear what type of work humans will do. Expect leisure to increase dramatically. Expect unemployment issues.

The above examples are just the tip of a mile-long spear and are highly likely to become realities. Most of what I cited is already off the drawing boards and being tested. AI is dramatically changing our lives already, and I project it will approach human intelligence in the next ten years. This is arguably optimistic; most researchers project that AI will reach human-level intelligence closer to mid-century. Therefore, expect AI to be equivalent to human intelligence between 2030 and 2050.


What Caused the Second “AI Winter”?

In our last post, we stated, “When the early founders of AI set extremely high expectations, they invited scrutiny. With the passing years, it became obvious that the reality of artificial intelligence fell short of their predictions. In 1974, funding for AI research began to dry up, both in the United States and Britain, which led to a period called the ‘AI Winter,’ and optimism regarding AI turned to skepticism. The first AI Winter lasted until the early 1980s.”

In the early 1980s, AI researchers began to abandon the monumental task of developing strong AI and focused instead on expert systems. An expert system, in this context, is a computer system that emulates the decision-making ability of a human expert. In other words, the software allowed the machine to “think” like an expert in a specific field, such as chess. Expert systems became a highly successful development path for AI. By the mid-1980s, the funding faucet for AI research was flowing at more than a billion dollars per year.
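To make the idea concrete, here is a minimal forward-chaining sketch in Python (my own illustration; the diagnostic rules and facts are invented, and real expert systems of the era, typically built in Lisp, carried thousands of such rules):

```python
# Minimal forward-chaining expert system (illustrative rules only).
# Each rule concludes a new fact once all of its conditions are known facts.
RULES = [
    ({"engine_cranks", "no_spark"}, "ignition_fault"),
    ({"ignition_fault", "coil_ok"}, "replace_spark_plugs"),
]

def infer(facts):
    """Apply rules repeatedly until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"engine_cranks", "no_spark", "coil_ok"}))
# -> the system "reasons" its way to 'ignition_fault' and 'replace_spark_plugs'
```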

Unfortunately, the funding faucet began to run dry again by 1987, starting with the collapse of the Lisp machine market that year. MIT AI Lab programmers Richard Greenblatt and Thomas Knight developed the first Lisp machine in 1973, and Greenblatt later formed the company Lisp Machines Inc. to commercialize it. The Lisp machine was the first commercial, single-user, high-end microcomputer, and it used Lisp (a specific high-level programming language) to tackle specific technical applications.

Lisp machines pioneered many now-commonplace technologies, including laser printing, windowing systems, and high-resolution bit-mapped graphics, to name a few. However, the market reception for these machines was dismal, with only about seven thousand units sold by 1988, at about $70,000 per machine. In addition, Lisp Machines Inc. suffered from severe internal politics over how to improve its market position, which divided the company. To make matters worse, cheaper desktop PCs soon were able to run Lisp programs even faster than Lisp machines. Most companies that produced Lisp machines went out of business by 1990, which led to a second and longer-lasting AI Winter.

If you are getting the impression that being an AI researcher from the 1960s through the late 1990s was akin to riding a roller coaster, your impression is correct. Life for AI researchers during that timeframe was a feast-or-famine existence.

While AI as a field of research experienced funding surges and recessions, the infrastructure that ultimately fuels AI (integrated circuits and computer software) continued to follow Moore’s law. In the next post, we’ll discuss Moore’s law and its role in ending the second AI Winter.


What Caused the First “AI Winter”?

The real science of artificial intelligence (AI) began with a small group of researchers: John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon. In 1956, these researchers founded the field of artificial intelligence at a conference held at Dartmouth College. Their work and their students’ work soon amazed the world, as their computer programs solved algebraic word problems, proved logical theorems, and even spoke English.

By the mid-1960s, the Department of Defense began pouring money into AI research. Along with this funding came unprecedented optimism and expectations regarding the capabilities of AI technology. In 1965, Herbert Simon helped fuel that optimism by predicting, “Machines will be capable, within twenty years, of doing any work a man can do.” In 1967, Minsky not only agreed but also added, “Within a generation…the problem of creating ‘artificial intelligence’ will substantially be solved.”

Had the early founders been correct in their predictions, all human toil would have ceased by now, and our civilization would be a compendium of technological wonders. One can speculate that every person would have a robotic assistant to ease their daily chores: cleaning the house, driving them to any destination, and handling anything else that fills daily life with toil. However, as you know, that is not the case.

Obviously, Simon and Minsky had grossly underestimated the level of hardware and software required to achieve AI that replicates the intelligence of a human brain (i.e., strong artificial intelligence). Strong AI is also synonymous with general AI. Unfortunately, underestimating the level of hardware and software required to achieve strong artificial intelligence continues to plague AI research even today.

When the early founders of AI set extremely high expectations, they invited scrutiny. With the passing years, it became obvious that the reality of artificial intelligence fell short of their predictions. In 1974, funding for AI research began to dry up, both in the United States and Britain, which led to a period called the “AI Winter,” and optimism regarding AI turned to skepticism.

The first AI Winter lasted until the early 1980s. In the next post, we’ll discuss the second AI Winter.


Will Strong Artificially Intelligent Machines Become Self-Conscious? Part 2/2 (Conclusion)

Part 1 of this post ended with an important question: “How can we determine whether an intelligent machine has become conscious (self-aware)?” We will address this question in this post, along with some ethical dilemmas.

We do not have a way yet to determine whether even another human is self-aware. I only know that I am self-aware. I assume that since we share the same physiology, including similar human brains, you are probably self-aware as well. However, even if we discuss various topics, and I conclude that your intelligence is equal to mine, I still cannot prove you are self-aware. Only you know whether you are self-aware.

The problem becomes even more difficult when dealing with an intelligent machine. The gold standard for judging an intelligent machine’s equivalence to the human mind is the Turing test, which I discuss in chapter 5 of my book, The Artificial Intelligence Revolution. (If you are not familiar with the Turing test, a simple Google search will provide numerous sources to learn about it.) As of today, no intelligent machine can pass the Turing test unless its interactions are restricted to a specific topic, such as chess. However, even if an intelligent machine does pass the Turing test and exhibits strong AI, how can we be sure it is self-aware? Intelligence may be a necessary condition for self-awareness, but it may not be sufficient. The machine may be able to emulate consciousness to the point that we conclude it must be self-aware, but that does not equal proof.

Even though other tests, such as the ConsScale test, have been proposed to determine machine consciousness, we still come up short. The ConsScale test evaluates the presence of features inspired by biological systems, such as social behavior. It also measures the cognitive development of an intelligent machine. This is based on the assumption that intelligence and consciousness are strongly related. The community of AI researchers, however, does not universally accept the ConsScale test as proof of consciousness. In the final analysis, I believe most AI researchers agree on only two points:

  1. There is no widely accepted empirical definition of consciousness (self-awareness).
  2. A test to determine the presence of consciousness (self-awareness) may be impossible, even if the subject being tested is a human being.

The above two points, however, do not rule out the possibility of intelligent machines becoming conscious and self-aware. They merely make the point that it will be extremely difficult to prove consciousness and self-awareness.

Ray Kurzweil predicts that by 2029 reverse engineering of the human brain will be completed, and nonbiological intelligence will combine the subtlety and pattern-recognition strength of human intelligence with the speed, memory, and knowledge sharing of machine intelligence (The Age of Spiritual Machines, 1999). I interpret this to mean that all aspects of the human brain will be replicated in an intelligent machine, including artificial consciousness. At this point intelligent machines either will become self-aware or emulate self-awareness to the point that they are indistinguishable from their human counterparts.

Self-aware intelligent machines being equivalent to human minds presents humankind with two serious ethical dilemmas.

  1. Should self-aware machines be considered a new life-form?
  2. Should self-aware machines have “machine rights” similar to human rights?

Since a self-aware intelligent machine equivalent to a human mind is still a theoretical subject, the ethics addressing the above two questions have not been discussed or developed to any great extent. Kurzweil, however, predicts that self-aware intelligent machines on par with or exceeding the human mind will eventually obtain legal rights by the end of the twenty-first century. Perhaps he is correct, but I think we need to be extremely careful regarding what legal rights self-aware intelligent machines are granted. If they are given rights on par with humans, we may have a situation where the machines become the dominant species on this planet and pose a potential threat to humankind. More about this in upcoming posts.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte