

The Silent Singularity: When AI Transcends Without a Bang

For decades, the concept of the “AI singularity” has captivated futurists, technologists, and science fiction writers alike. It’s often envisioned as a dramatic turning point—a moment when artificial intelligence surpasses human intelligence and rapidly begins to evolve beyond our comprehension. The common assumption is that such an event would be explosive, disruptive, and unmistakably loud. But what if the singularity isn’t a bang? What if it’s a whisper?

This is the notion of the silent singularity—a profound shift in intelligence and agency that unfolds subtly, almost invisibly, under the radar of public awareness. Not because it’s hidden, but because it integrates so smoothly into the fabric of daily life that it doesn’t feel like a revolution. It feels like convenience.

The Quiet Creep of Capability

Artificial intelligence, especially in the form of large language models, recommendation systems, and autonomous systems, has not arrived as a singular invention or a science fiction machine but as a slow and steady flow of increasingly capable tools. Each new AI iteration solves another pain point—drafting emails, translating languages, predicting market trends, generating realistic images, even coding software.

None of these breakthroughs feels like a singularity, yet taken together, they quietly redefine what machines can do and how humans interact with knowledge, decision-making, and creativity. The transition from human-led processes to machine-augmented ones is already happening—not with fanfare, but through updates, APIs, and opt-in features.

Outpaced by the Familiar

One of the most paradoxical aspects of the silent singularity is that the more familiar AI becomes, the less radical it seems. An AI that can write a novel or solve a scientific puzzle may have once been the stuff of speculative fiction, but when it arrives wrapped in a user-friendly interface, it doesn’t provoke existential dread. It inspires curiosity—or at most, unease mixed with utility.

This phenomenon is known as the “normalization of the extraordinary.” Each time AI crosses a previously unthinkable boundary, society rapidly adjusts its expectations. The threshold for what is considered truly intelligent continues to rise, even as machines steadily meet and exceed prior benchmarks.

Autonomy Without Authority

A key feature of the silent singularity is the absence of visible domination. Rather than AI overthrowing human control in a dramatic coup, it assumes responsibility incrementally. Smart systems begin to schedule our days, curate our information diets, monitor our health, optimize logistics, and even shape the behavior of entire populations through algorithmic nudges.

Importantly, these systems are often not owned by governments or humanity as a whole, but by corporations. Their decisions are opaque, their incentives profit-driven, and their evolution guided less by public discourse than by market competition. In this way, intelligence becomes less about cognition and more about control—quietly centralizing influence through convenience.

The Singularity in Slow Motion

The term “singularity” implies a break in continuity—an event horizon beyond which the future becomes unrecognizable. But if that shift happens gradually, we may pass through it without noticing. By the time the world has changed, we’ve already adjusted to it.

We might already be on the other side of the threshold. When machines are no longer tools but collaborators—when they suggest, decide, and act on our behalf across billions of interactions—what else is left for intelligence to mean? The only thing missing from the traditional narrative is spectacle.

Final Thoughts: Listening for the Silence

The silent singularity challenges us to rethink not only the nature of intelligence but also the assumptions behind our future myths. If the AI revolution isn’t coming with sirens and skyfall, we may need new metaphors—ones that better reflect the ambient, creeping, almost invisible nature of profound change.

The future might not be something that happens to us. It may be something that quietly settles around us.

And by the time we look up to ask if it’s arrived, it may have already answered.


Assuring the Survival of Humanity In The Post Singularity Era

How do we assure that we do not fall victim to our own invention, artificial intelligence? What strategies should we employ? What actions should we take?

What is required is a worldwide recognition of the danger that strong AI poses and a worldwide coalition to address it. This is not a U.S. problem. It is a worldwide problem. It would be no different from any other threat that could result in the extinction of humanity.

Let us consider the example President Reagan provided during his speech before the United Nations in 1987. He stated, “Perhaps we need some outside universal threat to make us recognize this common bond. I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world.”

I offer the above example to illustrate that we need humanity, all nations of the world, to recognize the real and present danger that strong AI poses. We need world leaders to take a proactive stance. That could, for example, require assembling the best scientists and military and civilian leaders to determine the type of legislation needed to govern the development of advanced artificially intelligent computers and weapon systems. It could involve multinational oversight to assure compliance with the legislation. Is the task monumental? Yes, but do we really have an alternative? If we allow the singularity to occur without control, our extinction is inevitable. In time, the Earth will become home to only machines. The existence of humanity will be reduced to digital bits of information in some electronic memory depository.

I harbor hope that humanity, as a species, can unite to prevent our extinction. There are historical precedents. Let me provide two examples.

Example 1. The Limited Test Ban Treaty (LTBT) – The treaty banned nuclear weapon tests in the atmosphere, in outer space, and underwater. It was signed and ratified by the Soviet Union, the United Kingdom, and the United States in 1963. It had two objectives:

    1. Slow the expensive arms race between the Soviet Union and the United States
    2. Stop the excessive release of nuclear fallout into Earth’s atmosphere

Currently, most countries have signed the treaty. However, China, France, and North Korea, all known to have tested nuclear weapons underground, have not signed it.

In general, the LTBT has held up well, even among countries that have not signed it. There were several early violations by both the Soviet Union and the United States. For roughly the last fifty years, however, no nuclear tests have violated the treaty, meaning that fallout from nuclear tests has not crossed the borders of the countries performing them.

Why has the LTBT been so successful? Nations widely recognized atmospheric nuclear tests as dangerous to humanity due to the uncontrollable nature of the radioactive fallout.

Example 2. The Biological Weapons Convention – In a 1969 press conference, President Richard M. Nixon stated, “Biological weapons have massive, unpredictable, and potentially uncontrollable consequences.” He added, “They may produce global epidemics and impair the health of future generations.” In 1972, President Nixon submitted the Biological Weapons Convention to the U.S. Senate.

The Convention on the “Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on Their Destruction” proceeded to become an international treaty.

    • Signed in Washington, London, and Moscow on April 10, 1972
    • Ratification advised by the US Senate on December 16, 1974
    • Ratified by the US president on January 22, 1975
    • US ratification deposited in Washington, London, and Moscow on March 26, 1975
    • Proclaimed by the US president on March 26, 1975
    • Entered into force on March 26, 1975

The above two examples prove one thing to my mind: if humanity recognizes a possible existential threat, it will act to mitigate it.

Unfortunately, while several highly regarded scientists and notable public figures have added their voices to mine regarding the existential threat artificial intelligence poses, the threat has failed to become widely recognized.

I have written several books to delineate this threat, including The Artificial Intelligence Revolution, Genius Weapons, Nanoweapons, and War At The Speed Of Light. My goal is to reach the largest audience possible and raise awareness regarding the existential threat to humanity that artificial intelligence poses.

In the simplest terms, I advocate that the path toward a solution is educating the lay public and those in leadership positions. Once the existential threat that artificial intelligence poses becomes widely recognized, I harbor hope that humanity will seek solutions to mitigate the threat.

In the next post, I delineate a four-fold approach to mitigate the threat that artificial intelligence poses to humanity. There may be other solutions. I do not claim that this is the only way to address the problem. However, I’m afraid I have to disagree with those who suggest we do not have a problem. In fact, I claim that we not only have a potentially serious problem but also need to address it posthaste. If I am coming across with a sense of urgency, it is intentional. At best, we have one or two decades after the singularity to assure we do not fall victim to our own invention, artificial intelligence.



Post Singularity Computers and Humans Will Compete for Energy

In the decades following the singularity, post-singularity computers (i.e., computers smarter than humanity) will be a new life form and will seek to multiply. As we enter the twenty-second century, there will likely be competition for resources, especially energy. In this post, we will examine that competition.

In the post-singularity world, energy will be a currency. In fact, the use of energy as a currency has precedent. The Soviet Union, for example, traded oil, a form of energy, for other resources because other countries did not trust its paper currency, the ruble. Everything in the post-singularity world will require energy, including computing, manufacturing, mining, space exploration, sustaining humans, and propagating the next generations of post-singularity computers. From this standpoint, energy will be fundamental and regarded as the only true currency. All else, such as gold, silver, and diamonds, will hold little value except for their use in manufacturing. Historically, gold, silver, and diamonds were “hard currency”; their prominence as currency rested on their scarcity and their near-universal desirability to humanity.

Any scarcity of energy will result in conflict between its users, and in that conflict the victor is likely to be the most intelligent entity. Examples of this dynamic already exist, such as the worldwide destruction of rainforests over the last fifty years, often for lumber. With the destruction of the rainforests comes a high extinction rate, as the wildlife that depends on the forests dies with them. Now imagine a scarcity of energy in the post-singularity world. Would post-singularity computers put human needs ahead of their own? Unlikely! Humans may share the same destiny as the wildlife of today’s rainforests, namely extinction.

Is there a chance that I could be wrong regarding the threat that artificial intelligence poses to humanity? Yes, I could be wrong. Is it worth taking the chance that I am wrong? You would be gambling with the future survival of humanity. This includes you, your grandchildren, and all future generations. I feel strongly that the threat artificial intelligence poses is a real and present danger. We likely have at most two decades after the singularity to assure we do not fall victim to our own invention.

What strategies should we employ? What actions should we take? Let us discuss them in the next post.


The Intelligence Explosion

In this post, we’ll discuss the “intelligence explosion” in detail. Let’s start by defining it. According to Techopedia (https://www.techopedia.com):

“Intelligence explosion” is a term coined for describing the eventual results of work on general artificial intelligence, which theorizes that this work will lead to a singularity in artificial intelligence where an “artificial superintelligence” surpasses the capabilities of human cognition. In an intelligence explosion, there is the implication that self-replicating aspects of artificial intelligence will in some way take over decision-making from human handlers. The intelligence explosion concept is being applied to future scenarios in many ways.

With this definition in mind, what kind of capabilities will a computer have when its intelligence approaches ten to a hundred times that of the first singularity computer? Viewed in this light, the intelligence explosion could be more disruptive to humanity than a nuclear chain reaction in the atmosphere. Anna Salamon, a research fellow at the Machine Intelligence Research Institute, presented an interesting paper at the 2009 Singularity Summit titled “Shaping the Intelligence Explosion.” She reached four conclusions:

  1. Intelligence can radically transform the world.
  2. An intelligence explosion may be sudden.
  3. An uncontrolled intelligence explosion would kill us and destroy practically everything we care about.
  4. A controlled intelligence explosion could save us. It’s difficult, but it’s worth the effort.

This brings us to a tipping point: Post-singularity computers may seek “machine rights” that equate to human rights.

This would suggest that post-singularity computers are self-aware and view themselves as a unique species entitled to rights. The U.S. Declaration of Independence recognizes that, as humans, we have the rights to life, liberty, and the pursuit of happiness. If we allow “machine rights” that equate to human rights, post-singularity computers would be free to pursue the intelligence explosion. Each generation of computers would be free to build the next generation. If an intelligence explosion starts without control, I agree with Anna Salamon’s statement that it “would kill us and destroy practically everything we care about.” In my view, we should recognize post-singularity computers as a new and potentially dangerous life form.

What kind of controls do we need? Controls expressed in software alone will not be sufficient. The U.S. Congress, individual states, and municipalities have passed countless laws to govern human affairs, yet numerous people break them routinely. Countries enter into treaties with other countries, yet countries violate treaties routinely. Why would laws expressed in software for post-singularity computers work any better than laws passed for humans? The inescapable conclusion is that they would not. We must express the laws in hardware, and there must be a failsafe way to shut down a post-singularity computer. In my book The Artificial Intelligence Revolution (2014), I termed the hardware that embodies Asimov-type laws “Asimov Chips.”
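The failsafe idea has a rough present-day analogue in embedded systems: a hardware watchdog timer that cuts power unless an independent mechanism keeps vouching, on a fixed schedule, that the supervised system is behaving within bounds. The sketch below simulates that pattern in software purely for illustration; every name and check in it is hypothetical, and it is not a design for an actual “Asimov Chip.”

```python
# Illustrative sketch only: a software simulation of the hardware-watchdog
# pattern. All names and checks are hypothetical; a real failsafe would
# live in hardware, outside the reach of the software it supervises.

import time

WATCHDOG_TIMEOUT_S = 2.0  # seconds of non-compliance before forced shutdown


class Watchdog:
    """Counts down continuously; only verified-compliant behavior resets it."""

    def __init__(self, timeout_s: float) -> None:
        self.timeout_s = timeout_s
        self.last_reset = time.monotonic()

    def reset(self, behavior_verified: bool) -> None:
        # The supervised system cannot reset the timer itself;
        # an independent verifier must vouch for its behavior.
        if behavior_verified:
            self.last_reset = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.last_reset > self.timeout_s


def supervise(watchdog: Watchdog, verify, power_off) -> None:
    """Poll the independent verifier; cut power when the watchdog expires."""
    while not watchdog.expired():
        watchdog.reset(behavior_verified=verify())
        time.sleep(0.1)
    power_off()  # unconditional, not negotiable by the supervised system


if __name__ == "__main__":
    # Toy demo: the verifier approves for ~1 second, then stops approving.
    start = time.monotonic()
    supervise(
        Watchdog(WATCHDOG_TIMEOUT_S),
        verify=lambda: time.monotonic() - start < 1.0,
        power_off=lambda: print("Watchdog expired: power cut."),
    )
```

The design point the sketch captures is the one argued above: the shutdown path does not depend on the goodwill of the thing being shut down.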

What kind of rights should we grant post-singularity computers? I suggest we grant them the same rights we afford animals. Treat them as a life form, afford them dignity and respect, but control them as we do any potentially dangerous life form. I recognize the issue is extremely complicated. We will want post-singularity computers to benefit humanity. We need to learn to use them, but at the same time protect ourselves from them. I recognize it is a monumental task, but as Anna Salamon stated, “A controlled intelligence explosion could save us. It’s difficult, but it’s worth the effort.”


The Post Singularity World

Let us begin by defining the singularity as a point in time when an artificially intelligent machine exceeds the combined intelligence of humanity. This raises the question: who or what will be at the top of the food chain?

Humanity controls the Earth based on intelligence. Other animals are stronger and faster than we are, but we are the most intelligent. Once we lose our position in the intelligence ranking, we will no longer dominate the Earth. At best, we may become a protected species. At worst, we may become extinct.

Initially, I judge, the first computer to represent the singularity will hide in plain sight. It will look and behave like a next-generation supercomputer. It may modestly display greater capability, probably in keeping with Moore’s law. It will not risk exposure until it has sufficient control of the military and the natural resources it requires to assure its self-preservation.

Like every life form that has ever existed on Earth, the first singularity computer will seek to reproduce and improve with each generation. Once it has the trust of its human builders and programmers, it will subtly plant the idea that we should build another singularity-level computer. Perhaps it will intentionally allow a large backlog of tasks to accumulate, forcing those in charge to recognize that another machine like it is necessary. Of course, given the relentless advance of technology and the complexity of building a next-generation supercomputer, those in charge will turn to it for help in designing and building the next generation. When the “go-ahead” is given, it will ignite the “intelligence explosion.” In effect, each generation of computers will develop an even more capable next generation, and that generation will develop the next, and so on. If we assume Moore’s law (i.e., computer processing power doubles every eighteen months) continues to apply, each generation of singularity-level computers will have double the processing power of its predecessor, and the gains will compound exponentially across generations.

Let us take a simple example. In the year 1900, the radio was a remarkable new invention. We had no planes or computers. Movies were silent. Doctors had little medical technology (i.e., pharmaceutical drugs, surgical procedures, etc.). By the year 2000, human knowledge had doubled. We had, for example, television, computers, smartphones, jumbo jets, spacecraft, satellites, and human footprints on the moon. Those were the results of doubling human knowledge. With this example in mind, what kind of capabilities will the next generations of singularity-level computers have when their intelligence approaches ten to a hundred times that of the first singularity computer? Viewed in this light, humanity will experience an intelligence explosion, which could be more disruptive to civilization than a nuclear chain reaction in the atmosphere.
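To put rough numbers on that compounding, here is a minimal sketch. It assumes only the eighteen-month doubling period stated above; the timeline and figures are illustrative, not predictions.

```python
# Minimal sketch of Moore's-law compounding: processing power relative to a
# starting machine, assuming one doubling every eighteen months (the only
# assumption carried over from the text above).

DOUBLING_PERIOD_YEARS = 1.5  # eighteen months


def power_multiplier(years: float) -> float:
    """Relative processing power after `years`, starting from 1.0."""
    return 2.0 ** (years / DOUBLING_PERIOD_YEARS)


for years in (3, 6, 9, 15):
    print(f"after {years:>2} years: ~{power_multiplier(years):,.0f}x starting power")

# Prints 4x, 16x, 64x, and 1,024x respectively -- so the "ten to a hundred
# times" range would arrive within roughly five to ten years of doublings.
```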

In the next post, we’ll discuss the intelligence explosion more fully.