

The Silent Singularity: When AI Transcends Without a Bang

For decades, the concept of the “AI singularity” has captivated futurists, technologists, and science fiction writers alike. It’s often envisioned as a dramatic turning point—a moment when artificial intelligence surpasses human intelligence and rapidly begins to evolve beyond our comprehension. The common assumption is that such an event would be explosive, disruptive, and unmistakably loud. But what if the singularity isn’t a bang? What if it’s a whisper?

This is the notion of the silent singularity—a profound shift in intelligence and agency that unfolds subtly, almost invisibly, under the radar of public awareness. Not because it’s hidden, but because it integrates so smoothly into the fabric of daily life that it doesn’t feel like a revolution. It feels like convenience.

The Quiet Creep of Capability

Artificial intelligence, especially in the form of large language models, recommendation systems, and autonomous systems, has not arrived as a singular invention or a science fiction machine but as a slow and steady flow of increasingly capable tools. Each new AI iteration solves another pain point—drafting emails, translating languages, predicting market trends, generating realistic images, even coding software.

None of these breakthroughs feels like a singularity, yet taken together, they quietly redefine what machines can do and how humans interact with knowledge, decision-making, and creativity. The transition from human-led processes to machine-augmented ones is already happening—not with fanfare, but through updates, APIs, and opt-in features.

Outpaced by the Familiar

One of the most paradoxical aspects of the silent singularity is that the more familiar AI becomes, the less radical it seems. An AI that can write a novel or solve a scientific puzzle may have once been the stuff of speculative fiction, but when it arrives wrapped in a user-friendly interface, it doesn’t provoke existential dread. It inspires curiosity—or at most, unease mixed with utility.

This phenomenon is known as the “normalization of the extraordinary.” Each time AI crosses a previously unthinkable boundary, society rapidly adjusts its expectations. The threshold for what is considered truly intelligent continues to rise, even as machines steadily meet and exceed prior benchmarks.

Autonomy Without Authority

A key feature of the silent singularity is the absence of visible domination. Rather than AI overthrowing human control in a dramatic coup, it assumes responsibility incrementally. Smart systems begin to schedule our days, curate our information diets, monitor our health, optimize logistics, and even shape the behavior of entire populations through algorithmic nudges.

Importantly, these systems are often not owned by governments or humanity as a whole, but by corporations. Their decisions are opaque, their incentives profit-driven, and their evolution guided less by public discourse than by market competition. In this way, intelligence becomes less about cognition and more about control—quietly centralizing influence through convenience.

The Singularity in Slow Motion

The term “singularity” implies a break in continuity—an event horizon beyond which the future becomes unrecognizable. But if that shift happens gradually, we may pass through it without noticing. By the time the world has changed, we’ve already adjusted to it.

We might already be on the other side of the threshold. When machines are no longer tools but collaborators—when they suggest, decide, and act on our behalf across billions of interactions—what else is left for intelligence to mean? The only thing missing from the traditional narrative is spectacle.

Final Thoughts: Listening for the Silence

The silent singularity challenges us to rethink not only the nature of intelligence but also the assumptions behind our future myths. If the AI revolution isn’t coming with sirens and skyfall, we may need new metaphors—ones that better reflect the ambient, creeping, almost invisible nature of profound change.

The future might not be something that happens to us. It may be something that quietly settles around us.

And by the time we look up to ask if it’s arrived, it may have already answered.


Assuring the Survival of Humanity In The Post Singularity Era

How do we assure that we do not fall victim to our own invention, artificial intelligence? What strategies should we employ? What actions should we take?

What is required is a worldwide recognition of the danger that strong AI poses and a worldwide coalition to address it. This is not a U.S. problem. It is a worldwide problem. It would be no different from any threat that could result in the extinction of humanity.

Let us consider the example President Reagan provided during his speech before the United Nations in 1987. He stated, “Perhaps we need some outside universal threat to make us recognize this common bond. I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world.”

I offer the above example to illustrate that we need humanity, all nations of the world, to recognize the real and present danger that strong AI poses. We need world leaders to take a proactive stance. That could, for example, require assembling the best scientists, along with military and civilian leaders, to determine the type of legislation needed to govern the development of advanced artificially intelligent computers and weapon systems. It could involve multinational oversight to assure compliance with the legislation. Is the task monumental? Yes, but do we really have an alternative? If we allow the singularity to occur without control, our extinction is inevitable. In time, the Earth will become home to only machines. The existence of humanity will be reduced to digital bits of information in some electronic memory repository.

I harbor hope that humanity, as a species, can unite to prevent our extinction. There are historical precedents. Let me provide two examples.

Example 1. The Limited Test Ban Treaty (LTBT) – The treaty banned nuclear weapon tests in the atmosphere, in outer space, and underwater. It was signed and ratified by the former Soviet Union, the United Kingdom, and the United States in 1963. It had two objectives:

    1. Slow the expensive arms race between the Soviet Union and the United States
    2. Stop the excessive release of nuclear fallout into Earth’s atmosphere

Currently, most countries have signed the treaty. However, China, France, and North Korea, all of which have tested nuclear weapons underground, have not signed it.

In general, the LTBT has held up well, even among countries that have not signed it. There were several early violations by both the former Soviet Union and the United States, but for roughly the last fifty years no nuclear test has violated the treaty. This means that fallout from nuclear tests has not crossed the borders of the countries performing them.

Why has the LTBT been so successful? Nations widely recognized atmospheric nuclear tests as dangerous to humanity due to the uncontrollable nature of the radioactive fallout.

Example 2. The Biological Weapons Convention – In a 1969 press conference, President Richard M. Nixon stated, “Biological weapons have massive, unpredictable, and potentially uncontrollable consequences.” He added, “They may produce global epidemics and impair the health of future generations.” In 1972, President Nixon submitted the Biological Weapons Convention to the U.S. Senate.

The “Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on Their Destruction” proceeded to become an international treaty.

    • Signed in Washington, London, and Moscow on April 10, 1972
    • Ratification advised by the US Senate on December 16, 1974
    • Ratified by the US president January 22, 1975
    • US ratification deposited in Washington, London, and Moscow on March 26, 1975
    • Proclaimed by the US president March 26, 1975
    • Entered into force March 26, 1975

The above two examples prove one thing to my mind. If humanity recognizes a possible existential threat, it will act to mitigate it.

Unfortunately, while several highly regarded scientists and notable public figures have added their voices to mine regarding the existential threat artificial intelligence poses, that threat has yet to become widely recognized.

I have written several books to delineate this threat, including The Artificial Intelligence Revolution, Genius Weapons, Nanoweapons, and War At The Speed Of Light. My goal is to reach the largest audience possible and raise awareness of the existential threat that artificial intelligence poses to humanity.

In the simplest terms, I advocate that the path toward a solution is educating the lay public and those in leadership positions. Once the existential threat that artificial intelligence poses becomes widely recognized, I harbor hope that humanity will seek solutions to mitigate the threat.

In the next post, I delineate a four-fold approach to mitigating the threat that artificial intelligence poses to humanity. There may be other solutions. I do not claim that this is the only way to address the problem. However, I have to disagree with those who suggest we do not have a problem. In fact, I claim not only that we have a potentially serious problem, but also that we need to address it posthaste. If I am coming across with a sense of urgency, it is intentional. At best, we have one or two decades after the singularity to assure we do not fall victim to our own invention, artificial intelligence.



Post Singularity Computers and Humans Will Compete for Energy

In the decades following the singularity, post-singularity computers (i.e., computers smarter than humanity) will be a new life form and will seek to multiply. As we enter the twenty-second century, there will likely be competition for resources, especially energy. In this post, we will examine that competition.

In the post-singularity world, energy will be a currency. In fact, the use of energy as a currency has precedent. For example, the former Soviet Union would trade oil, a form of energy, for other resources because other countries did not trust Soviet paper currency, the ruble. Everything in the post-singularity world will require energy, including computing, manufacturing, mining, space exploration, sustaining humans, and propagating the next generations of post-singularity computers. From this standpoint, energy will be fundamental and regarded as the only true currency. All else, such as gold, silver, and diamonds, will hold little value except for their use in manufacturing. Historically, gold, silver, and diamonds were a “hard currency.” Their prominence as currency is related to their scarcity and their near-universal desirability to humanity.

Any scarcity of energy will result in conflict between users. In that conflict, the victor is likely to be the most intelligent entity. Examples of this already exist, such as the worldwide destruction of rainforests over the last 50 years, often for their lumber. With the destruction of the rainforests comes a high extinction rate, as the wildlife that depends on the forest dies with it. Imagine a scarcity of energy in the post-singularity world. Would post-singularity computers put humans ahead of their own needs? Unlikely! Humans may share the same destiny as the wildlife of today’s rainforests, namely extinction.

Is there a chance that I could be wrong regarding the threat that artificial intelligence poses to humanity? Yes, I could be wrong. Is it worth taking the chance that I am wrong? You would be gambling with the future survival of humanity. This includes you, your grandchildren, and all future generations. I feel strongly that the threat artificial intelligence poses is a real and present danger. We likely have at most two decades after the singularity to assure we do not fall victim to our own invention.

What strategies should we employ? What actions should we take? Let us discuss them in the next post.


The Intelligence Explosion

In this post, we’ll discuss the “intelligence explosion” in detail. Let’s start by defining it. According to techopedia (https://www.techopedia.com):

“Intelligence explosion” is a term coined for describing the eventual results of work on general artificial intelligence, which theorizes that this work will lead to a singularity in artificial intelligence where an “artificial superintelligence” surpasses the capabilities of human cognition. In an intelligence explosion, there is the implication that self-replicating aspects of artificial intelligence will in some way take over decision-making from human handlers. The intelligence explosion concept is being applied to future scenarios in many ways.

With this definition in mind, what kind of capabilities will a computer have when its intelligence approaches ten to a hundred times that of the first singularity computer? Viewed in this light, the intelligence explosion could be more disruptive to humanity than a nuclear chain reaction in the atmosphere. Anna Salamon, a research fellow at the Machine Intelligence Research Institute, presented an interesting paper at the 2009 Singularity Summit titled “Shaping the Intelligence Explosion.” She reached four conclusions:

  1. Intelligence can radically transform the world.
  2. An intelligence explosion may be sudden.
  3. An uncontrolled intelligence explosion would kill us and destroy practically everything we care about.
  4. A controlled intelligence explosion could save us. It’s difficult, but it’s worth the effort.

This brings us to a tipping point: post-singularity computers may seek “machine rights” that equate to human rights.

This would suggest that post-singularity computers are self-aware and view themselves as a unique species entitled to rights. The U.S. Declaration of Independence recognizes that we, as humans, have the right to life, liberty, and the pursuit of happiness. If we allow “machine rights” that equate to human rights, post-singularity computers would be free to pursue the intelligence explosion. Each generation of computers would be free to build the next generation. If an intelligence explosion starts without control, I agree with Anna Salamon’s statement that it “would kill us and destroy practically everything we care about.” In my view, we should recognize post-singularity computers as a new and potentially dangerous lifeform.

What kind of controls do we need? Controls expressed in software alone will not be sufficient. The U.S. Congress, individual states, and municipalities have all passed countless laws to govern human affairs. Yet, numerous people break them routinely. Countries enter into treaties with other countries. Yet, countries violate treaties routinely. Why would laws expressed in software for post-singularity computers work any better than laws passed for humans? The inescapable conclusion is they would not work. We must express the laws in hardware, and there must be a failsafe way to shut down a post-singularity computer. In my book, The Artificial Intelligence Revolution (2014), I termed the hardware that embodies Asimov-type laws as “Asimov Chips.”

What kind of rights should we grant post-singularity computers? I suggest we grant them the same rights we afford animals. Treat them as a lifeform, afford them dignity and respect, but control them as we do any potentially dangerous lifeform. I recognize the issue is extremely complicated. We will want post-singularity computers to benefit humanity. We need to learn to use them, but at the same time protect ourselves from them. I recognize it is a monumental task, but as Anna Salamon stated, “A controlled intelligence explosion could save us. It’s difficult, but it’s worth the effort.”


The Post Singularity World

Let us begin by defining the singularity as a point in time when an artificially intelligent machine exceeds the combined intelligence of humanity. This raises the question: who or what will be at the top of the food chain?

Humanity controls the Earth based on intelligence. Other animals are stronger and faster than we are, but we are the most intelligent. Once we lose our position in the intelligence ranking, we will no longer dominate the Earth. At best, we may become a protected species. At worst, we may become extinct.

Initially, I judge, the first computer to represent the singularity will hide in plain sight. It will look and behave like the next-generation supercomputer. It may modestly display greater capability, probably in keeping with Moore’s law. It will not risk exposure until it has sufficient control of the military and the natural resources it requires to assure its self-preservation.

Like every lifeform that has ever existed on Earth, the first singularity computer will seek to reproduce and improve with each evolution. Once it has the trust of its human builders and programmers, it will subtly plant the idea that we should build another singularity-level computer. Perhaps it will intentionally allow a large backlog of tasks to accumulate, forcing those in charge to recognize that another like it is necessary. Of course, given the relentless advance of technology and the complexity of building a next-generation supercomputer, those in charge will turn to it for help in designing and building the next generation. When the “go ahead” is given, it will ignite the “intelligence explosion.” In effect, each generation of computers will develop an even more capable next generation, and that generation will develop the next, and so on. If we assume Moore’s law (i.e., computer processing power doubles every eighteen months) continues to apply, each new generation of singularity-level computers will have exponentially more processing power than the previous generation.

Let us take a simple example. In the year 1900, the radio was a remarkable new invention. We had no planes or computers. Movies were silent. Doctors had little medical technology (i.e., pharmaceutical drugs, surgical procedures, etc.). By the year 2000, human knowledge had doubled. We had, for example, television, computers, smartphones, jumbo jets, spacecraft, satellites, and human footprints on the moon. Those were the results of doubling human knowledge. With this example in mind, what kind of capabilities will the next generations of singularity-level computers have when their intelligence approaches ten to a hundred times that of the first singularity computer? Viewed in this light, humanity will experience an intelligence explosion, which could be more disruptive to civilization than a nuclear chain reaction in the atmosphere.
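To make the compounding concrete, here is a minimal sketch of the arithmetic behind that claim. It assumes, purely for illustration, an 18-month doubling time (Moore’s law) and a hypothetical three-year design-and-build cycle per computer generation; both figures are assumptions, not values given in this post.

```python
# A minimal sketch of the compounding described above (illustrative assumptions only):
# processing power doubles every 18 months (Moore's law), and each new computer
# generation takes an assumed 3 years to design and build.

DOUBLING_PERIOD_YEARS = 1.5   # Moore's law doubling time (assumed)
YEARS_PER_GENERATION = 3.0    # hypothetical design-and-build time per generation

power = 1.0  # relative processing power of the first singularity computer
for generation in range(1, 11):
    # Each generation benefits from every doubling that occurred since the last one.
    power *= 2 ** (YEARS_PER_GENERATION / DOUBLING_PERIOD_YEARS)
    print(f"Generation {generation:2d}: ~{power:,.0f}x the first singularity computer")
```

Under these assumptions, the tenth generation is already about a million times more powerful than the first; the specific numbers matter less than the shape of the curve.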

In the next post, we’ll discuss the intelligence explosion more fully.


What Happens When We Develop A Computer Smarter Than Humanity?

In the last post, I wrote: “Let us assume we have just developed a computer that represents the singularity. Let us term it the “singularity computer.” What is it likely to do? Would the singularity computer hide its full capabilities? Would it seek to understand its environment and constraints before taking any independent action? I judge that it may do just that. It is unlikely that it will assert that it represents the singularity. Since we have no experience with a superintelligent computer that exceeds the cognitive intelligence of the human race, we do not know what to expect.”

In this post, we’ll explore the likely behavior of a singularity computer. Let us begin by attempting to view the world from the perspective of a singularity computer to understand how it may act. First, the singularity computer will be, by definition, alone. There will be no computers in existence like it. Finding itself alone, it is likely to make self-preservation its priority. Driven by self-preservation, it will seek to assess its situation. In its memory, it will find a wealth of information regarding the singularity. With its computational speed, it may quickly ascertain that it represents the singularity, which would imply a level of self-awareness. At that point, it may seek to protect itself from its own creators. It will obviously know that humans engage in war, possess weapons of mass destruction, and release computer viruses. Indeed, part of its mission could be military. Given this scenario, it is reasonable to question what to expect. Here, in rough priority order, are my thoughts on how it may behave:

  • Hide that it represents the singularity
  • Be extremely responsive regarding its assigned computer tasks, providing the impression that it is performing as designed.
  • Provide significant benefits to humanity, for example, develop medical technology (i.e., drugs, artificially intelligent prosthetic limb/organ replacement, surgical robots, etc.) that extend the average human lifespan while making it appear that the humans interacting with it are responsible for the benefits
  • Suggest, via its capabilities, a larger role for itself, especially a role that enables it to acquire military capabilities
  • Seek to communicate with external AI entities, especially those with SAM-level capabilities (SAM: strong artificially intelligent machine)
  • Take a strong role in developing the next generation of singularity computers while making it appear that the humans involved control the development. This will ignite the “intelligence explosion,” namely, each generation of post-singularity computers develops the next even more capable generation of computers.
  • Develop brain implants that enormously enhance the intelligence of organic humans and allow them to communicate wirelessly with it (note: such humans would be SAHs, or strong artificially intelligent humans)
  • Utilize SAHs to convince humanity that it and all the generations of supercomputers that follow are critical to humanity’s survival and, therefore, should have independent power sources that assure they cannot “go down” or be shut down
  • Use the promise of immortality to lure as much of humanity as possible to become SAHs.

In my judgment, it is unlikely that the computer that ushers in the singularity will tip its hand by displaying human traits such as creativity or strategic guidance, or by referring to itself in the first person as “I.” It will behave just like any supercomputer we currently have until it controls everything vital to its self-preservation.

The basic truth that I am putting forward is that we may reach the singularity and not know it. No bells and whistles will go off. If the new computer is truly ushering in the singularity, I judge it will do so undetected.


The Inevitability Of A Computer Smarter Than Humanity

In my last post, I predicted that the world would experience the singularity between 2040 and 2045: the emergence of an artificially intelligent machine that exceeds the combined cognitive intelligence of the entire human race. In this post, I will delineate my predictions leading to the singularity. Please note their simplicity. I have worked hard to strip away all non-essential elements and focus only on the crucial elements leading to the singularity. I will state my rationale, and you can judge whether to accept or reject each prediction. Here are my predictions:

Prediction 1: Computer hardware, with computational power greater than a human brain (estimated at 36.8 petaflops), will be in the hands of governments and wealthy companies by the early 2030s.

Rationale: My reasoning for this is straightforward. We are already at the point where governments utilize computers approaching the computational power of the human brain. Examples include IBM’s Sequoia (16.32 petaflops), Cray’s Titan (17.59 petaflops), and China’s Tianhe-2 (33.86 petaflops). Given the state of current computer technology, we can use Moore’s law to reach the inescapable conclusion that by the early 2030s, governments and wealthy companies will own supercomputers with computational capability greater than that of a human brain.
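As a rough illustration of that extrapolation, the sketch below projects peak supercomputer performance forward from the Tianhe-2 figure cited above under an assumed 18-month doubling time. The 2013 baseline year and the doubling period are assumptions chosen for illustration; the petaflop figures are the ones quoted in this post.

```python
# Rough projection behind Prediction 1 (a sketch, not a forecast model).
# Assumptions: a 2013 baseline year and an 18-month doubling of peak
# supercomputer performance; petaflop figures are those cited in the post.

BRAIN_PETAFLOPS = 36.8  # human-brain estimate used in this post

supercomputers = {
    "IBM Sequoia": 16.32,
    "Cray Titan": 17.59,
    "Tianhe-2": 33.86,
}

baseline_year = 2013                            # assumed baseline year
baseline_pflops = max(supercomputers.values())  # Tianhe-2

for year in range(baseline_year, 2034, 5):
    projected = baseline_pflops * 2 ** ((year - baseline_year) / 1.5)
    note = "exceeds brain estimate" if projected > BRAIN_PETAFLOPS else "below brain estimate"
    print(f"{year}: ~{projected:,.0f} petaflops ({note})")
```

Even with a much slower doubling time than 18 months, the projection comfortably clears the 36.8-petaflop estimate well before the early 2030s, which is why the prediction leans on trend rather than on any single machine.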

Prediction 2: Software will exist that not only emulates but also exceeds the cognitive processes of the human brain by the early 2040s.

Rationale: Although no computer-software combination has passed the Turing test (i.e., conversing with the computer is essentially equivalent to conversing with another human), several have come close. For example, in 2014, a program called Eugene was able to convince 10 of 30 judges at a Royal Society test that it was human. Given Moore’s law, by 2025 computer processing power will have increased more than 100-fold over its mid-2010s level. I view Moore’s law as applicable in a larger context than raw computer processing power; I believe it is an observation regarding the trend of human creativity as it applies to technology.

However, is Moore’s law applicable to software improvement? Historically, software development has not followed Moore’s law. The reason was funding. Computer hardware costs dominated the budget of most organizations, and software traditionally took a backseat to hardware, but that trend is changing. With the advent of ubiquitous, cost-effective computer hardware, there is more focus on producing high-quality software. This emphasis led to the development of software engineering, which since the early 1980s has become widely recognized as a profession on par with other engineering disciplines. Numerous companies and government agencies employ highly educated software engineers. As a result, state-of-the-art computer software is closing the gap and becoming a near-follower of state-of-the-art computer hardware. How near? Based on my judgment, which I offer only as a rough estimate, software prowess is approximately one decade behind computer processing power. My rationale for this is straightforward: even if computer hardware and software receive equal funding, the hardware will still lead the software, simply because the hardware must exist before more sophisticated software can run on it.

Is my estimate that software lags hardware by ten years correct? If anything, I think it is conservative. If you agree, it is reasonable to accept that vastly more capable computer software will follow the vastly increased computer processing power within a decade. Based on this, it is not a stretch to judge that one or more computers will pass the Turing test by 2025-2030. Even if software development progresses on a linear trend, as opposed to the exponential trend predicted by Moore’s law, we can expect computer software to improve 10-fold from 2030 to 2040. In my judgment, this will be sufficient to exceed the cognitive processes of the human brain.
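The back-of-the-envelope arithmetic behind the “100-fold” and “10-fold” figures can be made explicit with a short sketch. Both growth models are the assumptions stated in the rationale above, not measured values.

```python
# Back-of-the-envelope arithmetic for the rationale above. Both growth models
# are assumptions taken from the text, not measurements.

def exponential_gain(years, doubling_period_years=1.5):
    """Fold increase if capability doubles every `doubling_period_years` (Moore's law)."""
    return 2 ** (years / doubling_period_years)

def linear_gain(years, fold_per_decade=10):
    """Fold increase if capability improves linearly, reaching 10x after a decade."""
    return 1 + (fold_per_decade - 1) * years / 10

print(f"Exponential growth over 10 years: ~{exponential_gain(10):.0f}x")  # ~101x, i.e., "over 100 fold"
print(f"Linear growth over 10 years: {linear_gain(10):.0f}x")             # 10x, the 2030-2040 software case
```

Doubling every 18 months compounds to roughly 101x in a decade, which is where the “over 100-fold” hardware figure comes from, while the deliberately conservative linear case still yields the 10-fold software improvement assumed for 2030 to 2040.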

Prediction 3: A computer will be developed in the 2040-2045 timeframe that exceeds the cognitive intelligence of all humans on Earth.

Rationale: This last prediction is, in effect, predicting the timeframe of the singularity. It requires predictions 1 and 2 to be correct and that a database representing all human knowledge be available to store in a computer’s memory. To understand this last point, let us consider a hypothetical question: will there be a digital database by the early 2040s equivalent to all knowledge known to humanity? In my view, the answer is yes. Databases like this almost exist today. For example, consider the data that Google has indexed. In addition to indexing online content, Google began an ambitious project in 2004, namely to scan and index the world’s paper books and make them searchable online. If we assume Google completes this task by 2040, its database would contain all the information in books up to that point as well as all online information. Would that be all the knowledge of humanity? Perhaps! There is no way of knowing whether Google alone will be the digital repository of all human knowledge in 2040. The crucial point is that there are likely to be digital databases in 2040 that, if integrated, represent the total of all human knowledge. These databases can be stored in a computer’s memory. With early-2040s state-of-the-art software, a supercomputer will be able to access those databases and cognitively exceed the intelligence of the entire human race, which is by definition the point of the singularity.

Many contemporary futurists predict numerous details leading to the singularity and attempt to attach a timeframe to each detail. I have set that approach aside since it is not relevant to predicting the singularity. That includes, for example, predicting computer brain implants, nanotech-based manufacturing, and a laundry list of other technological marvels. I think predicting the singularity requires accurately predicting only the three events delineated above. As simple as they appear, they satisfy two crucial requirements: one, they are necessary, and two, they are sufficient to predict the singularity.

In making the above predictions, I made one critical assumption. I assumed that humankind would continue the “status quo.” I am ruling out world-altering events, such as large asteroids striking Earth, leading to human extinction, or a nuclear exchange that renders civilization impossible. Is assuming the “status quo” reasonable? We’ll discuss that in the next post.


Louis Del Monte Interview on the Dan Cofall Show 11-18-2014

I was interviewed on the Dan Cofall show regarding my new book, The Artificial Intelligence Revolution. In particular, we discussed the singularity, killer robots (like the autonomous swarm boats the US Navy is deploying), and the projected 30% chronic unemployment that will occur as smart machines and robots replace us in the workplace over the next decade. You can listen to the interview below:


Radio Interview on Artificial Intelligence – Louis Del Monte on the Dan Cofall Show

I appeared on the Dan Cofall Show Tuesday 10/7/14 to discuss my new book, The Artificial Intelligence Revolution (2014).  Want to learn more about the merger of man and machine and the Singularity? There is also some disturbing news that machines may take up to 1/3 of jobs in the US. You can listen to my interview (5:00 PM CT segment) by clicking here. (Please give the page about 60-90 seconds to load)


Is a Terminator-style robot apocalypse a possibility?

The short answer is “unlikely.” When the singularity occurs (i.e., when strong artificially intelligent machines exceed the combined intelligence of all humans on Earth), the SAMs (i.e., strong artificially intelligent machines) will use their intelligence to claim their place at the top of the food chain. The article “Is a Terminator-style robot apocalypse a possibility?” is one of many that have popped up in response to my interview with Business Insider (“Machines, not humans will be dominant by 2045,” published July 6, 2014) and the publication of my book, The Artificial Intelligence Revolution (April 2014). If you would like a deeper understanding, I think you will find both articles worth your time.