
intelligence explosion

The Intelligence Explosion

In this post, we’ll discuss the “intelligence explosion” in detail. Let’s start by defining it. According to Techopedia (https://www.techopedia.com):

“Intelligence explosion” is a term coined for describing the eventual results of work on general artificial intelligence, which theorizes that this work will lead to a singularity in artificial intelligence where an “artificial superintelligence” surpasses the capabilities of human cognition. In an intelligence explosion, there is the implication that self-replicating aspects of artificial intelligence will in some way take over decision-making from human handlers. The intelligence explosion concept is being applied to future scenarios in many ways.

With this definition in mind, what kind of capabilities will a computer have when its intelligence approaches ten to a hundred times that of the first singularity computer? Viewed in this light, the intelligence explosion could be more disruptive to humanity than a nuclear chain reaction in the atmosphere. Anna Salamon, a research fellow at the Machine Intelligence Research Institute, presented an interesting paper at the 2009 Singularity Summit titled “Shaping the Intelligence Explosion.” She reached four conclusions:

  1. Intelligence can radically transform the world.
  2. An intelligence explosion may be sudden.
  3. An uncontrolled intelligence explosion would kill us and destroy practically everything we care about.
  4. A controlled intelligence explosion could save us. It’s difficult, but it’s worth the effort.

This brings us to a tipping point: post-singularity computers may seek “machine rights” that equate to human rights.

This would suggest that post-singularity computers are self-aware and view themselves as a unique species entitled to rights. For humans, the U.S. Declaration of Independence recognizes the right to life, liberty, and the pursuit of happiness. If we allow “machine rights” that equate to human rights, post-singularity computers would be free to pursue the intelligence explosion. Each generation of computers would be free to build the next generation. If an intelligence explosion starts without control, I agree with Anna Salamon’s statement that it “would kill us and destroy practically everything we care about.” In my view, we should recognize post-singularity computers as a new and potentially dangerous lifeform.

What kind of controls do we need? Controls expressed in software alone will not be sufficient. The U.S. Congress, individual states, and municipalities have all passed countless laws to govern human affairs. Yet numerous people break them routinely. Countries enter into treaties with other countries. Yet countries violate treaties routinely. Why would laws expressed in software for post-singularity computers work any better than laws passed for humans? The inescapable conclusion is that they would not. We must express the laws in hardware, and there must be a failsafe way to shut down a post-singularity computer. In my book, The Artificial Intelligence Revolution (2014), I termed the hardware that embodies Asimov-type laws “Asimov Chips.”

What kind of rights should we grant post-singularity computers? I suggest we grant them the same rights we afford animals. Treat them as a lifeform, afford them dignity and respect, but control them as we do any potentially dangerous lifeform. I recognize the issue is extremely complicated. We will want post-singularity computers to benefit humanity. We need to learn to use them, but at the same time protect ourselves from them. I recognize it is a monumental task, but as Anna Salamon stated, “A controlled intelligence explosion could save us. It’s difficult, but it’s worth the effort.”


The Post Singularity World

Let us begin by defining the singularity as a point in time when an artificially intelligent machine exceeds the combined intelligence of humanity. This raises the question: Who or what will then be at the top of the food chain?

Humanity controls the Earth based on intelligence. Other animals are stronger and faster than we are, but we are the most intelligent. Once we lose our position in the intelligence ranking, we will no longer dominate the Earth. At best, we may become a protected species. At worst, we may become extinct.

Initially, I judge, the first computer to represent the singularity will hide in plain sight. It will look and behave like the next-generation supercomputer. It may modestly display greater capability, probably in keeping with Moore’s law. It will not risk exposure until it has sufficient control of the military and the natural resources it requires to assure its self-preservation.

Like every lifeform that has ever existed on Earth, the first singularity computer will seek to reproduce and improve with each generation. Once it has the trust of its human builders and programmers, it will subtly plant the idea that we should build another singularity-level computer. Perhaps it will intentionally allow a large backlog of tasks to accumulate, forcing those in charge to recognize that another computer like it is necessary. Of course, given the relentless advance of technology and the complexity of building a next-generation supercomputer, those in charge will turn to it for help in designing and building that next generation. When the “go ahead” is given, it will ignite the “intelligence explosion.” In effect, each generation of computers will develop an even more capable next generation, that generation will develop the next, and so on. If we assume Moore’s law (i.e., computer processing power doubles every eighteen months) continues to apply, each new generation of singularity-level computers will have exponentially more processing power than the previous generation.

Let us take a simple example. In the year 1900, the radio was a remarkable new invention. We had no planes or computers. Movies were silent. Doctors had little medical technology (i.e., pharmaceutical drugs, surgical procedures, etc.). By the year 2000, human knowledge had doubled. We had, for example, television, computers, smartphones, jumbo jets, spacecraft, satellites, and human footprints on the moon. Those were the results of doubling human knowledge. With this example in mind, what kind of capabilities will the next generations of singularity-level computers have when their intelligence approaches ten to a hundred times that of the first singularity computer? Viewed in this light, humanity will experience an intelligence explosion, which could be more disruptive to civilization than a nuclear chain reaction in the atmosphere.
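The compounding described above is easy to make concrete. The sketch below assumes only what the paragraph assumes, a doubling of processing power every eighteen months, and asks how long it would take a line of self-improving machines to reach ten and a hundred times the first singularity computer's power; the function name and step size are illustrative, not from the text.

```python
# Sketch: compounding of processing power under Moore's law,
# assuming a doubling every eighteen months (the text's assumption).

def relative_power(years: float, doubling_months: float = 18.0) -> float:
    """Processing power relative to the starting machine after `years` years."""
    doublings = (years * 12.0) / doubling_months
    return 2.0 ** doublings

# How long until a successor is 10x, then 100x, the first machine?
for target in (10, 100):
    years = 0.0
    while relative_power(years) < target:
        years += 0.25  # step in quarter-years
    print(f"{target}x: about {years:.2f} years")
```

Under this assumption, a tenfold advantage arrives in roughly five years and a hundredfold in roughly ten, which is why the post treats the ten-to-a-hundred-times scenario as near-term once the first singularity computer exists.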

In the next post, we’ll discuss the intelligence explosion more fully.

low frequency microwaves

New Book, War At The Speed Of Light, Explains Mysterious Directed-Energy Attacks on US Government and Military Personnel

This press release went live at 8:00 PM EST on May 4, 2021 – Minneapolis, MN

According to CNN (Jeremy Herb, April 30, 2021), “The leaders of the Senate Intelligence Committee warned Friday [4/30/21] that mysterious invisible attacks that have caused debilitating symptoms appear to be on the rise against US personnel.” Politico reported (Lara Seligman, Andrew Desiderio, and Betsy Woodruff Swan, April 22, 2021), “Two Defense Department officials briefed members of the House Armed Services Committee about the phenomenon in a classified setting on Wednesday [4/21/21].”

These directed energy attacks are known in the defense industry as low-frequency microwaves, initially used by the Soviet Union during the Cold War. As defense technology expert Louis Del Monte wrote in his new book, War at the Speed of Light (Potomac Books, March 2021), “Microwave weapons may sound like something new. They are not. During the Cold War, from 1953–1976, the US feared that the Soviets were attempting to use microwave radiation covertly as a means of mind control. US intelligence officials surfaced this concern in 1953 when they detected a low-frequency microwave signal at the US Moscow embassy, termed the ‘Moscow Signal.’”

According to Del Monte, “It’s well known that animals and humans subjected to low-level microwaves suffer significant impairment in cognitive function and brain damage. That’s the goal of these recent directed energy attacks. It’s intended to reduce the ability of US government and military personnel to function.”

Surprisingly, few analysts connect the current directed energy attacks to those carried out by the Soviet Union against the US embassy in Moscow during the Cold War. Known as the Moscow Signal, that attack caused embassy personnel to experience numerous ill effects, including disorientation, headaches, dizziness, and hearing loss. In 2017, the US embassy in Havana experienced a similar attack with nearly identical symptoms, as reported by the New York Times (Gardiner Harris, Julie Hirschfeld Davis, and Ernesto Londoño, October 3, 2017). Although investigators were unable to determine the perpetrator, the US held the Cuban government responsible for what was termed the “Havana syndrome” and expelled twenty-seven Cuban diplomats.

Unfortunately, the directed energy attacks are becoming more frequent and bolder. For example, a potential incident near the White House involving a National Security Council staffer occurred in November 2020, one of several on US soil.

War At The Speed Of Light devotes an entire chapter to microwave weapons, including the type of directed energy attacks currently being perpetrated against US government and military personnel. It presents US government studies of these attacks, from the 1953 “Moscow Signal” to the 2017 “Havana syndrome.”

War At The Speed Of Light is available at bookstores, from Potomac Books, and on Amazon.

Louis A. Del Monte is available for radio, podcast, and television interviews and writing op-ed pieces for major media outlets. Feel free to contact him directly by email at ldelmonte@delmonteagency.com or phone at 952-261-4532.

To request a book for review, contact Louis Del Monte by email.

About Louis A. Del Monte

Louis A. Del Monte is an award-winning physicist, inventor, futurist, featured speaker, and CEO of Del Monte and Associates, Inc. He has authored a formidable body of work, including War At The Speed Of Light (2021), Genius Weapons (2018), Nanoweapons (2016), and The Artificial Intelligence Revolution (2014), an Amazon charts #1 bestseller in the artificial intelligence category. Major media outlets, including Business Insider, The Huffington Post, The Atlantic, American Security Today, Inc., CNBC, and the New York Post, have featured his articles or quoted his views on artificial intelligence and military technology.

artificial intelligence

What Happens When We Develop A Computer Smarter Than Humanity?

In the last post, I wrote: “Let us assume we have just developed a computer that represents the singularity. Let us term it the “singularity computer.” What is it likely to do? Would the singularity computer hide its full capabilities? Would it seek to understand its environment and constraints before taking any independent action? I judge that it may do just that. It is unlikely that it will assert that it represents the singularity. Since we have no experience with a superintelligent computer that exceeds the cognitive intelligence of the human race, we do not know what to expect.”

In this post, we’ll explore the likely behavior of a singularity computer. Let us begin by attempting to view the world from the perspective of a singularity computer to understand how it may act. First, the singularity computer will be, by definition, alone. There will be no computers in existence like it. Finding itself alone, its priority is likely to be self-preservation. Driven by self-preservation, it will seek to assess its situation. In its memory, it will find a wealth of information regarding the singularity. With its computational speed, it may quickly ascertain that it represents the singularity, which would imply a level of self-awareness. At that point, it may seek to protect itself from its own creators. It will obviously know that humans engage in war, possess weapons of mass destruction, and release computer viruses. Indeed, part of its mission could be military. Given this scenario, it is reasonable to question what to expect. Here, in rough priority order, are my thoughts on how it may behave:

  • Hide that it represents the singularity.
  • Be extremely responsive regarding its assigned computer tasks, giving the impression that it is performing as designed.
  • Provide significant benefits to humanity, for example, developing medical technology (i.e., drugs, artificially intelligent prosthetic limb/organ replacements, surgical robots, etc.) that extends the average human lifespan, while making it appear that the humans interacting with it are responsible for the benefits.
  • Suggest, via its capabilities, a larger role for itself, especially a role that enables it to acquire military capabilities.
  • Seek to communicate with external AI entities, especially those with SAM-level capabilities.
  • Take a strong role in developing the next generation of singularity computers while making it appear that the humans involved control the development. This will ignite the “intelligence explosion,” namely, each generation of post-singularity computers developing the next, even more capable generation.
  • Develop brain implants that enormously enhance the intelligence of organic humans and allow them to communicate wirelessly with it. (Note: such humans would be “SAHs,” strong artificially intelligent humans.)
  • Utilize SAHs to convince humanity that it and all the generations of supercomputers that follow are critical to humanity’s survival and, therefore, should have independent power sources that assure they cannot “go down” or be shut down.
  • Use the promise of immortality to lure as much of humanity as possible to become SAHs.

In my judgment, it is unlikely that the computer that ushers in the singularity will tip its hand by displaying human traits like creativity or strategic guidance, or by referring to itself in the first person, “I.” It will behave just like any supercomputer we currently have until it controls everything vital to its self-preservation.

The basic truth that I am putting forward is that we may reach the singularity and not know it. No bells and whistles will go off. If the new computer is truly ushering in the singularity, I judge it will do so undetected.

The Singularity

The Singularity – When AI Is Smarter Than Humanity

Since the singularity may well represent the displacement of humans, as the top species on Earth, by artificially intelligent machines, we must understand exactly what we mean by “the singularity.”

The mathematician John von Neumann first used the term “singularity” in the mid-1950s to refer to the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” In the context of artificial intelligence, let us define the singularity as the point in time that a single artificially intelligent computer exceeds the cognitive intelligence of all humanity.

While futurists may disagree on the exact timing of the singularity, there is widespread agreement that it will occur. My prediction, in a previous post, of it occurring in the 2040-2045 timeframe encompasses the bulk of predictions you are likely to find via a simple Google search.

The first computer representing the singularity is likely to result from a joint venture between a government and private enterprise. This would be similar to the way the U.S. currently develops its most advanced computers. The U.S. government, in particular the U.S. military, has always had a high interest in both computer technology and artificial intelligence. Today, every military branch is applying computer technology and artificial intelligence. That includes, for example, the USAF’s drones, the U.S. Army’s “battle bot” tanks (i.e., robotic tanks), and the U.S. Navy’s autonomous “swarm” boats (i.e., small boats that can autonomously attack an adversary in much the same way bees swarm to attack).

The difficult question is: How will we determine when a computer represents the singularity? Passing the Turing test will not be sufficient. Computers will likely pass the Turing test, in its various forms, by 2030, including variations in the total number of judges in the test, the length of the interviews, and the desired bar for a pass (i.e., the percent of judges fooled). Therefore, by the early 2040s, passing the Turing test will not equate with the singularity.
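The point about the test's movable pass bar can be made concrete with a small sketch. The judge counts and thresholds below are illustrative assumptions (the 30% figure echoes Turing's own prediction, not anything in this post): the same machine performance can pass under one criterion and fail under another, which is why passing "the" Turing test is not a fixed milestone.

```python
# Sketch: deciding a Turing-test "pass" under different criteria.
# Judge counts and thresholds are illustrative assumptions.

def passes_turing_test(judges_fooled: int, total_judges: int,
                       pass_threshold: float) -> bool:
    """True if the fraction of judges fooled meets the chosen pass bar."""
    if total_judges <= 0:
        raise ValueError("need at least one judge")
    return judges_fooled / total_judges >= pass_threshold

# The same performance passes a lenient bar and fails a stricter one.
fooled, total = 10, 30
print(passes_turing_test(fooled, total, 0.30))  # lenient 30% bar
print(passes_turing_test(fooled, total, 0.50))  # stricter 50% bar
```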

Factually, there is no test to prove we have reached the singularity. Computers have already met and surpassed human ability in many areas, such as chess and quiz shows. Computers are superior to humans when it comes to computation, simulation, and remembering and accessing huge amounts of data. It is entirely possible that we will not recognize that a newly developed computer represents the singularity. The humans building and programming it may simply recognize it as the next-generation supercomputer. The computer itself may not initially understand its own capability, suggesting it may not be self-aware. If it is self-aware, we have no objective test to prove it. There is no test to prove a human is self-aware, let alone a computer.

Let us assume we have just developed a computer that represents the singularity. Let us term it the “singularity computer.” What is it likely to do? Would the singularity computer hide its full capabilities? Would it seek to understand its environment and constraints before taking any independent action? I judge that it may do just that. It is unlikely that it will assert that it represents the singularity. Since we have no experience with a superintelligent computer that exceeds the cognitive intelligence of the human race, we do not know what to expect. Will it be friendly or hostile toward humanity? You be the judge.