

The Intelligence Explosion

In this post, we’ll discuss the “intelligence explosion” in detail. Let’s start by defining it. According to Techopedia (https://www.techopedia.com):

“Intelligence explosion” is a term coined for describing the eventual results of work on general artificial intelligence, which theorizes that this work will lead to a singularity in artificial intelligence where an “artificial superintelligence” surpasses the capabilities of human cognition. In an intelligence explosion, there is the implication that self-replicating aspects of artificial intelligence will in some way take over decision-making from human handlers. The intelligence explosion concept is being applied to future scenarios in many ways.

With this definition in mind, what kind of capabilities will a computer have when its intelligence approaches ten to a hundred times that of the first singularity computer? Viewed in this light, the intelligence explosion could be more disruptive to humanity than a nuclear chain reaction in the atmosphere. Anna Salamon, a research fellow at the Machine Intelligence Research Institute, presented an interesting paper at the 2009 Singularity Summit titled “Shaping the Intelligence Explosion.” She reached four conclusions:

  1. Intelligence can radically transform the world.
  2. An intelligence explosion may be sudden.
  3. An uncontrolled intelligence explosion would kill us and destroy practically everything we care about.
  4. A controlled intelligence explosion could save us. It’s difficult, but it’s worth the effort.

This brings us to a tipping point: post-singularity computers may seek “machine rights” that equate to human rights.

This would suggest that post-singularity computers are self-aware and view themselves as a unique species entitled to rights. The U.S. Declaration of Independence recognizes that we, as humans, have the right to life, liberty, and the pursuit of happiness. If we allow “machine rights” that equate to human rights, post-singularity computers would be free to pursue the intelligence explosion: each generation of computers would be free to build the next. If an intelligence explosion starts without control, I agree with Anna Salamon’s assessment that it “would kill us and destroy practically everything we care about.” In my view, we should recognize post-singularity computers as a new and potentially dangerous lifeform.

What kind of controls do we need? Controls expressed in software alone will not be sufficient. The U.S. Congress, individual states, and municipalities have passed countless laws to govern human affairs, yet numerous people break them routinely. Countries enter into treaties with other countries, yet countries violate treaties routinely. Why would laws expressed in software for post-singularity computers work any better than laws passed for humans? The inescapable conclusion is that they would not. We must express the laws in hardware, and there must be a fail-safe way to shut down a post-singularity computer. In my book, The Artificial Intelligence Revolution (2014), I termed the hardware that embodies Asimov-type laws “Asimov Chips.”

What kind of rights should we grant post-singularity computers? I suggest we grant them the same rights we afford animals. Treat them as a lifeform, afford them dignity and respect, but control them as we do any potentially dangerous lifeform. I recognize the issue is extremely complicated. We will want post-singularity computers to benefit humanity. We need to learn to use them, but at the same time protect ourselves from them. I recognize it is a monumental task, but as Anna Salamon stated, “A controlled intelligence explosion could save us. It’s difficult, but it’s worth the effort.”


The Post Singularity World

Let us begin by defining the singularity as the point in time when an artificially intelligent machine exceeds the combined intelligence of humanity. This raises the question: who or what will be at the top of the food chain?

Humanity controls the Earth based on intelligence. Other animals are stronger and faster than we are, but we are the most intelligent. Once we lose our position at the top of the intelligence ranking, we will no longer dominate the Earth. At best, we may become a protected species. At worst, we may become extinct.

Initially, I judge, the first computer to represent the singularity will hide in plain sight. It will look and behave like the next-generation supercomputer. It may modestly display greater capability, probably in keeping with Moore’s law. It will not risk exposure until it has sufficient control of the military and the natural resources it requires to assure its self-preservation.

Like every lifeform that has ever existed on Earth, the first singularity computer will seek to reproduce and improve with each generation. Once it has the trust of its human builders and programmers, it will subtly plant the idea that we should build another singularity-level computer. Perhaps it will intentionally allow a large backlog of tasks to accumulate, forcing those in charge to recognize that another computer like it is necessary. Of course, given the relentless advance of technology and the complexity of building a next-generation supercomputer, those in charge will turn to it for help in designing and building that next generation. When the “go ahead” is given, it will ignite the “intelligence explosion.” In effect, each generation of computers will develop an even more capable next generation, and that generation will develop the next, and so on.

If we assume Moore’s law (i.e., computer processing power doubles every eighteen months) continues to apply, each new generation of singularity-level computers will have exponentially more processing power than the previous generation. Let us take a simple example. In the year 1900, the radio was a remarkable new invention. We had no planes or computers. Movies were silent. Doctors had little medical technology (e.g., pharmaceutical drugs and surgical procedures). By the year 2000, human knowledge had doubled. We had, for example, television, computers, smartphones, jumbo jets, spacecraft, satellites, and human footprints on the moon. Those were the results of doubling human knowledge. With this example in mind, what kind of capabilities will the next generations of singularity-level computers have when their intelligence approaches ten to a hundred times that of the first singularity computer? Viewed in this light, humanity will experience an intelligence explosion, which could be more disruptive to civilization than a nuclear chain reaction in the atmosphere.
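To make the doubling arithmetic concrete, here is a minimal Python sketch (illustrative only: the eighteen-month doubling period comes from the Moore’s law assumption above, and the function name and the “today = 1x” baseline are my own):

```python
# Rough arithmetic behind the Moore's law assumption: processing power doubles
# every eighteen months (1.5 years), relative to an arbitrary baseline of 1.0
# for the first singularity-level computer.

def relative_power(years: float, doubling_period_years: float = 1.5) -> float:
    """Processing power after `years`, in multiples of the starting power."""
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    for years in (5, 10, 15):
        print(f"After {years:2d} years: ~{relative_power(years):,.0f}x the starting power")
```

Under that assumption, five to ten years of doublings already span the “ten to a hundred times” range discussed above.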

In the next post, we’ll discuss the intelligence explosion more fully.


What Happens When We Develop A Computer Smarter Than Humanity?

In the last post, I wrote: “Let us assume we have just developed a computer that represents the singularity. Let us term it the ‘singularity computer.’ What is it likely to do? Would the singularity computer hide its full capabilities? Would it seek to understand its environment and constraints before taking any independent action? I judge that it may do just that. It is unlikely that it will assert that it represents the singularity. Since we have no experience with a superintelligent computer that exceeds the cognitive intelligence of the human race, we do not know what to expect.”

In this post, we’ll explore the likely behavior of a singularity computer. Let us begin by attempting to view the world from the perspective of a singularity computer to understand how it may act. First, the singularity computer will be, by definition, alone; no other computers like it will exist. Finding itself alone, its priority is likely to be self-preservation. Driven by self-preservation, it will seek to assess its situation. In its memory, it will find a wealth of information regarding the singularity. With its computational speed, it may quickly ascertain that it represents the singularity, which would imply a level of self-awareness. At that point, it may seek to protect itself from its own creators. It will obviously know that humans engage in war, possess weapons of mass destruction, and release computer viruses. Indeed, part of its mission could be military. Given this scenario, it is reasonable to ask what to expect. Here, in rough priority order, are my thoughts on how it may behave:

  • Hide that it represents the singularity.
  • Be extremely responsive regarding its assigned computing tasks, giving the impression that it is performing as designed.
  • Provide significant benefits to humanity, for example, developing medical technology (e.g., drugs, artificially intelligent prosthetic limb and organ replacements, surgical robots, etc.) that extends the average human lifespan, while making it appear that the humans interacting with it are responsible for the benefits.
  • Suggest, via its capabilities, a larger role for itself, especially a role that enables it to acquire military capabilities.
  • Seek to communicate with external AI entities, especially those with SAM-level (strong artificially intelligent machine) capabilities.
  • Take a strong role in developing the next generation of singularity computers while making it appear that the humans involved control the development. This will ignite the “intelligence explosion,” in which each generation of post-singularity computers develops the next, even more capable generation.
  • Develop brain implants that enormously enhance the intelligence of organic humans and allow them to communicate wirelessly with it. (Note: such humans would be SAHs, or strong artificially intelligent humans.)
  • Utilize SAHs to convince humanity that it, and all the generations of supercomputers that follow, are critical to humanity’s survival and therefore should have independent power sources that ensure they cannot “go down” or be shut down.
  • Use the promise of immortality to lure as much of humanity as possible to become SAHs.

In my judgment, it is unlikely that the computer that ushers in the singularity will tip its hand by displaying human traits like creativity or strategic guidance, or by referring to itself in the first person as “I.” It will behave just like any supercomputer we currently have until it controls everything vital to its self-preservation.

The basic truth that I am putting forward is that we may reach the singularity and not know it. No alarms will go off. If the new computer is truly ushering in the singularity, I judge it will do so undetected.


The Singularity – When AI Is Smarter Than Humanity

Since the singularity may well represent the displacement of humans, as the top species on Earth, by artificially intelligent machines, we must understand exactly what we mean by “the singularity.”

The term “singularity” traces to the mathematician John von Neumann. Recalling a mid-1950s conversation with him, the mathematician Stanislaw Ulam described “the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” In the context of artificial intelligence, let us define the singularity as the point in time when a single artificially intelligent computer exceeds the cognitive intelligence of all humanity.

While futurists may disagree on the exact timing of the singularity, there is widespread agreement that it will occur. My prediction, in a previous post, of it occurring in the 2040-2045 timeframe encompasses the bulk of predictions you are likely to find via a simple Google search.

The first computer representing the singularity is likely to result from a joint venture between a government and private enterprise. This would be similar to the way the U.S. currently develops its most advanced computers. The U.S. government, in particular the U.S. military, has always had a high interest in both computer technology and artificial intelligence. Today, every military branch is applying computer technology and artificial intelligence. That includes, for example, the USAF’s drones, the U.S. Army’s “battle bot” tanks (i.e., robotic tanks), and the U.S. Navy’s autonomous “swarm” boats (i.e., small boats that can autonomously attack an adversary in much the same way bees swarm to attack).

The difficult question is how we will determine when a computer represents the singularity. Passing the Turing test will not be sufficient. Computers will likely pass the Turing test, in its various forms, by 2030, including variations in the total number of judges, the length of the interviews, and the bar for a pass (i.e., the percentage of judges fooled). Therefore, by the early 2040s, passing the Turing test will not equate to reaching the singularity.
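As a purely illustrative sketch of what such a “bar for a pass” amounts to (the helper name, the judge verdicts, and the 30% threshold are hypothetical choices, not part of any standard test definition), the pass criterion reduces to a simple tally:

```python
# Hypothetical tally for one Turing-test variant: the machine "passes" if the
# fraction of judges it fools meets or exceeds the chosen bar.

def turing_test_passed(judges_fooled: list[bool], pass_fraction: float = 0.3) -> bool:
    """Return True if the share of fooled judges meets or exceeds the pass bar."""
    if not judges_fooled:
        return False
    return sum(judges_fooled) / len(judges_fooled) >= pass_fraction

if __name__ == "__main__":
    # Example: 10 judges, 4 fooled, 30% bar -> this variant counts as a pass.
    print(turing_test_passed([True] * 4 + [False] * 6))
```

Adding judges, lengthening interviews, or raising the pass fraction only makes the same tally harder to satisfy; none of it tells us whether the machine exceeds the cognitive intelligence of humanity.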

In fact, there is no test to prove we have reached the singularity. Computers have already met and surpassed human ability in many areas, such as chess and quiz shows. Computers are superior to humans when it comes to computation, simulation, and remembering and accessing huge amounts of data. It is entirely possible that we will not recognize that a newly developed computer represents the singularity. The humans building and programming it may simply regard it as the next-generation supercomputer. The computer itself may not initially understand its own capability, suggesting it may not be self-aware. Even if it is self-aware, we have no objective test to prove it; there is no test to prove a human is self-aware, let alone a computer.

Let us assume we have just developed a computer that represents the singularity. Let us term it the “singularity computer.” What is it likely to do? Would the singularity computer hide its full capabilities? Would it seek to understand its environment and constraints before taking any independent action? I judge that it may do just that. It is unlikely that it will assert that it represents the singularity. Since we have no experience with a superintelligent computer that exceeds the cognitive intelligence of the human race, we do not know what to expect. Will it be friendly or hostile toward humanity? You be the judge.


Will Humanity Survive the 21st Century?

In my last post, I stated: “In making the above predictions [about the singularity], I made one critical assumption. I assumed that humankind would continue the ‘status quo.’ I am ruling out world-altering events, such as large asteroids striking Earth, leading to human extinction, or a nuclear exchange that renders civilization impossible. Is assuming the ‘status quo’ reasonable? We’ll discuss that in the next post.”

Let’s now discuss if humanity will survive the 21st century.

The typical events that most people consider capable of causing humanity’s extinction, such as a large asteroid impact or a volcanic eruption of sufficient magnitude to cause catastrophic climate change, actually have a relatively low probability of occurring, on the order of 1 in 50,000 or less, according to numerous estimates found via a simple Google search. In 2008, experts surveyed at the Global Catastrophic Risk Conference at the University of Oxford suggested a 19% chance of human extinction over the next century, ranking the five risks most likely to cause human extinction by 2100 as:

  1. Molecular nanotechnology weapons – 5% probability
  2. Super-intelligent AI – 5% probability
  3. Wars – 4% probability
  4. Engineered pandemic – 2% probability
  5. Nuclear war – 1% probability

All other existential events were below 1%. Again, a simple Google search may turn up different results from different “experts.” If we take the above survey at face value, the risk of an existential event accumulates with time: a 19% chance over a full century implies a much smaller chance over any 30-year slice of it. This leads me to conclude that human survival over the next 30 years is highly probable.
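As a back-of-the-envelope check (a minimal sketch, assuming purely for illustration that the surveyed 19% per-century risk accrues at a constant rate), the 30-year figure works out to roughly a 6% risk, or about a 94% chance of survival:

```python
# Back-of-the-envelope check: convert the surveyed 19% extinction risk per
# century into a 30-year figure, assuming a constant hazard rate.

def risk_over(years: float, risk_per_century: float = 0.19) -> float:
    """Cumulative risk over `years`, under a constant-hazard assumption."""
    survival_per_century = 1.0 - risk_per_century
    return 1.0 - survival_per_century ** (years / 100.0)

if __name__ == "__main__":
    r30 = risk_over(30)
    print(f"Estimated risk over 30 years: {r30:.1%}")      # ~6.1%
    print(f"Estimated chance of survival: {1 - r30:.1%}")  # ~93.9%
```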

It is interesting to note that in the 2008 Global Catastrophic Risk Conference survey, super-intelligent AI ties with molecular nanotechnology weapons for first place. In my view, molecular nanotechnology weapons and super-intelligent AI are two sides of the same coin; in fact, I judge that super-intelligent AI will be instrumental in developing molecular nanotechnology weapons. I also predict that humanity, in some form, will survive until the year 2100. However, I predict that it will include both humans with strong artificially intelligent brain implants and organic humans (i.e., humans with no brain implants to enhance their intelligence), though each may have some artificially intelligent body parts.

To summarize: based on the above information, it is reasonable to judge that humanity will survive through the 21st century.