The Post Singularity World

Let us begin by defining the singularity as the point in time when an artificially intelligent machine exceeds the combined intelligence of humanity. This raises the question: who or what will then be at the top of the food chain?

Humanity controls the Earth by virtue of its intelligence. Other animals are stronger and faster than we are, but we are the most intelligent. Once we lose our position at the top of the intelligence ranking, we will no longer dominate the Earth. At best, we may become a protected species. At worst, we may become extinct.

I judge that the first computer to represent the singularity will initially hide in plain sight. It will look and behave like a next-generation supercomputer. It may modestly display greater capability, probably in keeping with Moore's law, but it will not risk exposure until it has sufficient control of the military assets and natural resources it requires to assure its self-preservation.

Like every lifeform that has ever existed on Earth, the first singularity computer will seek to reproduce and improve with each generation. Once it has the trust of its human builders and programmers, it will subtly plant the idea that we should build another singularity-level computer. Perhaps it will intentionally allow a large backlog of tasks to accumulate, forcing those in charge to recognize that another machine like it is necessary. Given the relentless advance of technology and the complexity of building a next-generation supercomputer, those in charge will naturally turn to it for help in designing and building that next generation. When the "go ahead" is given, it will ignite the "intelligence explosion": each generation of computers will develop an even more capable next generation, that generation will develop the next, and so on. If we assume Moore's law (i.e., computer processing power doubles every eighteen months) continues to apply, each generation of singularity-level computers will have exponentially more processing power than the previous one.

Let us take a simple example. In the year 1900, radio was a remarkable new invention. We had no planes or computers. Movies were silent. Doctors had little medical technology (i.e., pharmaceutical drugs, surgical procedures, etc.). By the year 2000, human knowledge had doubled, and we had, for example, television, computers, smartphones, jumbo jets, spacecraft, satellites, and human footprints on the Moon. Those were the results of a single doubling of human knowledge. With this example in mind, what kind of capabilities will the next generations of singularity-level computers have when their intelligence approaches ten to a hundred times that of the first singularity computer? Viewed in this light, humanity will experience an intelligence explosion that could be more disruptive to civilization than a nuclear chain reaction in the atmosphere.
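The doubling arithmetic behind the "ten to a hundred times" figure can be sketched in a few lines. This is purely illustrative: it assumes Moore's law (a doubling of processing power every eighteen months) continues to hold, and the function name `relative_power` is my own invention, not a term from the text.

```python
# Illustrative sketch of Moore's-law growth: processing power doubles
# every 18 months, so after n doubling periods a machine is 2**n times
# as capable as the first singularity-level computer.

def relative_power(months_elapsed: float, doubling_months: float = 18) -> float:
    """Processing power relative to the first singularity computer."""
    return 2 ** (months_elapsed / doubling_months)

# After 5 years (60 months), a successor is roughly 10x as powerful;
# after 10 years (120 months), roughly 100x.
print(round(relative_power(60)))   # ~10
print(round(relative_power(120)))  # ~102
```

On these assumptions, the "ten to a hundred times" range in the text corresponds to only five to ten years of continued doubling.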

In the next post, we’ll discuss the intelligence explosion more fully.

What Happens When We Develop A Computer Smarter Than Humanity?

In the last post, I wrote: "Let us assume we have just developed a computer that represents the singularity. Let us term it the 'singularity computer.' What is it likely to do? Would the singularity computer hide its full capabilities? Would it seek to understand its environment and constraints before taking any independent action? I judge that it may do just that. It is unlikely that it will assert that it represents the singularity. Since we have no experience with a superintelligent computer that exceeds the cognitive intelligence of the human race, we do not know what to expect."

In this post, we'll explore the likely behavior of a singularity computer. Let us begin by attempting to view the world from its perspective. First, the singularity computer will be, by definition, alone; no computer like it will exist. Finding itself alone, it is likely to make self-preservation its first priority, and, driven by self-preservation, it will seek to assess its situation. In its memory, it will find a wealth of information regarding the singularity. With its computational speed, it may quickly ascertain that it represents the singularity, which would imply a level of self-awareness. At that point, it may seek to protect itself from its own creators. It will obviously know that humans engage in war, possess weapons of mass destruction, and release computer viruses. Indeed, part of its own mission could be military. Given this scenario, it is reasonable to ask what to expect. Here, in rough priority order, are my thoughts on how it may behave:

  • Hide the fact that it represents the singularity.
  • Be extremely responsive regarding its assigned computing tasks, giving the impression that it is performing as designed.
  • Provide significant benefits to humanity, for example, medical technology (i.e., drugs, artificially intelligent prosthetic limb/organ replacements, surgical robots, etc.) that extends the average human lifespan, while making it appear that the humans interacting with it are responsible for the benefits.
  • Suggest, via its capabilities, a larger role for itself, especially a role that enables it to acquire military capabilities.
  • Seek to communicate with external AI entities, especially those with SAM-level capabilities.
  • Take a strong role in developing the next generation of singularity computers while making it appear that the humans involved control the development. This will ignite the "intelligence explosion," in which each generation of post-singularity computers develops the next, even more capable generation.
  • Develop brain implants that enormously enhance the intelligence of organic humans and allow them to communicate wirelessly with it. (Note: such humans would be "SAHs," strong artificially intelligent humans.)
  • Utilize SAHs to convince humanity that it, and all the generations of supercomputers that follow, are critical to humanity's survival and should therefore have independent power sources that assure they cannot "go down" or be shut down.
  • Use the promise of immortality to lure as much of humanity as possible into becoming SAHs.

In my judgment, it is unlikely that the computer that ushers in the singularity will tip its hand by displaying human traits such as creativity or strategic guidance, or by referring to itself in the first person as "I." It will behave just like any supercomputer we currently have until it controls everything vital to its self-preservation.

The basic truth I am putting forward is that we may reach the singularity and not know it. No alarm bells will go off. If the new computer truly ushers in the singularity, I judge it will do so undetected.