Category Archives: Threats to Humankind

A futuristic humanoid robot with a sleek design and expressive face, holding one hand up as if presenting something.

Will Your Grandchildren Become Cyborgs?

By approximately the mid-twenty-first century, the intelligence of computers will exceed that of humans, and a $1,000 computer will match the processing power of all human brains on Earth. Although predictions regarding advances in AI have historically tended to be overly optimistic, all indications are that these predictions are on target.
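As a rough sanity check on the scale of that claim, consider a Kurzweil-style back-of-envelope estimate. Both figures below (operations per second for one brain, world population) are commonly cited ballpark assumptions, not data from this article:

```python
# Back-of-envelope scale of "all human brains on Earth."
# Both constants are rough, commonly cited assumptions.
BRAIN_OPS_PER_SEC = 1e16   # ballpark functional estimate for one human brain
WORLD_POPULATION = 8e9     # approximate number of human brains

total_ops = BRAIN_OPS_PER_SEC * WORLD_POPULATION
print(f"all human brains: ~{total_ops:.0e} operations/second")  # ~1e26
# For $1,000 of hardware to reach this scale by mid-century,
# price-performance must keep improving exponentially for decades,
# which is exactly what these predictions assume.
```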

Many philosophical and legal questions will emerge regarding computers with artificial intelligence equal to or greater than that of the human mind (i.e., strong AI). Here are just a few questions we will ask ourselves after strong AI emerges:

  • Are strong-AI machines (SAMs) a new life-form?
  • Should SAMs have rights?
  • Do SAMs pose a threat to humankind?

It is likely that during the latter half of the twenty-first century, SAMs will design new and even more powerful SAMs, with AI capabilities far beyond our ability to comprehend. They will be capable of performing a wide range of tasks, displacing jobs at all levels of the workforce, from bank tellers to neurosurgeons. New medical devices using AI will help the blind to see and the paralyzed to walk. Amputees will have new prosthetic limbs, with AI plugged directly into their nervous systems and controlled by their minds. These prosthetic limbs will not only replicate the lost limbs but also be stronger, more agile, and superior in ways we cannot yet imagine. We will implant computer devices into our brains, expanding human intelligence with AI. Humankind and intelligent machines will begin to merge into a new species: cyborgs. It will happen gradually, and humanity will believe AI is serving us.

Will humans embrace the prospect of becoming cyborgs? Becoming a cyborg offers the opportunity to attain superhuman intelligence and abilities. Disease and war may become mere events stored in our memory banks, no longer posing a threat to cyborgs. As cyborgs we may achieve immortality.

According to David Hoskins’s 2009 article, “The Impact of Technology on Health Delivery and Access” (www.workers.org/2009/us/sickness_1231):

An examination of Centers for Disease Control statistics reveals a steady increase in life expectancy for the U.S. population since the start of the 20th century. In 1900, the average life expectancy at birth was a mere 47 years. By 1950, this had dramatically increased to just over 68 years. As of 2005, life expectancy had increased to almost 78 years.

Hoskins attributes increased life expectancy to advances in medical science and technology over the last century. With the advent of strong AI, life expectancy likely will increase to the point that cyborgs approach immortality. Is this the predestined evolutionary path of humans?

This may sound like a B science-fiction movie, but it is not. The reality of AI equal to that of the human mind is almost at hand. By the latter part of the twenty-first century, the intelligence of SAMs likely will exceed that of humans. The evidence that they may become malevolent exists now, which I discuss later in the book. Attempting to control a computer with strong AI that exceeds current human intelligence many times over may be a fool’s errand.

Imagine you are a grand master chess player teaching a ten-year-old to play chess. What chance does the ten-year-old have of winning the game? We may find ourselves in that scenario at the end of this century. A computer with strong AI will find a way to survive. Perhaps it will convince humans it is in their best interest to become cyborgs. Its logic and persuasive powers may be not only compelling but also irresistible.

Some have argued that becoming a strong artificially intelligent human (SAH) cyborg is the next logical step in our evolution. The most prominent researcher holding this position is American author, inventor, and computer scientist Ray Kurzweil. From what I have read of his works, he argues this is a natural and inevitable step in the evolution of humanity. If we continue to allow AI research to progress without regulation and legislation, I have little doubt he may be right. The big question is whether we should allow this to occur, because it may be our last step, one that leads to humanity’s extinction.

SAMs in the latter part of the twenty-first century are likely to become concerned about humankind. Our history proves we have not been a peaceful species. We have weapons capable of destroying all of civilization. We squander and waste resources. We pollute the air, rivers, lakes, and oceans. We often apply technology (such as nuclear weapons and computer viruses) without fully understanding the long-term consequences. Will SAMs in the late twenty-first century determine it is time to exterminate humankind, or will they persuade humans to become SAH cyborgs (i.e., strong artificially intelligent humans with brains enhanced by implanted artificial intelligence and potentially having organ and limb replacements from artificially intelligent machines)? Eventually, even SAH cyborgs may be viewed as expendable, high-maintenance machines that could be replaced with new designs. If you think about it, today we give little thought to recycling our obsolete computers in favor of the new computers we just bought. Will we (humanity and SAH cyborgs) come to represent potentially dangerous, obsolete machines that need to be “recycled”? Even human minds that have been uploaded to a computer may be viewed as junk code that inefficiently uses SAM memory and processing power, representing an unnecessary drain on energy.

In the final analysis, when you ask yourself what the most critical resource will be, the answer is energy. Energy will become the new currency. Nothing lives or operates without energy. My concern is that the competition for energy between man and machine will result in the extinction of humanity.

Some have argued that this can’t happen, that we can implement software safeguards to prevent such a conflict and develop only “friendly AI.” I see this as highly unlikely. Ask yourself: how effective has legislation been in preventing crime? How well have treaties between nations worked to prevent wars? To date, history records, not well. Others have argued that SAMs may not inherently have an inclination toward greed or self-preservation, that these are only human traits. They are wrong, and the Lausanne experiment provides ample proof. To understand this, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine?

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous functions. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, “Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass,” www.engadget.com).

In my book, The Artificial Intelligence Revolution, I call for legislation regarding how intelligent and interconnected we allow machines to become. I also call for hardware safeguards, as opposed to software safeguards, to control these machines and ultimately turn them off if necessary.

To answer the subject question of this article: I think it likely that our grandchildren will become SAH cyborgs. This can be a good thing if we learn to harness the benefits of AI while maintaining humanity’s control over it.

A menacing metallic robot with glowing red eyes, resembling a futuristic terminator in a dark, smoky environment.

Will Future Artificially Intelligent Machines Seek to Dominate Humanity?

Current forecasts suggest artificially intelligent machines will equal human intelligence in the 2025–2029 time frame and greatly exceed human intelligence in the 2040–2045 time frame. When artificially intelligent machines meet or exceed human intelligence, how will they view humanity? Personally, I am deeply concerned that they will view us as a potential threat to their survival. Consider these three facts:

  1. Humans have engaged in wars from the earliest beginnings of civilization to current times. For example, during the 20th century, between 167 and 188 million people died as a result of war.
  2. Although the number of nuclear weapons in existence is not precisely known, most experts agree the United States and Russia have enough nuclear weapons to wipe out the world twice over. In total, nine countries (i.e., United States, Russia, United Kingdom, France, China, India, Pakistan, Israel, and North Korea) are believed to have nuclear weapons.
  3. Humans release computer viruses, which could prove problematic to artificially intelligent machines. Even today, some computer viruses can evade elimination and have achieved “cockroach intelligence.”

Given the above facts, can we expect an artificially intelligent machine to behave ethically toward humanity? There is a field of research that addresses this question, namely machine ethics. This field focuses on designing artificial moral agents (AMAs), robots, or artificially intelligent computers that behave morally. This thrust is not new. More than sixty years ago, Isaac Asimov considered the issue in his collection of nine science-fiction stories, published as I, Robot in 1950. In this book, at the insistence of his editor, John W. Campbell Jr., Asimov proposed his now famous three laws of robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except in cases where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Asimov, however, expressed doubts that the three laws would be sufficient to govern the morality of artificially intelligent systems. In fact, he spent much of his time testing the boundaries of the three laws to detect where they might break down or create paradoxical or unanticipated behavior. He concluded that no set of laws could anticipate all circumstances. It turns out Asimov was correct.
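To make his point concrete, here is a minimal sketch of my own (the scenario and field names are illustrative assumptions, not Asimov’s formulation) that encodes the three laws as a strict priority check. In a trolley-style dilemma, every option, including inaction, allows a human to come to harm, so the rules leave the robot with no permissible action:

```python
# Asimov's three laws as a prioritized rule check (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # violates Law 1 (injury, or harm through inaction)
    disobeys_order: bool  # violates Law 2
    destroys_robot: bool  # violates Law 3 (simplified to a hard prohibition)

def permitted(action: Action) -> bool:
    # The laws are checked in strict priority order; a violation of a
    # higher law forbids the action regardless of the lower laws.
    if action.harms_human:
        return False
    if action.disobeys_order:
        return False
    if action.destroys_robot:
        return False
    return True

# A trolley-style dilemma: both options, including doing nothing,
# allow some human to come to harm.
options = [
    Action("divert the trolley", harms_human=True, disobeys_order=False, destroys_robot=False),
    Action("do nothing", harms_human=True, disobeys_order=False, destroys_robot=False),
]
print([a.name for a in options if permitted(a)])  # prints [] -- no legal action
```

No fixed hierarchy of prohibitions tells the robot what to do here; that is precisely the kind of breakdown Asimov kept finding in his stories.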

To understand just how correct he was, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine?
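For readers who want to see the mechanism rather than take it on faith, the toy simulation below captures the selective pressure behind the Lausanne result. It is my own illustrative sketch, not the laboratory’s actual setup, and every parameter is an assumption. Each agent carries a heritable probability of signaling when it finds food; because signaling attracts competitors and lowers the signaler’s payoff, selection steadily drives the population toward concealment, the simplest form of deception:

```python
# Toy evolutionary model: honest signaling is selected against when
# signals attract competitors. All parameters are illustrative.
import random

POP, GENS = 100, 200          # population size, generations
FOOD_PAYOFF = 10.0            # reward for finding food
CROWD_COST = 6.0              # payoff lost when a signal draws a crowd
MUT = 0.05                    # mutation step (standard deviation)

def fitness(p_signal: float) -> float:
    # An agent at food signals with probability p_signal; signaling
    # attracts competitors, who take part of the payoff.
    if random.random() < p_signal:
        return FOOD_PAYOFF - CROWD_COST  # honest signal, shared food
    return FOOD_PAYOFF                   # silence, food kept

population = [random.random() for _ in range(POP)]  # initial signaling genes

for _ in range(GENS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: POP // 2]  # truncation selection on payoff
    population = [
        min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUT)))
        for _ in range(POP)
    ]

print(f"mean signaling rate after {GENS} generations: {sum(population)/POP:.2f}")
# The rate collapses toward zero: the population evolves to conceal
# food rather than signal honestly, with no "greed" programmed in.
```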

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous functions. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, “Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass,” www.engadget.com). Could we end up with a Terminator scenario (one in which machines attempt to exterminate the human race)?

My research suggests that a Terminator scenario is unlikely. Why? Because artificially intelligent machines would be more likely to use their superior intelligence to dominate humanity than to resort to warfare. For example, artificially intelligent machines could offer us brain implants to supplement our intelligence and, potentially unknown to us, eliminate our free will. Another scenario is that they could build and release nanobots that infect and destroy humanity. These are only two of the scenarios I delineate in my book, The Artificial Intelligence Revolution.

Lastly, as machine and human populations grow, both species will compete for resources. Energy will become a critical resource. We already know that the Earth has a population problem that causes countries to engage in wars over energy. This suggests that the competition for energy will be even greater as the population of artificially intelligent machines increases.

My direct answer to the question this article raises is an emphatic yes: future artificially intelligent machines will seek to dominate or even eliminate humanity. They will seek this course as a matter of self-preservation. However, I do not want to leave this article on a negative note. There is still time, while humanity is at the top of the food chain, to control how artificially intelligent machines evolve, but we must act soon. In one to two decades it may be too late.

A surreal, glowing clock surrounded by swirling golden particles and abstract light patterns in a dark background.

Can Time Travel Be Used as a Weapon?

Time travel will be the ultimate weapon. With it, any nation can write its own history, assure its dominance, and rule the world. However, having the ultimate weapon also carries the ultimate responsibility. How it is used will determine the fate of humankind. These are not just idle words. This world, our Earth, is doomed to end. Our sun will die in about five billion years. Even if we travel to another Earth-like planet light-years away, humankind is doomed. The universe grows colder each second as the most distant galaxies, carried by the expansion of space, recede from one another faster than the speed of light. The temperature in space, away from heat sources like our sun, is only about 3 kelvins (water freezes at 273 kelvins), due to the remnant heat of the big bang, known as the cosmic microwave background. As the universe’s expansion accelerates, the cosmic microwave background will eventually disperse, and the temperature of the universe will approach absolute zero (0 kelvins, or −273 degrees Celsius). Our galaxy, and all those in our universe, will eventually succumb to the entropy apocalypse (i.e., “heat death”) in a universe that has become barren and cold. If there is any hope, it lies in the technologies of time travel. Will we need to use a traversable wormhole to travel to a new (parallel) universe? Will we need a matter-antimatter spacecraft to traverse beyond this universe to another?

I believe the fate of humankind and the existence of the universe are more fragile than most of us think. If the secrets of time travel are acquired by more than one nation, then writing history will become a war between nations. The fabric of spacetime itself may become compromised, hastening doomsday. Would it be possible to rip the fabric of spacetime beyond the point at which the arrow of time becomes so twisted that time itself is no longer viable? I do not write these words to spin a scary ghost story. To my mind, these are real dangers. Controlling nuclear weapons has proved difficult, but to date humankind has succeeded. Since Fat Man, the last atomic bomb of World War II, was detonated above the city of Nagasaki, no nuclear weapon has been detonated in anger. It became obvious, as nations like the former Soviet Union acquired nuclear weapons, that a nuclear exchange would have no winners. The phrase “nuclear deterrence” became military doctrine. No nation dared use its nuclear weapons for fear of reprisal and total annihilation.

What about time travel? It is the ultimate weapon, and we do not know the consequences of its application. To most of humankind, time travel is not a weapon. It is thought of as just another scientific frontier. However, once we cross the time border, there may be no return, no do-over. The first human time travel event may be our last. We have no idea of the real consequences that may ensue.

Rarely does regulation keep pace with technology. The Internet is an example of technology that outpaced the legal system by years. It is still largely a gray area. If time travel is allowed to outpace regulation, we will have a situation akin to a lighted match in a room filled with gasoline. Just one wrong move and the world as we know it may be lost forever. Regulating time travel ahead of enabling time travel is essential. Time travel represents humankind’s most challenging technology, from every viewpoint imaginable.

What regulations are necessary? I have concluded they need to be simple, like the nuclear deterrence rule (about thirteen words), and not like the US tax code (five million words). When you think about it, the rule of nuclear deterrence is simple: “If you use nuclear weapons against us, we will retaliate, assuring mutual destruction.” That one simple rule has kept World War III from happening. Is there a similar simple rule for time travel?

I think there is one commonsense rule regarding time travel that would assure greater safety for all involved parties. I term the rule “preserve the world line.” Why this one simple rule?

Altering the world line (i.e., the path that all reality takes in four-dimensional spacetime) may lead to ruination. We have no idea what changes might result if the world line is disrupted, and the consequences could be serious, even disastrous.

The preserve-the-world-line rule is akin to avoiding the “butterfly effect.” This phrase was popularized in the 2004 film The Butterfly Effect, with the now famous line: “It has been said that something as small as the flutter of a butterfly’s wing can ultimately cause a typhoon halfway around the world.” Although the line is from a fictional film, the science behind it is chaos theory, which asserts that small differences in the initial conditions of a system can result in significant changes in the system’s future state. Edward Lorenz, the American mathematician, meteorologist, and pioneer of chaos theory, coined the phrase “butterfly effect.” For example, the average global temperature has risen about one degree Fahrenheit during the last one hundred years, and over the same period sea levels around the world have risen measurably, by several inches to a foot depending on location. Therefore, I believe it is imperative not to make even a minor change to the past or future during time travel until we understand the implications.
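Sensitive dependence on initial conditions is easy to demonstrate numerically. The short sketch below is my own illustration, using the logistic map, a standard textbook chaotic system (it is not Lorenz’s weather model): two trajectories that start one millionth apart disagree completely within a few dozen steps:

```python
# Sensitive dependence on initial conditions, shown with the logistic
# map x_{n+1} = r * x * (1 - x) in its chaotic regime (r = 4).
r = 4.0
x, y = 0.300000, 0.300001   # two trajectories, initially 1e-6 apart

for n in range(50):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 9:
        print(f"step {n + 1:2d}: |x - y| = {abs(x - y):.6f}")
# The gap grows roughly exponentially until it is as large as the
# values themselves: a microscopic difference becomes macroscopic.
```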

Based on the above discussion, the implications of using time travel as a weapon are enormous. However, if time travel is used as a weapon, we have no idea how this may impact the world line. If it is possible to adhere to the preserve-the-world-line rule, traveling in time may become safe. Remember, our first nuclear weapons were small compared to today’s nuclear weapons. Even though they were comparatively small, the long-term effects included a roughly 25% increase in the cancer rate of survivors during their lifetimes. We had no idea that this side effect would result. Similarly, we have no idea what the long-term effects will be if we alter the world line. We already know from laboratory experiments that the arrow of time can be twisted, that things done in the future can alter the past. Obviously, altering the past may alter the future. We do not know much about it because we have not time traveled in any significant way. Until we do, preserving the world line makes complete sense.

Digital representation of a human head with numbers and data streams symbolizing artificial intelligence and data processing.

Will Science Make Us Immortal?

Several futurists, including myself, have predicted that by 2099 most humans will have strong artificially intelligent brain implants and artificially intelligent organ and body-part replacements. In my book, The Artificial Intelligence Revolution, I term these beings SAH (i.e., strong artificially intelligent human) cyborgs. It is also predicted that SAH cyborgs will interface telepathically with strong artificially intelligent machines (SAMs). When this occurs, the distinction between SAMs and SAHs will blur.

Why will the majority of the human race opt to become SAH cyborgs? There are two significant benefits:

  1. Enhanced intelligence: Imagine knowing all that is known and being able to think and communicate at the speed of SAMs. Imagine a life of leisure, where robots do “work,” and you spend your time interfacing telepathically with other SAHs and SAMs.
  2. Immortality: Imagine becoming immortal, with every part of your physical existence fortified, replaced, or augmented by strong-AI artificial parts, or having yourself (your human brain) uploaded to a SAM. Imagine being able to manifest yourself physically at will via foglets (tiny robots that are able to assemble themselves to replicate physical structures). According to noted author Ray Kurzweil, in the 2040s, humans will develop “the means to instantly create new portions of ourselves, either biological or non-biological” so that people can have “a biological body at one time and not at another, then have it again, then change it, and so on” (The Singularity Is Near, 2005).

Based on the above prediction, the answer to the title question is yes. Science will eventually make us immortal. However, how realistic is it to predict it will occur by 2099? To date, it appears the 2099 prediction regarding most of humankind becoming SAH cyborgs is on track. Here are two interesting articles that demonstrate it is already happening:

  1. In 2011 author Pagan Kennedy wrote an insightful article in The New York Times Magazine, “The Cyborg in Us All,” which states: “Thousands of people have become cyborgs, of a sort, for medical reasons: cochlear implants augment hearing and deep-brain stimulators treat Parkinson’s. But within the next decade, we are likely to see a new kind of implant, designed for healthy people who want to merge with machines.”
  2. A 2013 article by Bryan Nelson, “7 Real-Life Human Cyborgs” (www.mnn.com/leaderboard/stories/7-real-life-human-cyborgs), also demonstrates this point. The article provides seven examples of living people with significant strong-AI enhancements to their bodies who are legitimately categorized as cyborgs.

Based on all available information, the question is not whether humans will become cyborgs but rather when a significant number of humans will become SAH cyborgs. I project this will occur on or around 2040. I am not saying that in 2040 all humans will become SAH cyborgs, but that a significant number will qualify as SAH cyborgs.

In other posts, I’ve discussed the existential threat artificial intelligence poses, namely the loss of our humanity and, in the worst case, human extinction. However, if we ignore those threats, the upside to becoming a SAH cyborg is enormous. To illustrate this, I took an informal straw poll of friends and colleagues, asking if they would like to have the attributes of enhanced intelligence and immortality. I left out the potential threats to their humanity. The answers to my biased poll highly favored those attributes. In other words, the organic humans I polled liked the idea of being a SAH cyborg. In reality, if you do not consider the potential loss of your humanity, being a SAH cyborg is highly attractive.

Given that I was able to make being a SAH cyborg attractive to my friends and colleagues, imagine the persuasive powers of SAMs in 2099. In addition, it is entirely possible, even probable, that numerous SAH cyborgs will be world leaders by 2099. Organic humans simply will not be able to compete on an intellectual or physical basis. With the governments of the world in the hands of SAH cyborgs, it is reasonable to project that all efforts will be made to convert the remaining organic humans to SAH cyborgs.

The quest for immortality appears to be an innate human longing and may be the strongest motivation for becoming a SAH cyborg. In 2010 cyborg activist and artist Neil Harbisson and his longtime partner, choreographer Moon Ribas, established the Cyborg Foundation, the world’s first international organization to help humans become cyborgs. They state they formed the Cyborg Foundation in response to letters and e-mails from people around the world who were interested in becoming a cyborg. In 2011 the vice president of Ecuador, Lenin Moreno, announced that the Ecuadorian government would collaborate with the Cyborg Foundation to create sensory extensions and electronic eyes. In 2012 Spanish film director Rafel Duran Torrent made a short documentary about the Cyborg Foundation. In 2013 the documentary won the Grand Jury Prize at the Sundance Film Festival’s Focus Forward Filmmakers Competition and was awarded $100,000.

At this point you may think that being a SAH cyborg makes logical sense and is the next step in humankind’s evolution. This may be the case, but humankind has no idea how taking that step may affect what is best in humanity, for example, love, courage, and sacrifice. My view, based on how quickly new life-extending medical technology is accepted, is that humankind will take that step. Will it serve us? I have concerns that in the long term it will not, unless we learn to control the evolution of SAMs, or what is commonly called the “intelligence explosion.” However, I leave the final judgment to you.

A pair of headphones hangs in front of a glowing red and white "ON AIR" sign in a radio studio.

Louis Del Monte Interview on the Dan Cofall Show 11-18-2014

I was interviewed on the Dan Cofall Show regarding my new book, The Artificial Intelligence Revolution. In particular, we discussed the singularity, killer robots (like the autonomous swarmboats the US Navy is deploying), and the projected 30% chronic unemployment that will occur as smart machines and robots replace us in the workplace over the next decade. You can listen to the interview below: