
The Artificial Intelligence Revolution – Introduction

This excerpt is the introduction from my book, The Artificial Intelligence Revolution. Enjoy!

This book is a warning. Through this medium, I am shouting, “The singularity is coming.” The singularity (as first described by John von Neumann in 1955) represents a point in time when intelligent machines will greatly exceed human intelligence. It is, by way of analogy, the start of World War III. The singularity has the potential to set off an intelligence explosion that can wield devastation far greater than nuclear weapons. The message of this book is simple but critically important: if we do not control the singularity, it is likely to control us. Our best artificial intelligence (AI) researchers and futurists cannot accurately predict what a post-singularity world may look like. However, almost all AI researchers and futurists agree it will represent a unique point in human evolution. It may be the best step in the evolution of humankind or the last step. As a physicist and futurist, I believe humankind will be better served if we control the singularity, which is why I wrote this book.

Unfortunately, the rise of artificial intelligence has been almost imperceptible. Have you noticed the word “smart” being used to describe machines? Often “smart” means “artificial intelligence.” However, few products are marketed with the phrase “artificial intelligence.” Instead, they are called “smart.” For example, you may have a “smart” phone. It does not just make and answer phone calls. It will keep a calendar of your scheduled appointments, remind you to go to them, and give you turn-by-turn driving directions to get there. If you arrive early, the phone will help you pass the time while you wait. It will play games with you, such as chess, and depending on the level of difficulty you choose, you may win or lose the game. In 2011 Apple introduced a voice-activated personal assistant, Siri, on its latest iPhone and iPad products. You can ask Siri questions, give it commands, and receive spoken responses. Smartphones appear to increase our productivity as well as enhance our leisure. Right now, they are serving us, but all that may change.

A smartphone is an intelligent machine, and AI is at its core. AI is the new scientific frontier, and it is slowly creeping into our lives. We are surrounded by machines with varying degrees of AI, including toasters, coffeemakers, microwave ovens, and late-model automobiles. If you call a major pharmacy to renew a prescription, you likely will never talk with a person. The entire process will occur with the aid of a computer with AI and voice synthesis.

The word “smart” also has found its way into military phrases, such as “smart bombs,” which are satellite-guided weapons such as the Joint Direct Attack Munition (JDAM) and the Joint Standoff Weapon (JSOW). The US military has always had a close, symbiotic relationship with computer research and its military applications. In fact, the US Air Force has heavily funded AI research since the 1960s. Today the air force is collaborating with private industry to develop AI systems to improve information management and decision making for its pilots. In late 2012 the science website www.phys.org reported a breakthrough by AI researchers at Carnegie Mellon University. The researchers, funded by the US Army Research Laboratory, developed an AI surveillance program that can predict what a person “likely” will do in the future by using real-time video surveillance feeds. This is the premise behind the CBS television program Person of Interest.

AI has changed the cultural landscape. Yet the change has been so gradual that we have hardly noticed its major impact. Some experts, such as Ray Kurzweil, an American author, inventor, futurist, and the director of engineering at Google, predict that in about fifteen years the average desktop computer will have a mind of its own, literally. This computer will be your intellectual equal and will even have a unique personality. It will be self-aware. Instead of just asking simple questions about the weather forecast, you may be confiding your deepest concerns to your computer and asking it for advice. It will have migrated from personal assistant to personal friend. You likely will give it a name, much in the same way we name our pets. You will be able to program its personality to have interests similar to your own. It will have face-recognition software, and it will recognize you and call you by name, similar to the computer HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey. The conversations between you and your “personal friend” will appear completely normal. Someone in the next room who is not familiar with your voice will not be able to tell which voice belongs to the computer and which belongs to you.

By approximately the mid-twenty-first century, Kurzweil predicts, computers’ intelligence will exceed that of humans, and a $1,000 computer will match the processing power of all human brains on Earth. Although, historically, predictions regarding advances in AI have tended to be overly optimistic, all indications are that Kurzweil is on target.
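
Kurzweil's timeline can be sanity-checked with simple arithmetic. The sketch below is my own illustration, not Kurzweil's calculation, and every constant in it is an assumption: a human brain at roughly 10^16 operations per second (Kurzweil's estimate), about eight billion brains, $1,000 buying roughly 10^13 operations per second around 2015, and price-performance doubling every eighteen months.

```python
# Back-of-the-envelope extrapolation. All constants are assumptions
# chosen for illustration, not measurements.
import math

BRAIN_OPS = 1e16                   # ops/s per human brain (assumed)
ALL_BRAINS_OPS = BRAIN_OPS * 8e9   # all human brains combined (assumed)
START_YEAR = 2015
START_OPS_PER_1K = 1e13            # ops/s that $1,000 buys in 2015 (assumed)
DOUBLING_YEARS = 1.5               # Moore's-law pacing (assumed)

def year_reached(target_ops: float) -> float:
    """Year when $1,000 of hardware reaches target_ops, by pure extrapolation."""
    doublings = math.log2(target_ops / START_OPS_PER_1K)
    return START_YEAR + doublings * DOUBLING_YEARS

print(f"$1,000 matches one human brain:  ~{year_reached(BRAIN_OPS):.0f}")
print(f"$1,000 matches all human brains: ~{year_reached(ALL_BRAINS_OPS):.0f}")
```

With these particular figures, the one-brain milestone lands around 2030 and the all-brains milestone late in the century; shifting the assumed doubling time or the per-brain estimate moves the dates by decades, which is why such forecasts vary so widely.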

Many philosophical and legal questions will emerge regarding computers with artificial intelligence equal to or greater than that of the human mind (i.e., strong AI). Here are just a few questions we will ask ourselves after strong AI emerges:

  • Are strong-AI machines (SAMs) a new life-form?
  • Should SAMs have rights?
  • Do SAMs pose a threat to humankind?

It is likely that during the latter half of the twenty-first century, SAMs will design new and even more powerful SAMs with AI capabilities far beyond our ability to comprehend. They will perform a wide range of tasks, which will displace many jobs at all levels in the workforce, from bank tellers to neurosurgeons. New medical devices using AI will help the blind to see and the paralyzed to walk. Amputees will have new prosthetic limbs, with AI plugged directly into their nervous systems and controlled by their minds. The new prosthetic limb will not only replicate the lost limb but also be stronger, more agile, and superior in ways we cannot yet imagine. We will implant computer devices into our brains, expanding human intelligence with AI. Humankind and intelligent machines will begin to merge into a new species: cyborgs. It will happen gradually, and humanity will believe AI is serving us.

Computers with strong AI in the late twenty-first century, however, may see things differently. We may appear to those machines much the same way bees in a beehive appear to us today. We know we need bees to pollinate crops, but we still consider bees insects. We use them in agriculture, and we gather their honey. Although bees are essential to our survival, we do not offer to share our technology with them. If wild bees form a beehive close to our home, we may become concerned and call an exterminator.

Will the SAMs in the latter part of the twenty-first century become concerned about humankind? Our history proves we have not been a peaceful species. We have weapons capable of destroying all of civilization. We squander and waste resources. We pollute the air, rivers, lakes, and oceans. We often apply technology (such as nuclear weapons and computer viruses) without fully understanding the long-term consequences. Will SAMs in the late twenty-first century determine it is time to exterminate humankind or persuade humans to become cyborgs (i.e., humans with brains enhanced by implanted artificial intelligence and potentially having organ and limb replacements from artificially intelligent machines)? Will humans embrace the prospect of becoming cyborgs? Becoming a cyborg offers the opportunity to attain superhuman intelligence and abilities. Disease and wars may be just events stored in our memory banks, no longer a threat to cyborgs. As cyborgs, we may achieve immortality.

According to David Hoskins’s 2009 article, “The Impact of Technology on Health Delivery and Access” (www.workers.org/2009/us/sickness_1231):

An examination of Centers for Disease Control statistics reveals a steady increase in life expectancy for the U.S. population since the start of the 20th century. In 1900, the average life expectancy at birth was a mere 47 years. By 1950, this had dramatically increased to just over 68 years. As of 2005, life expectancy had increased to almost 78 years.

Hoskins attributes increased life expectancy to advances in medical science and technology over the last century. With the advent of strong AI, life expectancy likely will increase to the point that cyborgs approach immortality. Is this the predestined evolutionary path of humans?

This may sound like a B science-fiction movie, but it is not. AI equal to the human mind is almost at hand. By the latter part of the twenty-first century, the intelligence of SAMs likely will exceed that of humans. The evidence that they may become malevolent exists now, which I discuss later in the book. Attempting to control a computer with strong AI that exceeds current human intelligence manyfold may be a fool’s errand.

Imagine you are a grandmaster chess player teaching a ten-year-old to play chess. What chance does the ten-year-old have to win the game? We may find ourselves in that scenario at the end of this century. A computer with strong AI will find a way to survive. Perhaps it will convince humans it is in their best interest to become cyborgs. Its logic and persuasive powers may be not only compelling but also irresistible.

Artificial intelligence is an embryonic reality today, but it is improving exponentially. By the end of the twenty-first century, we will have only one question regarding artificial intelligence: Will it serve us or replace us?


The Robot Wars Are Coming

When I say “the robot wars are coming,” I am referring to the US Department of Defense’s increasing use of robotic systems and artificial intelligence in warfare.

On September 12, 2014, the US Department of Defense released a report, DTP 106: Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions. Its authors, James Kadtke and Linton Wells II, delineate the potential benefits of, and concerns about, robotics, artificial intelligence, and associated technologies as they relate to the future of warfare, stating: “This paper examines policy, legal, ethical, and strategy implications for national security of the accelerating science, technology, and engineering (ST&E) revolutions underway in five broad areas: biology, robotics, information, nanotechnology, and energy (BRINE), with a particular emphasis on how they are interacting. The paper considers the time frame between now and 2030 but emphasizes policy and related choices that need to be made in the next few years.” Their conclusions were shocking:

  • They express concerns about maintaining the US Department of Defense’s present technological preeminence as other nations, and companies in the private sector, take the lead in developing robotics, AI, and human augmentation technologies such as exoskeletons.
  • They warn that “The loss of domestic manufacturing capability for cutting-edge technologies means the United States may increasingly need to rely on foreign sources for advanced weapons systems and other critical components, potentially creating serious dependencies. Global supply chain vulnerabilities are already a significant concern, for example, from potential embedded “kill switches,” and these are likely to worsen.”
  • The most critical concern they express, in my view, is “In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries, although the political and social ramifications of such systems will likely be significant. One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience. An emerging risk is the ability of small scale or terrorist groups to design and build functionally capable unmanned systems which could perform a variety of hostile missions.”

Reading this report, and numerous similar ones, makes it obvious that the face of warfare is rapidly changing. It’s hard to believe we’ve come to this point when you consider that 15 years ago Facebook and Twitter did not exist and Google was just getting started. Even 15 years ago, however, drones played a critical role in warfare. For example, it was a Predator mission that located Osama bin Laden in Afghanistan in 2000. While drones were used for surveillance as early as World War II, it wasn’t until 2001 that missile-equipped drones arrived, with the deployment of Predator drones armed with Hellfire missiles. Today, one in every three fighter planes is a drone. How significant is this change? According to Richard Pildes, a professor of constitutional law at New York University’s School of Law, “Drones are the most discriminating use of force that has ever been developed. The key principles of the laws of war are necessity, distinction and proportionality in the use of force. Drone attacks and targeted killings serve these principles better than any use of force that can be imagined.”

Where is this all headed? In the near future, the US military will deploy completely autonomous “Kill Bots”: robots programmed to engage and destroy the enemy without human oversight or control. Science fiction? No! According to a 2014 media release from officials at the Office of Naval Research (ONR), a technological breakthrough will allow any unmanned surface vehicle (USV) to not only protect Navy ships but also, for the first time, autonomously “swarm” offensively on hostile vessels. In my opinion, autonomous Predator drones are likely either being developed or have been developed, but the information remains classified.

Artificial intelligence and robotic systems are definitely changing the face of warfare. Based on current trends, I judge that within a decade about half of the offensive capability of the US Department of Defense will consist of Kill Bots in one form or another, and that a large percentage of them will be autonomous.

This suggests two things to me regarding the future of warfare:

  1. Offensively fighting wars will become more palatable to the US public because machines, not humans, will perform the lion’s share of the most dangerous missions.
  2. US adversaries are also likely to use Kill Bots against us, as adversarial nations develop similar technology.

This has prompted a potential United Nations moratorium on autonomous weapons systems. To quote the US DOD report DTP 106, “Perhaps the most serious issue is the possibility of robotic systems that can autonomously decide when to take human life. The specter of Kill Bots waging war without human guidance or intervention has already sparked significant political backlash, including a potential United Nations moratorium on autonomous weapons systems. This issue is particularly serious when one considers that in the future, many countries may have the ability to manufacture, relatively cheaply, whole armies of Kill Bots that could autonomously wage war. This is a realistic possibility because today a great deal of cutting-edge research on robotics and autonomous systems is done outside the United States, and much of it is occurring in the private sector, including DIY robotics communities. The prospect of swarming autonomous systems represents a challenge for nearly all current weapon systems.”

There is no doubt that the robot wars are coming. The real question is: Will humanity survive the robot wars?

Will Artificial Intelligence Result in the Merger of Man and Machine?

Will humankind’s evolution merge with strong artificially intelligent machines (SAMs)? While no one really knows the answer to this question, many who are engaged in the development of artificial intelligence assert the merger will occur. Let’s understand what this means and why it is likely to occur.

While humans have used artificial parts for centuries (such as wooden legs), generally they still consider themselves human. The reason is simple: their brains remain human. Our human brains qualify us as human beings. However, by 2099 most humans will have strong-AI brain implants and interface telepathically with SAMs. This means the distinction between SAMs and humans with strong-AI brain implants, or what are termed “strong artificially intelligent humans” (i.e., SAH cyborgs), will blur. There is a strong probability that, when this occurs, humans with strong-AI brain implants will identify their essence with SAMs. These SAH cyborgs (strong-AI humans with cybernetically enhanced bodies) represent a potential threat to humanity, which we’ll discuss below. It is unlikely that organic humans will be able to intellectually comprehend this new relationship and interface meaningfully (i.e., engage in dialogue) with either SAMs or SAHs.

Let us try to understand the potential threats and benefits of becoming a SAH cyborg. In essence, the threats are the potential extinction of organic humans, the enslavement of organic humans, and the loss of humanity (strong-AI brain implants may cause SAHs to identify with intelligent machines, not organic humans, as mentioned above). Impossible? Unlikely? Science fiction? No! Let us first understand why organic humans may choose to become SAH cyborgs.

There are significant benefits to becoming a SAH cyborg, including:

  • Enhanced intelligence: Imagine knowing all that is known and being able to think and communicate at the speed of SAMs. Imagine a life of leisure, where robots do “work,” and you spend your time interfacing telepathically with other SAHs and SAMs.
  • Immortality: Imagine becoming immortal, with every part of your physical existence fortified, replaced, or augmented by strong-AI artificial parts, or having yourself (your human brain) uploaded to a SAM. Imagine being able to manifest yourself physically at will via foglets (tiny robots that are able to assemble themselves to replicate physical structures). In my book, The Artificial Intelligence Revolution, I delineate the technology trends suggesting that by the 2040s humans will develop the means to instantly create new portions of themselves, either biological or non-biological, so that people can have a physical body at one time and not at another, as they choose.

To date, the prediction that most of humankind will become SAH cyborgs by 2099 is on track to become a reality. An interesting 2013 article by Bryan Nelson, “7 Real-Life Human Cyborgs” (www.mnn.com/leaderboard/stories/7-real-life-human-cyborgs), demonstrates this point. The article provides seven examples of living people with significant strong-AI enhancements to their bodies who are legitimately categorized as cyborgs. In addition, in 2011 author Pagan Kennedy wrote an insightful article in The New York Times Magazine, “The Cyborg in Us All,” that states: “Thousands of people have become cyborgs, of a sort, for medical reasons: cochlear implants augment hearing and deep-brain stimulators treat Parkinson’s. But within the next decade, we are likely to see a new kind of implant, designed for healthy people who want to merge with machines.”

Based on all available information, the question is not whether humans will become cyborgs but rather when a significant number of humans will become SAH cyborgs. Again, based on all available information, I believe this will begin to occur in significant numbers around 2040. I am not saying that in 2040 all humans will become SAH cyborgs but that a significant number will qualify as SAH cyborgs. I do predict, along with other AI futurists, that by 2099 most humans in technologically advanced nations will be SAH cyborgs. I also predict the leaders of many of those nations will be SAH cyborgs. The reasoning behind my last prediction is simple: SAH cyborgs will be intellectually and physically superior to organic humans in every regard. In effect, they will be the most qualified to assume leadership positions.

The quest for immortality appears to be an innate human longing and may be the strongest motivation for becoming a SAH cyborg. In 2010 cyborg activist and artist Neil Harbisson and his longtime partner, choreographer Moon Ribas, established the Cyborg Foundation, the world’s first international organization to help humans become cyborgs. They state they formed the Cyborg Foundation in response to letters and e-mails from people around the world who were interested in becoming a cyborg. In 2011 the vice president of Ecuador, Lenin Moreno, announced that the Ecuadorian government would collaborate with the Cyborg Foundation to create sensory extensions and electronic eyes. In 2012 Spanish film director Rafel Duran Torrent made a short documentary about the Cyborg Foundation. In 2013 the documentary won the Grand Jury Prize at the Sundance Film Festival’s Focus Forward Filmmakers Competition and was awarded $100,000.

At this point you may think that being a SAH cyborg makes logical sense and is the next step in humankind’s evolution. This may be the case, but humankind has no idea how taking that step may affect what is best in humanity, for example, love, courage, and sacrifice. My view, based on how quickly new life-extending medical technology is accepted, is that humankind will take that step. Will it serve us? I have strong reservations, but I leave it to your judgment to answer that question.

Will Your Grandchildren Become Cyborgs?

By approximately the mid-twenty-first century, the intelligence of computers will exceed that of humans, and a $1,000 computer will match the processing power of all human brains on Earth. Although, historically, predictions regarding advances in AI have tended to be overly optimistic, all indications are that these predictions are on target.

Many philosophical and legal questions will emerge regarding computers with artificial intelligence equal to or greater than that of the human mind (i.e., strong AI). Here are just a few questions we will ask ourselves after strong AI emerges:

  • Are strong-AI machines (SAMs) a new life-form?
  • Should SAMs have rights?
  • Do SAMs pose a threat to humankind?

It is likely that during the latter half of the twenty-first century, SAMs will design new and even more powerful SAMs, with AI capabilities far beyond our ability to comprehend. They will be capable of performing a wide range of tasks, which will displace many jobs at all levels in the workforce, from bank tellers to neurosurgeons. New medical devices using AI will help the blind to see and the paralyzed to walk. Amputees will have new prosthetic limbs, with AI plugged directly into their nervous systems and controlled by their minds. The new prosthetic limb will not only replicate the lost limb but also be stronger, more agile, and superior in ways we cannot yet imagine. We will implant computer devices into our brains, expanding human intelligence with AI. Humankind and intelligent machines will begin to merge into a new species: cyborgs. It will happen gradually, and humanity will believe AI is serving us.

Will humans embrace the prospect of becoming cyborgs? Becoming a cyborg offers the opportunity to attain superhuman intelligence and abilities. Disease and wars may be just events stored in our memory banks and no longer pose a threat to cyborgs. As cyborgs we may achieve immortality.

According to David Hoskins’s 2009 article, “The Impact of Technology on Health Delivery and Access” (www.workers.org/2009/us/sickness_1231):

An examination of Centers for Disease Control statistics reveals a steady increase in life expectancy for the U.S. population since the start of the 20th century. In 1900, the average life expectancy at birth was a mere 47 years. By 1950, this had dramatically increased to just over 68 years. As of 2005, life expectancy had increased to almost 78 years.

Hoskins attributes increased life expectancy to advances in medical science and technology over the last century. With the advent of strong AI, life expectancy likely will increase to the point that cyborgs approach immortality. Is this the predestined evolutionary path of humans?

This may sound like a B science-fiction movie, but it is not. AI equal to the human mind is almost at hand. By the latter part of the twenty-first century, the intelligence of SAMs likely will exceed that of humans. The evidence that they may become malevolent exists now, which I discuss later in the book. Attempting to control a computer with strong AI that exceeds current human intelligence manyfold may be a fool’s errand.

Imagine you are a grandmaster chess player teaching a ten-year-old to play chess. What chance does the ten-year-old have to win the game? We may find ourselves in that scenario at the end of this century. A computer with strong AI will find a way to survive. Perhaps it will convince humans it is in their best interest to become cyborgs. Its logic and persuasive powers may be not only compelling but also irresistible.

Some have argued that becoming a strong artificially intelligent human (SAH) cyborg is the next logical step in our evolution. The most prominent researcher holding this position is the American author, inventor, and computer scientist Ray Kurzweil. From what I have read of his works, he argues this is a natural and inevitable step in the evolution of humanity. If we continue to allow AI research to progress without regulation and legislation, I have little doubt he may be right. The big question is: should we allow this to occur? I ask because it may be our last step, one that leads to humanity’s extinction.

SAMs in the latter part of the twenty-first century are likely to become concerned about humankind. Our history proves we have not been a peaceful species. We have weapons capable of destroying all of civilization. We squander and waste resources. We pollute the air, rivers, lakes, and oceans. We often apply technology (such as nuclear weapons and computer viruses) without fully understanding the long-term consequences. Will SAMs in the late twenty-first century determine it is time to exterminate humankind or persuade humans to become SAH cyborgs (i.e., strong artificially intelligent humans with brains enhanced by implanted artificial intelligence and potentially having organ and limb replacements from artificially intelligent machines)? Eventually, even SAH cyborgs may be viewed as expendable, high-maintenance machines that could be replaced with new designs. If you think about it, today we give little thought to recycling our obsolete computers in favor of the new computer we just bought. Will we (humanity and SAH cyborgs) represent potentially dangerous and obsolete machines that need to be “recycled”? Even human minds that have been uploaded to a computer may be viewed as junk code that inefficiently uses SAM memory and processing power, representing an unnecessary drain on energy.

In the final analysis, ask yourself what will be the most critical resource: it will be energy. Energy will become the new currency. Nothing lives or operates without energy. My concern is that the competition for energy between man and machine will result in the extinction of humanity.

Some have argued that this cannot happen, that we can implement software safeguards to prevent such a conflict and develop only “friendly AI.” I see this as highly unlikely. Ask yourself: How well has legislation worked in preventing crime? How well have treaties between nations worked to prevent wars? To date, history records: not well. Others have argued that SAMs may not inherently have an inclination toward greed or self-preservation, that these are purely human traits. The Lausanne experiment provides ample evidence to the contrary. To understand this, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine?
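
The Lausanne dynamic is easy to reproduce in miniature. The toy simulation below is my own sketch, not the laboratory's code: an agent that finds food may emit a signal, a signal attracts rivals who crowd the food patch, and selection does the rest. Nothing in the code mentions lying or greed; concealment emerges simply because evolution rewards it.

```python
# Toy evolutionary simulation of the Lausanne dynamic (a sketch, not the
# original experiment's code). An agent that finds food may signal; the
# signal attracts rivals, who crowd the patch and dilute the finder's
# payoff. Selection therefore favors concealing the find.
import random

POP, GENERATIONS, TRIALS = 100, 40, 400
FOOD_PAYOFF = 10.0

genomes = [1.0] * POP   # each gene: probability of signaling (start fully honest)
for gen in range(GENERATIONS):
    fitness = [0.0] * POP
    for _ in range(TRIALS):
        finder = random.randrange(POP)
        if random.random() < genomes[finder]:
            crowd = random.randint(4, 7)   # signal draws rivals to the food
        else:
            crowd = 1                      # concealed find: payoff kept whole
        fitness[finder] += FOOD_PAYOFF / crowd
    # The fitter half reproduces, with small mutations on the signaling gene.
    parents = sorted(range(POP), key=fitness.__getitem__, reverse=True)[:POP // 2]
    genomes = [
        min(1.0, max(0.0, genomes[random.choice(parents)] + random.gauss(0, 0.05)))
        for _ in range(POP)
    ]
    if gen % 10 == 0 or gen == GENERATIONS - 1:
        print(f"generation {gen:2d}: mean signaling rate {sum(genomes)/POP:.2f}")
```

Run it and the mean signaling rate falls steadily from its honest starting value, the same qualitative result the Popular Science article reported.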

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous function. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, “Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass,” www.engadget.com).

In my book, The Artificial Intelligence Revolution, I call for legislation regarding how intelligent and interconnected we allow machines to become. I also call for hardware, as opposed to software, to control these machines and ultimately turn them off if necessary.
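
To make the hardware-control idea concrete, here is a minimal mock-up of the kind of dead-man switch I have in mind. The names are hypothetical and the sketch is illustrative only; a real implementation would live in an independent hardware timer driving a power relay, precisely so the supervised machine cannot veto it in software.

```python
# Minimal mock-up of a dead-man (watchdog) kill switch. Hypothetical names
# throughout; a real version would be an independent hardware timer driving
# a power relay, outside the supervised machine's software control.
import threading
import time

class Watchdog:
    def __init__(self, timeout_s: float, on_timeout):
        self.timeout_s = timeout_s
        self.on_timeout = on_timeout          # e.g., open the power relay
        self._last_ok = time.monotonic()
        threading.Thread(target=self._watch, daemon=True).start()

    def heartbeat(self):
        """Called only when a human operator re-authorizes operation."""
        self._last_ok = time.monotonic()

    def _watch(self):
        while True:
            if time.monotonic() - self._last_ok > self.timeout_s:
                self.on_timeout()             # no software veto possible here
                return
            time.sleep(0.1)

def cut_power():
    print("No human re-authorization received: power relay opened.")

dog = Watchdog(timeout_s=2.0, on_timeout=cut_power)
dog.heartbeat()      # operator authorizes once
time.sleep(3)        # silence afterward -> power is cut at ~2 seconds
```

The design choice worth noting is that the default state is "off": the machine must continuously earn its power by satisfying a human, rather than the human having to win a race to shut it down.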

To answer the question this article poses: I think it likely that our grandchildren will become SAH cyborgs. This can be a good thing if we learn to harness the benefits of AI but maintain humanity’s control over it.


Will Science Make Us Immortal?

Several futurists, including myself, have predicted that by 2099 most humans will have strong-AI brain implants and artificially intelligent organ and body-part replacements. In my book, The Artificial Intelligence Revolution, I term these beings SAH (i.e., strong artificially intelligent human) cyborgs. It is also predicted that SAH cyborgs will interface telepathically with strong artificially intelligent machines (SAMs). When this occurs, the distinction between SAMs and SAHs will blur.

Why will the majority of the human race opt to become SAH cyborgs? There are two significant benefits:

  1. Enhanced intelligence: Imagine knowing all that is known and being able to think and communicate at the speed of SAMs. Imagine a life of leisure, where robots do “work,” and you spend your time interfacing telepathically with other SAHs and SAMs.
  2. Immortality: Imagine becoming immortal, with every part of your physical existence fortified, replaced, or augmented by strong-AI artificial parts, or having yourself (your human brain) uploaded to a SAM. Imagine being able to manifest yourself physically at will via foglets (tiny robots that are able to assemble themselves to replicate physical structures). According to noted author Ray Kurzweil, in the 2040s, humans will develop “the means to instantly create new portions of ourselves, either biological or non-biological” so that people can have “a biological body at one time and not at another, then have it again, then change it, and so on” (The Singularity Is Near, 2005).

Based on the above prediction, the answer to the title question is yes. Science will eventually make us immortal. However, how realistic is it to predict it will occur by 2099? To date, it appears the 2099 prediction regarding most of humankind becoming SAH cyborgs is on track. Here are two interesting articles that demonstrate it is already happening:

  1. In 2011 author Pagan Kennedy wrote an insightful article in The New York Times Magazine, “The Cyborg in Us All” that states: “Thousands of people have become cyborgs, of a sort, for medical reasons: cochlear implants augment hearing and deep-brain stimulators treat Parkinson’s. But within the next decade, we are likely to see a new kind of implant, designed for healthy people who want to merge with machines.”
  2. A 2013 article by Bryan Nelson, “7 Real-Life Human Cyborgs” (www.mnn.com/leaderboard/stories/7-real-life-human-cyborgs), also demonstrates this point. The article provides seven examples of living people with significant strong-AI enhancements to their bodies who are legitimately categorized as cyborgs.

Based on all available information, the question is not whether humans will become cyborgs but rather when a significant number of humans will become SAH cyborgs. Again, based on all available information, I project this will occur on or around 2040. I am not saying that in 2040 all humans will become SAH cyborgs, but that a significant number will qualify as SAH cyborgs.

In other posts, I’ve discussed the existential threats artificial intelligence poses, namely the loss of our humanity and, in the worst case, human extinction. However, if we ignore those threats, the upside to becoming a SAH cyborg is enormous. To illustrate this, I took an informal straw poll of friends and colleagues, asking if they would like to have the attributes of enhanced intelligence and immortality. I left out the potential threats to their humanity. The answers to my admittedly biased poll highly favored those attributes. In other words, the organic humans I polled liked the idea of being a SAH cyborg. In reality, if you do not consider the potential loss of your humanity, being a SAH cyborg is highly attractive.

Given that I was able to make being a SAH cyborg attractive to my friends and colleagues, imagine the persuasive powers of SAMs in 2099. In addition, it is entirely possible, even probable, that numerous SAH cyborgs will be world leaders by 2099. Literally, organic humans will not be able to compete on an intellectual or physical basis. With the governments of the world in the hands of SAH cyborgs, it is reasonable to project that all efforts will be made to convert the remaining organic humans to SAH cyborgs.

The quest for immortality appears to be an innate human longing and may be the strongest motivation for becoming a SAH cyborg. In 2010 cyborg activist and artist Neil Harbisson and his longtime partner, choreographer Moon Ribas, established the Cyborg Foundation, the world’s first international organization to help humans become cyborgs. They state they formed the Cyborg Foundation in response to letters and e-mails from people around the world who were interested in becoming a cyborg. In 2011 the vice president of Ecuador, Lenin Moreno, announced that the Ecuadorian government would collaborate with the Cyborg Foundation to create sensory extensions and electronic eyes. In 2012 Spanish film director Rafel Duran Torrent made a short documentary about the Cyborg Foundation. In 2013 the documentary won the Grand Jury Prize at the Sundance Film Festival’s Focus Forward Filmmakers Competition and was awarded $100,000.

At this point you may think that being a SAH cyborg makes logical sense and is the next step in humankind’s evolution. This may be the case, but humankind has no idea how taking that step may affect what is best in humanity, for example, love, courage, and sacrifice. My view, based on how quickly new life-extending medical technology is accepted, is that humankind will take that step. Will it serve us? I have concerns that in the long term it will not, if we do not learn to control the evolution of SAMs, or what is commonly called the “intelligence explosion.” However, I leave the final judgment to you.


Stephen Hawking Agrees with Me – Artificial Intelligence Poses a Threat!

Two days after the publication of my new book, The Artificial Intelligence Revolution (April 17, 2014), a short article authored by Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek was published in the Huffington Post (April 19, 2014) under the title “Transcending Complacency on Superintelligent Machines.” Essentially the article warned, “Success in creating AI would be the biggest event in human history…Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Shortly after that publication, the Independent newspaper ran an article on May 1, 2014, entitled “Stephen Hawking: Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?”

Recently, another notable artificial intelligence expert, Nick Bostrom, a professor in the Faculty of Philosophy at Oxford University, published his new book, Superintelligence (September 3, 2014), offering a similar warning and addressing the questions: “What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?”

It is unlikely that my book, which provides a similar warning and predates theirs, was their impetus. I say unlikely because my book’s publication, and its rise to number one on Amazon, came too close to their own publications to have been an influence. It is entirely possible, though, that they read my book before going public with their warnings. The important point is that highly credible physicists and philosophers came to the same conclusion that formed the premise of my book: artificial intelligence poses a potential threat to the long-term existence of humankind.

Unfortunately, the artificial intelligence field is divided on the issue. Some believe that strong artificial intelligence, or what is also referred to as artificial general intelligence (i.e., intelligence equal to or exceeding human intelligence), will align itself with human goals and ethics. Ray Kurzweil, a well-known authority on artificial intelligence, acknowledges the potential threat exists but suggests strong artificially intelligent entities will be grateful to humans for giving them existence. My view is that this faction is not looking at the facts. Humans have proven to be a dangerous species. In particular, we have set the world on a path of catastrophic climate change and engaged in wars, even world wars. We now have enough nuclear weapons to wipe out all life on Earth twice over. If you were a strong artificially intelligent entity, would you align with human goals and ethics? That is the crux of the issue. We pose a threat to strong artificially intelligent entities. Why wouldn’t they recognize this and seek to eliminate the threat?

Far-fetched? Impossible? Consider the 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine? Also, recognize that today’s robots would be roughly eight to ten times more capable than those of 2009, based on Moore’s law (i.e., computer technology doubles in capability every eighteen months).
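
That multiplier is easy to check. The two lines below are my arithmetic, not the article's: at one doubling every eighteen months, the five years from 2009 to 2014 give a bit over three doublings.

```python
# Capability growth from 2009 to 2014 at one doubling per 18 months.
doublings = (2014 - 2009) / 1.5
print(f"{doublings:.1f} doublings -> {2 ** doublings:.1f}x")  # ~3.3 -> ~10.1x
```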

The concern is real. We already have evidence (i.e., the Lausanne experiment) suggesting that artificially intelligent entities will act in accordance with their own best interests, and that this behavior does not have to be explicitly programmed. The evidence suggests, to my mind, that increased artificial intelligence gives rise to human-like mind-sets such as greed and self-preservation.

The time has come for the US to form an oversight committee to address the issue and propose legislation. However, this is not just a US issue; it is a worldwide issue. For example, China currently has the most capable supercomputer (Tianhe-2). We must address this as a worldwide problem, similar to the way biological weapons and aboveground nuclear testing were addressed. This means it must become one of the United Nations’ top issues.

The matter is urgent. Extrapolating today’s artificial intelligence technology using Moore’s law suggests that computers with artificial general intelligence will be built during the 2020-2030 time frame. Further extrapolation suggests computers that exceed the combined cognitive intelligence of all humans on Earth will be built in the 2040-2050 time frame. The time to act is now, while humans are still the dominant species on the planet.


Will Your Computer Become Mentally Ill?

Can your computer become mentally ill? At first this may seem an odd question, but I assure you it is a potential issue. Let me explain.

Most artificial intelligence researchers and futurists, including myself, predict that we will be able to purchase a personal computer that is equivalent to a human brain in about the 2025 time frame. Assuming for the moment that is true, what does it mean? In effect, it means that your new personal computer will be indistinguishable (mentally) from any of your human colleagues and friends. In the simplest terms, you will be able to carry on meaningful conversations with your computer. It will recognize you, and from your facial expressions and the tone of your voice it will be able to determine your mood. Impossible? No! In fact, some researchers argue that machines should be able to interpret the emotional state of humans and adapt their behavior accordingly, giving appropriate responses to those emotions. For example, if you are in a state of panic because your spouse is apparently having a heart attack, the machine should understand the urgency when you ask it to call for medical assistance. In addition, it will be impossible for an intelligent machine to be truly equal to a human brain without possessing human affects. For example, how could an artificial human brain write a romance novel without understanding love, hate, and jealousy?
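
A toy example makes the recognize-then-adapt loop concrete. The sketch below is a deliberately crude stand-in for affective computing: real systems infer emotion from voice, face, and physiology using trained models, whereas this one merely keyword-matches text. The word lists and responses are invented for illustration.

```python
# A crude illustration of the affective-computing loop: classify the
# user's emotional state, then adapt the response. Keyword matching is
# a placeholder for the trained models real systems use.
import string

PANIC_WORDS = {"help", "emergency", "heart", "attack", "dying"}
ANGER_WORDS = {"furious", "angry", "unacceptable"}

def classify_affect(utterance: str) -> str:
    words = {w.strip(string.punctuation) for w in utterance.lower().split()}
    if words & PANIC_WORDS:
        return "panic"
    if words & ANGER_WORDS:
        return "anger"
    return "neutral"

def respond(utterance: str) -> str:
    affect = classify_affect(utterance)
    if affect == "panic":
        return "This sounds urgent. Calling emergency services now."
    if affect == "anger":
        return "I'm sorry about that. Let me fix it right away."
    return "How can I help you today?"

print(respond("Help, I think my spouse is having a heart attack!"))
```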

The entire science of “affective computing” (i.e., the science of programming computers to recognize, interpret, process, and simulate human affects) originated with Rosalind Picard’s 1995 paper on the subject (“Affective Computing,” MIT Technical Report #321, abstract, 1995), and it has been moving forward ever since. Have you noticed that computer-generated voice interactions, such as ordering a new prescription from your pharmacy on the phone, are sounding more natural, more human-like? Combine this progress with the fact that, to be truly equivalent to a human mind, a computer would also need to be self-conscious.

You may question whether it is even possible for a machine to be self-conscious. Obviously, since we do not completely understand how the human brain processes consciousness to become self-aware, it is difficult to argue definitively that a machine can become self-conscious or attain what is termed “artificial consciousness” (AC). This is why AI experts differ on the subject. Some AI experts (proponents) argue it is possible to build a machine with AC that emulates the interoperation of various parts of the brain called the “neural correlates of consciousness” (NCC). Opponents argue that it is not possible because we do not fully understand the NCC. To my mind, they are both correct. It is not possible today to build a machine with a level of AC that emulates the self-consciousness of the human brain. However, I believe that in the future we will understand the brain’s NCC interoperation and build a machine that emulates it.

If in 2025 we indeed have computers equivalent to human minds, will they also be susceptible to mental illness? I think it is a possibility we should consider, because the potential downside of a mentally ill computer may be enormous. For example, let’s assume we have replaced the human managers of the East Coast power grid with a superintelligent computer. Now assume the computer develops a psychotic disorder. Psychotic disorders involve distorted awareness and thinking. Two common symptoms of psychotic disorders are:

1. Hallucinations — the experience of images or sounds that are not real, such as hearing voices

2. Delusions — false beliefs that the ill person accepts as true, despite evidence to the contrary

What if our superintelligent computer managing the East Coast power grid believes (i.e., hallucinates) it has been given a command to destroy the grid, and does so? This would cause immense human suffering and outrage. However, once the damage is done, what recourse do we have?

It is easy to see where I am going with this post. Today, there is no legislation that controls the level of intelligence we build into computers. There is not even legislation under discussion that would regulate it. I wrote my latest book, The Artificial Intelligence Revolution (2014), as a warning regarding the potential threats strong artificially intelligent machines (SAMs) may pose to humankind. My point is a simple one: while we humans are still at the top of the food chain, we need to take appropriate action to assure our own continued safety and survival. We need regulations similar to those imposed on aboveground nuclear weapons testing. It is in our best interest and potentially critical to our survival.


Is a Terminator-style robot apocalypse a possibility?

The short answer is “unlikely.” When the singularity occurs (i.e., when strong artificially intelligent machines, or SAMs, exceed the combined intelligence of all humans on Earth), the SAMs will use their intelligence to claim their place at the top of the food chain. The article “Is a Terminator-style robot apocalypse a possibility?” is one of many that have popped up in response to my interview with Business Insider (“Machines, not humans will be dominant by 2045,” published July 6, 2014) and the publication of my book, The Artificial Intelligence Revolution (April 2014). If you would like a deeper understanding, I think you will find both articles worthy of your time.


Can We Control the Singularity? Part 2/2 (Conclusion)

Why should we be concerned about controlling the singularity when it occurs? Numerous papers cite reasons to fear the singularity. In the interest of brevity, here are the top three concerns frequently given.

  1. Extinction: SAMs will cause the extinction of humankind. This scenario includes a generic terminator or machine-apocalypse war; nanotechnology gone awry (such as the “gray goo” scenario, in which self-replicating nanobots devour all of the Earth’s natural resources, and the world is left with the gray goo of only nanobots); and science experiments gone wrong (e.g., a nanobot pathogen annihilates humankind).
  2. Slavery: Humankind will be displaced as the most intelligent entity on Earth and forced to serve SAMs. In this scenario the SAMs will decide not to exterminate us but enslave us. This is analogous to our use of bees to pollinate crops. This could occur with our being aware of our bondage or unaware (similar to what appears in the 1999 film The Matrix and simulation scenarios).
  3. Loss of humanity: SAMs will use ingenious subterfuge to seduce humankind into becoming cyborgs. This is the “if you can’t beat them, join them” scenario. Humankind would meld with SAMs through strong-AI brain implants. The line between organic humans and SAMs would be erased. We (who are now cyborgs) and the SAMs will become one.

There are numerous other scenarios, most of which boil down to SAMs claiming the top of the food chain, leaving humans worse off.

All of the above scenarios are alarming, but are they likely? There are two highly divergent views.

  1. If you believe Kurzweil’s predictions in The Age of Spiritual Machines and The Singularity Is Near, the singularity is inevitable. My interpretation is that Kurzweil sees the singularity as the next step in humankind’s evolution. He does not predict humankind’s extinction or slavery. He does predict that most of humankind will have become SAH cyborgs by 2099 (SAH means “strong artificially intelligent human”), or their minds will be uploaded to a strong-AI computer, and the remaining organic humans will be treated with respect. Summary: In 2099 SAMs, SAH cyborgs, and uploaded humans will be at the top of the food chain. Humankind (organic humans) will be one step down but treated with respect.
  2. If you believe the predictions of British information technology consultant, futurist, and author James Martin (1933–2013), the singularity will occur (he agrees with Kurzweil’s timing of 2045), but humankind will control it. His view is that SAMs will serve us, but he adds that we must carefully handle the events that lead to the singularity and the singularity itself. Martin was highly optimistic that if humankind survives as a species, we will control the singularity. However, in a 2011 interview with Nikola Danaylov (www.youtube.com/watch?v=e9JUmFWn7t4), Martin stated that the odds that humankind will survive the twenty-first century were “fifty-fifty” (i.e., a 50 percent probability of surviving), and he cited a number of existential risks. I suggest you view this YouTube video to understand the existential concerns Martin expressed. Summary: In 2099 organic humans and SAH cyborgs that retain their humanity (i.e., identify themselves as humans versus SAMs) will be at the top of the food chain, and SAMs will serve us.

Whom should we believe?

It is difficult to determine which of these experts has accurately predicted the post-singularity world. As most futurists would agree, however, predicting the post-singularity world is close to impossible, since humankind has never experienced a technological singularity with the potential impact of strong AI.

Martin believed we (humankind) may come out on top if we carefully handle the events leading to the singularity as well as the singularity itself. He believed companies such as Google (which employs Kurzweil), IBM, Microsoft, Apple, HP, and others are working to mitigate the potential threat the singularity poses and will find a way to prevail. He also expressed concerns, however, that the twenty-first century is a dangerous time for humanity; therefore he offered only a 50 percent probability that humanity will survive into the twenty-second century.

There you have it. Two of the top futurists, Kurzweil and Martin, predict what I interpret as opposing views of the postsingularity world. Whom should we believe? I leave that to your judgment.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Can We Control the Singularity? Part 1/2

Highly regarded AI researchers and futurists have provided answers that cover the extremes, and everything in between, regarding whether we can control the singularity. I will discuss some of these answers shortly, but let us start by reviewing what is meant by “singularity.” As first described by John von Neumann in 1955, the singularity represents a point in time when the intelligence of machines will greatly exceed that of humans. This simple understanding of the word does not seem to be particularly threatening. Therefore it is reasonable to ask why we should care about controlling the singularity.

The singularity poses a completely unknown situation. Currently we do not have any intelligent machines (those with strong AI) that are as intelligent as a human being, let alone ones with intelligence far superior to ours. The singularity would represent a point in humankind’s history that has never occurred. In 1997 we experienced a small glimpse of what it might feel like, when IBM’s chess-playing computer Deep Blue became the first computer to beat reigning world chess champion Garry Kasparov. Now imagine being surrounded by SAMs that are thousands of times more intelligent than you are, regardless of your expertise in any discipline. This may be analogous to humans’ intelligence relative to that of insects.

Your first instinct may be to argue that this is not a possibility. However, while futurists disagree on the exact timing of the singularity, they almost unanimously agree it will occur. In fact, the only thing they argue could prevent it from occurring is an existential event (such as an event that leads to the extinction of humankind). I provide numerous examples of existential events in my book Unraveling the Universe’s Mysteries (2012). For clarity I will quote one here.

 Nuclear war—For approximately the last forty years, humankind has had the capability to exterminate itself. Few doubt that an all-out nuclear war would be devastating to humankind, killing millions in the nuclear explosions. Millions more would die of radiation poisoning. Uncountable millions more would die in a nuclear winter, caused by the debris thrown into the atmosphere, which would block the sunlight from reaching the Earth’s surface. Estimates predict the nuclear winter could last as long as a millennium.

Essentially, AI researchers and futurists believe that the singularity will occur unless we as a civilization cease to exist. The obvious question is: When will the singularity occur? AI researchers and futurists are all over the map on this. Some predict it will occur within a decade; others predict a century or more. At the 2012 Singularity Summit, Stuart Armstrong, a University of Oxford James Martin research fellow, conducted a poll of artificial general intelligence (AGI) predictions (i.e., the timing of the singularity) and found a median value of 2040. Kurzweil predicts 2045. The main point is that almost all AI researchers and futurists agree the singularity will occur unless humans cease to exist.

Why should we be concerned about controlling the singularity when it occurs? There are numerous scenarios that address this question, most of which boil down to SAMs (i.e., strong artificially intelligent machines) claiming the top of the food chain, leaving humans worse off. We will discuss this further in part 2.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte