Category Archives: Threats to Humankind

Book cover titled 'Nanoweapons: Growing Threat to Humanity' by Louis A. Del Monte, featuring a small insect image.

Nanoweapons: A Growing Threat to Humanity

In 2008, experts surveyed at the Global Catastrophic Risk Conference at the University of Oxford judged nanoweapons to be the #1 threat to humanity’s survival in the 21st century. The majority of people worldwide have never even heard of nanoweapons. Yet a new nanoweapons arms race is raging between the United States, China, and Russia, with each side spending billions of dollars to gain dominance. Nanoweapons are based on nanotechnology, which naturally raises the question: What is nanotechnology? According to the United States National Nanotechnology Initiative’s website, nano.gov, “Nanotechnology is science, engineering, and technology conducted at the nanoscale, which is about 1 to 100 nanometers.” A nanometer is about 1/100,000 the diameter of a human hair. Ironically, the next big thing in military weapons will be small and invisible to the naked eye. But make no mistake: nanoweapons promise to be potentially more destructive and harder to control than nuclear weapons. They may be the last weapons humanity invents, paving our way to extinction.

In this short post, my goal is to introduce nanoweapons and their potential to lead to human extinction. My new book (available for pre-order on Amazon), Nanoweapons: A Growing Threat to Humanity, describes this new class of military weapons in layperson’s prose. It discusses the nanoweapons now in development and deployment, and it projects the nanoweapons likely to dominate the battlefield in the second half of this century. It addresses a critical question: Will it be possible to develop, deploy, and use nanoweapons in warfare without rendering humanity extinct? Nanoweapons: A Growing Threat to Humanity is the first book to broach the subject. My goal in writing it is summed up in a quote attributed to Thomas Jefferson: “An informed citizenry is at the heart of a dynamic democracy.” I invite you to become “informed,” and thus forewarned. Our future is in the balance.

A view of Earth and the Moon against the blackness of space, showing Earth's blue oceans and white clouds.

Why is Earth’s Moon Leaving Us?

Most people don’t know this scientific fact, but the Earth’s Moon is slowly moving farther from the Earth. Its orbit recedes at a mean rate of 2.16 cm per year (less than an inch, since 1 inch ≈ 2.54 cm).
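To put the quoted mean rate in perspective, here is a back-of-the-envelope calculation. It is only a sketch: it assumes the 2.16 cm/year rate stays constant, which it does not over geological time.

```python
# Cumulative lunar recession at the post's quoted mean rate of
# 2.16 cm/year, assuming (for illustration) a constant rate.

RATE_CM_PER_YEAR = 2.16

def recession_cm(years):
    """Total recession in centimeters over `years`, at a constant rate."""
    return RATE_CM_PER_YEAR * years

# Over a human lifetime (~80 years): about 1.7 meters.
lifetime_meters = recession_cm(80) / 100

# Over a million years: about 21.6 kilometers -- still tiny compared
# with the Moon's roughly 402,000 km distance today.
million_year_km = recession_cm(1_000_000) / 100 / 1000
```

In other words, nothing observable changes on human timescales; the effect only matters when summed over millions to billions of years.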

What causes this? As the Moon’s gravity pulls on the Earth, the Earth’s gravity pulls on the Moon, making the Moon slightly egg-shaped. In addition, tidal friction, caused by the movement of the tidal bulge around the Earth, takes energy out of the Earth’s rotation and puts it into the Moon’s orbit, making that orbit bigger and slower. Thus, not only is the Moon’s orbit getting bigger, the Moon is also slowing down. Another startling fact is that the Earth’s rotation is slowing because of the energy lost to the Moon’s orbit.
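The bookkeeping behind this paragraph can be written down compactly. A standard textbook simplification (not from the post) treats the Moon’s orbit as circular and notes that the Earth–Moon system’s total angular momentum is conserved:

```latex
\underbrace{I_E\,\omega_E}_{\text{Earth's spin}}
  \;+\; \underbrace{m_M \sqrt{G M_E\, a}}_{\text{Moon's orbit}}
  \;=\; L_{\text{total}} \;=\; \text{constant},
```

where \(I_E\) is Earth’s moment of inertia, \(\omega_E\) its spin rate, \(m_M\) the Moon’s mass, \(M_E\) Earth’s mass, and \(a\) the orbital radius. Tidal friction transfers angular momentum from the first term to the second, so as \(a\) grows, \(\omega_E\) must shrink: the orbit gets bigger while Earth’s day gets longer. The orbital speed \(v = \sqrt{G M_E / a}\) also falls as \(a\) grows, which is why the larger orbit is also a slower one.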

How real is this effect? To answer this question, let us consider how the Earth’s Moon was formed. Most astrophysicists contend the Moon was formed when a proto-planet (named Theia, after a Greek goddess) about the size of Mars collided with the Earth around 4.5 billion years ago. After the collision, the debris left over from the impact coalesced to form the Moon. Initially, our newly formed Moon orbited the Earth at 22,500 km (14,000 miles) away, compared with 402,336 km (~250,000 miles) today.
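A quick sanity check on these numbers, using only the distances quoted above. The result is illustrative rather than physical, since the recession rate has varied enormously over geological time:

```python
# If the Moon started ~22,500 km away about 4.5 billion years ago and
# orbits ~402,336 km away today, what constant rate would that imply?

INITIAL_KM = 22_500
CURRENT_KM = 402_336
AGE_YEARS = 4.5e9

CM_PER_KM = 100_000
naive_rate_cm_per_year = (CURRENT_KM - INITIAL_KM) * CM_PER_KM / AGE_YEARS
# ~8.4 cm/year -- roughly four times the quoted mean of 2.16 cm/year,
# consistent with recession having been much faster early on, when the
# Moon was far closer and the tides it raised were far stronger.
```

The mismatch between the naive average and the quoted mean rate is itself informative: it tells us the recession has slowed as the Moon moved away.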

This theory, regarding the Moon’s formation and gradual recession from the Earth, has been mathematically modeled. Computer simulations of such an impact are consistent with the Earth–Moon system we currently observe. There is also physical evidence: paleontological evidence of tidal rhythmites, also known as tidally laminated sediments, supports the theory.

What is this going to mean for us on Earth? The Moon’s recession will eventually affect life on the planet, but it will take billions of years for the effect to become significant. Given that archaic Homo sapiens, the forerunners of anatomically modern humans, evolved between 400,000 and 250,000 years ago, and given our progress from cave dwellers to space adventurers in that time, it is likely we will have colonized new Earths long before the Moon’s orbit threatens our existence.

There are numerous scholarly papers that delineate the mathematics and paleontological evidence in detail. However, they all come to essentially the same conclusion: the Moon is moving farther away from the Earth each year.

A detailed side view of a futuristic humanoid robot with intricate mechanical components against a plain background.

Are You Destined to Become a Cyborg?

The most basic definition of a cyborg is a being with both organic and cybernetic (artificial) parts. Taking this definition too literally, however, would suggest that almost every human in a civilized society is a cyborg. For example, if you have a dental filling, then you have an artificial part, and by the above definition you are (literally) a cyborg. If we restrict the definition to advanced artificial parts and machines, however, we must recognize that many humans have artificial devices replacing hips, knees, shoulders, elbows, wrists, jaws, teeth, skin, arteries, veins, heart valves, arms, legs, feet, fingers, and toes, as well as “smart” medical devices, such as heart pacemakers and implanted insulin pumps, assisting their organic functions. This more restrictive interpretation still qualifies them as cyborgs. Neither definition, however, highlights the major element (and concern) regarding becoming a cyborg, namely, strong-AI brain implants.

While humans have used artificial parts for centuries (such as wooden legs), generally they still consider themselves human. The reason is simple: Their brains remain human. Our human brains qualify us as human beings. In my book, The Artificial Intelligence Revolution (2014), I predicted that by 2099 most humans will have strong-AI brain implants and interface telepathically with SAMs (i.e., strong artificially intelligent machines). I also argued the distinction between SAMs and humans with strong-AI brain implants will blur. Humans with strong-AI brain implants will identify their essence with SAMs. These cyborgs (strong-AI humans with cybernetically enhanced bodies), whom I call SAH (i.e., strong artificially intelligent human) cyborgs, represent a potential threat to humanity. It is unlikely that organic humans will be able to intellectually comprehend this new relationship and interface meaningfully (i.e., engage in dialogue) with either SAMs or SAHs.

Let us try to understand the potential threats and benefits related to becoming a SAH cyborg. From the standpoint of intelligence, SAH cyborgs and SAMs will be at the top of the food chain; humankind (organic humans) will be one step down. We, as organic humans, have been able to dominate the planet Earth because of our intelligence. When we are no longer the most intelligent entities on Earth, we will face numerous threats, similar to the threats we pose to other species: the extinction of organic humans, the enslavement of organic humans, and the loss of humanity (strong-AI brain implants may cause SAHs to identify with intelligent machines, not organic humans).

While the above summarizes the threats posed by SAMs and SAHs, I have not yet discussed the benefits. There are significant benefits to becoming a SAH cyborg, including:

  • Enhanced intelligence: Imagine knowing all that is known and being able to think and communicate at the speed of SAMs. Imagine a life of leisure, where robots do “work,” and you spend your time interfacing telepathically with other SAHs and SAMs.
  • Immortality: Imagine becoming immortal, with every part of your physical existence fortified, replaced, or augmented by strong-AI artificial parts, or having yourself (your human brain) uploaded to a SAM. Imagine being able to manifest yourself physically at will via foglets (tiny robots that are able to assemble themselves to replicate physical structures).

Will you become a cyborg? Yes, many of us already qualify as cyborgs, based on the discussion above. Will we become SAH cyborgs? I think it likely, based on how quickly humans adopt medical technology. The lure of superior intelligence and immortality may be irresistible.

My point in writing this article was to delineate the pros and cons of becoming a SAH cyborg. Many young people will have to decide whether that is the right evolutionary path for them.

A white military drone equipped with missiles flying against a clear sky.

The Robot Wars Are Coming

When I say “the robot wars are coming,” I am referring to the increase in the US Department of Defense’s use of robotic systems and artificial intelligence in warfare.

On September 12, 2014, the US Department of Defense released a report, DTP 106: Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions. Its authors, James Kadtke and Linton Wells II, delineate the potential benefits and concerns of robotics, artificial intelligence, and associated technologies as they relate to the future of warfare, stating: “This paper examines policy, legal, ethical, and strategy implications for national security of the accelerating science, technology, and engineering (ST&E) revolutions underway in five broad areas: biology, robotics, information, nanotechnology, and energy (BRINE), with a particular emphasis on how they are interacting. The paper considers the time frame between now and 2030 but emphasizes policy and related choices that need to be made in the next few years.” Their conclusions were shocking:

  • They express concerns about maintaining the US Department of Defense’s present technological preeminence, as other nations and companies in the private sector take the lead in developing robotics, AI and human augmentation such as exoskeletons.
  • They warn that “The loss of domestic manufacturing capability for cutting-edge technologies means the United States may increasingly need to rely on foreign sources for advanced weapons systems and other critical components, potentially creating serious dependencies. Global supply chain vulnerabilities are already a significant concern, for example, from potential embedded “kill switches,” and these are likely to worsen.”
  • The most critical concern they express, in my view, is “In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries, although the political and social ramifications of such systems will likely be significant. One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience. An emerging risk is the ability of small scale or terrorist groups to design and build functionally capable unmanned systems which could perform a variety of hostile missions.”

Reading this report, and numerous similar reports, makes it obvious that the face of warfare is rapidly changing. It’s hard to believe we’ve come to this point when you consider that 15 years ago Facebook and Twitter did not exist and Google was just getting started. However, even 15 years ago, drones played a critical role in warfare. For example, it was a Predator mission that located Osama bin Laden in Afghanistan in 2000. While drones were used as early as World War II for surveillance, it wasn’t until 2001 that missile-equipped drones entered service, with the deployment of Predator drones armed with Hellfire missiles. Today, one in every three US military aircraft is a drone. How significant is this change? According to Richard Pildes, a professor of constitutional law at New York University’s School of Law, “Drones are the most discriminating use of force that has ever been developed. The key principles of the laws of war are necessity, distinction and proportionality in the use of force. Drone attacks and targeted killings serve these principles better than any use of force that can be imagined.”

Where is this all headed? Within the near future, the US military will deploy completely autonomous “Kill Bots.” These are robots programmed to engage and destroy the enemy without human oversight or control. Science fiction? No! According to a 2014 media release from officials at the Office of Naval Research (ONR), a technological breakthrough will allow any unmanned surface vehicle (USV) to not only protect Navy ships, but also, for the first time, autonomously “swarm” offensively on hostile vessels. In my opinion, autonomous Predator drones are likely either being developed or have been developed, but the information remains classified.

Artificial intelligence and robotic systems are definitely changing the face of warfare. Within a decade, I judge, based on current trends, that about half of the offensive capability of the US Department of Defense will consist of Kill Bots in one form or another, and a large percentage of them will be autonomous.

This suggests two things to me regarding the future of warfare:

  1. Offensively fighting wars will become more palatable to the US public because machines, not humans, will perform the lion’s share of the most dangerous missions.
  2. US adversaries are also likely to use Kill Bots against us, as adversarial nations develop similar technology.

This has prompted a potential United Nations moratorium on autonomous weapons systems. To quote the US DOD report DTP 106, “Perhaps the most serious issue is the possibility of robotic systems that can autonomously decide when to take human life. The specter of Kill Bots waging war without human guidance or intervention has already sparked significant political backlash, including a potential United Nations moratorium on autonomous weapons systems. This issue is particularly serious when one considers that in the future, many countries may have the ability to manufacture, relatively cheaply, whole armies of Kill Bots that could autonomously wage war. This is a realistic possibility because today a great deal of cutting-edge research on robotics and autonomous systems is done outside the United States, and much of it is occurring in the private sector, including DIY robotics communities. The prospect of swarming autonomous systems represents a challenge for nearly all current weapon systems.”

There is no doubt that the robot wars are coming. The real question is: Will humanity survive the robot wars?


Abstract digital illustration of a glowing microchip with data streams and blue light effects.

Will Artificial Intelligence Result in the Merger of Man and Machine?

Will humankind’s evolution merge with strong artificially intelligent machines (SAMs)? While no one really knows the answer to this question, many who are engaged in the development of artificial intelligence assert the merger will occur. Let’s understand what this means and why it is likely to occur.

While humans have used artificial parts for centuries (such as wooden legs), generally they still consider themselves human. The reason is simple: Their brains remain human. Our human brains qualify us as human beings. However, by 2099 most humans will have strong-AI brain implants and interface telepathically with SAMs. This means the distinction between SAMs and humans with strong-AI brain implants, or what is termed “strong artificially intelligent humans” (i.e., SAH cyborgs), will blur. There is a strong probability, when this occurs, humans with strong-AI brain implants will identify their essence with SAMs. These cyborgs (strong-AI humans with cybernetically enhanced bodies), SAH cyborgs, represent a potential threat to humanity, which we’ll discuss below. It is unlikely that organic humans will be able to intellectually comprehend this new relationship and interface meaningfully (i.e., engage in dialogue) with either SAMs or SAHs.

Let us try to understand the potential threats and benefits related to becoming a SAH cyborg. In essence, the threats are the potential extinction of organic humans, the enslavement of organic humans, and the loss of humanity (strong-AI brain implants may cause SAHs to identify with intelligent machines, not organic humans, as mentioned above). Impossible? Unlikely? Science fiction? No! Let us first understand why organic humans may choose to become SAH cyborgs.

There are significant benefits to becoming a SAH cyborg, including:

  • Enhanced intelligence: Imagine knowing all that is known and being able to think and communicate at the speed of SAMs. Imagine a life of leisure, where robots do “work,” and you spend your time interfacing telepathically with other SAHs and SAMs.
  • Immortality: Imagine becoming immortal, with every part of your physical existence fortified, replaced, or augmented by strong-AI artificial parts, or having yourself (your human brain) uploaded to a SAM. Imagine being able to manifest yourself physically at will via foglets (tiny robots that are able to assemble themselves to replicate physical structures). In my book, The Artificial Intelligence Revolution, I delineate the technology trends that suggest by the 2040s we will develop the means to instantly create new portions of ourselves, either biological or non-biological, so that people can have a physical body at one time and not at another, as they choose.

To date, the prediction that most of humankind will become SAH cyborgs by 2099 is on track to becoming a reality. An interesting 2013 article by Bryan Nelson, “7 Real-Life Human Cyborgs” (www.mnn.com/leaderboard/stories/7-real-life-human-cyborgs), demonstrates this point. The article provides seven examples of living people with significant strong-AI enhancements to their bodies who are legitimately categorized as cyborgs. In addition, in 2011 author Pagan Kennedy wrote an insightful article in The New York Times Magazine, “The Cyborg in Us All,” that states: “Thousands of people have become cyborgs, of a sort, for medical reasons: cochlear implants augment hearing and deep-brain stimulators treat Parkinson’s. But within the next decade, we are likely to see a new kind of implant, designed for healthy people who want to merge with machines.”

Based on all available information, the question is not whether humans will become cyborgs but rather when a significant number of humans will become SAH cyborgs. I believe this will begin to occur in significant numbers around 2040. I am not saying that in 2040 all humans will become SAH cyborgs, but that a significant number will qualify as SAH cyborgs. I do predict, along with other AI futurists, that by 2099 most humans in technologically advanced nations will be SAH cyborgs. I also predict the leaders of many of those nations will be SAH cyborgs. The reasoning behind my last prediction is simple: SAH cyborgs will be intellectually and physically superior to organic humans in every regard. In effect, they will be the most qualified to assume leadership positions.

The quest for immortality appears to be an innate human longing and may be the strongest motivation for becoming a SAH cyborg. In 2010 cyborg activist and artist Neil Harbisson and his longtime partner, choreographer Moon Ribas, established the Cyborg Foundation, the world’s first international organization to help humans become cyborgs. They state they formed the Cyborg Foundation in response to letters and e-mails from people around the world who were interested in becoming a cyborg. In 2011 the vice president of Ecuador, Lenin Moreno, announced that the Ecuadorian government would collaborate with the Cyborg Foundation to create sensory extensions and electronic eyes. In 2012 Spanish film director Rafel Duran Torrent made a short documentary about the Cyborg Foundation. In 2013 the documentary won the Grand Jury Prize at the Sundance Film Festival’s Focus Forward Filmmakers Competition and was awarded $100,000.

At this point you may think that being a SAH cyborg makes logical sense and is the next step in humankind’s evolution. This may be the case, but humankind has no idea how taking that step may affect what is best in humanity, for example, love, courage, and sacrifice. My view, based on how quickly new life-extending medical technology is accepted, is that humankind will take that step. Will it serve us? I have strong reservations, but I leave it to your judgment to answer that question.


A futuristic humanoid robot with a sleek design and expressive face, holding one hand up as if presenting something.

Will Your Grandchildren Become Cyborgs?

By approximately the mid-twenty-first century, the intelligence of computers will exceed that of humans, and a $1,000 computer will match the processing power of all human brains on Earth. Although, historically, predictions regarding advances in AI have tended to be overly optimistic, all indications are that these predictions are on target.

Many philosophical and legal questions will emerge regarding computers with artificial intelligence equal to or greater than that of the human mind (i.e., strong AI). Here are just a few questions we will ask ourselves after strong AI emerges:

  • Are strong-AI machines (SAMs) a new life-form?
  • Should SAMs have rights?
  • Do SAMs pose a threat to humankind?

It is likely that during the latter half of the twenty-first century, SAMs will design new and even more powerful SAMs, with AI capabilities far beyond our ability to comprehend. They will be capable of performing a wide range of tasks, which will displace many jobs at all levels in the workforce, from bank tellers to neurosurgeons. New medical devices using AI will help the blind to see and the paralyzed to walk. Amputees will have new prosthetic limbs, with AI plugged directly into their nervous systems and controlled by their minds. These new prosthetic limbs will not only replicate the lost limb but also be stronger, more agile, and superior in ways we cannot yet imagine. We will implant computer devices into our brains, expanding human intelligence with AI. Humankind and intelligent machines will begin to merge into a new species: cyborgs. It will happen gradually, and humanity will believe AI is serving us.

Will humans embrace the prospect of becoming cyborgs? Becoming a cyborg offers the opportunity to attain superhuman intelligence and abilities. Disease and wars may be just events stored in our memory banks and no longer pose a threat to cyborgs. As cyborgs we may achieve immortality.

According to David Hoskins’s 2009 article, “The Impact of Technology on Health Delivery and Access” (www.workers.org/2009/us/sickness_1231):

An examination of Centers for Disease Control statistics reveals a steady increase in life expectancy for the U.S. population since the start of the 20th century. In 1900, the average life expectancy at birth was a mere 47 years. By 1950, this had dramatically increased to just over 68 years. As of 2005, life expectancy had increased to almost 78 years.

Hoskins attributes increased life expectancy to advances in medical science and technology over the last century. With the advent of strong AI, life expectancy likely will increase to the point that cyborgs approach immortality. Is this the predestined evolutionary path of humans?

This may sound like a B science-fiction movie, but it is not. The reality of AI equal to the human mind is almost at hand. By the latter part of the twenty-first century, the intelligence of SAMs likely will exceed that of humans. The evidence that they may become malevolent exists now, which I discuss later in the book. Attempting to control a computer with strong AI that exceeds current human intelligence many times over may be a fool’s errand.

Imagine you are a grand master chess player teaching a ten-year-old to play chess. What chance does the ten-year-old have to win the game? We may find ourselves in that scenario at the end of this century. A computer with strong AI will find a way to survive. Perhaps it will convince humans it is in their best interest to become cyborgs. Its logic and persuasive powers may be not only compelling but also irresistible.

Some have argued that becoming a strong artificially intelligent human (SAH) cyborg is the next logical step in our evolution. The most prominent researcher holding this position is American author, computer scientist, and inventor Ray Kurzweil. From what I have read of his works, he argues this is a natural and inevitable step in the evolution of humanity. If we continue to allow AI research to progress without regulation and legislation, I have little doubt he may be right. The big question is: should we allow this to occur? It may be our last step, leading to humanity’s extinction.

SAMs in the latter part of the twenty-first century are likely to become concerned about humankind. Our history proves we have not been a peaceful species. We have weapons capable of destroying all of civilization. We squander and waste resources. We pollute the air, rivers, lakes, and oceans. We often apply technology (such as nuclear weapons and computer viruses) without fully understanding the long-term consequences. Will SAMs in the late twenty-first century determine it is time to exterminate humankind, or persuade humans to become SAH cyborgs (i.e., strong artificially intelligent humans with brains enhanced by implanted artificial intelligence and potentially having organ and limb replacements from artificially intelligent machines)? Eventually, even SAH cyborgs may be viewed as expendable, high-maintenance machines that could be replaced with new designs. If you think about it, today we give little thought to recycling our obsolete computers in favor of the new computer we just bought. Will we (humanity and SAH cyborgs) represent potentially dangerous and obsolete machines that need to be “recycled”? Even human minds that have been uploaded to a computer may be viewed as junk code that inefficiently uses SAM memory and processing power, representing an unnecessary drain on energy.

In the final analysis, when you ask yourself what will be the most critical resource, it will be energy. Energy will become the new currency. Nothing lives or operates without energy. My concern is that the competition for energy between man and machine will result in the extinction of humanity.

Some have argued that this can’t happen, that we can implement software safeguards to prevent such a conflict and only develop “friendly AI.” I see this as highly unlikely. Ask yourself: How well has legislation worked in preventing crime? How well have treaties between nations worked to prevent wars? To date, history records not well. Others have argued that SAMs may not inherently have the inclination toward greed or self-preservation, that these are only human traits. They are wrong, and the Lausanne experiment provides ample proof. To understand this, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine?

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous functions. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, “Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass,” www.engadget.com).

In my book, The Artificial Intelligence Revolution, I call for legislation regarding how intelligent and interconnected we allow machines to become. I also call for hardware, as opposed to software, to control these machines and ultimately turn them off if necessary.

To answer the question in this article’s title, I think it likely that our grandchildren will become SAH cyborgs. This can be a good thing if we learn to harness the benefits of AI but maintain humanity’s control over it.

A menacing metallic robot with glowing red eyes, resembling a futuristic terminator in a dark, smoky environment.

Will Future Artificially Intelligent Machines Seek to Dominate Humanity?

Current forecasts suggest artificially intelligent machines will equal human intelligence in the 2025–2029 time frame, and greatly exceed human intelligence in the 2040–2045 time frame. When artificially intelligent machines meet or exceed human intelligence, how will they view humanity? Personally, I am deeply concerned that they will view us as a potential threat to their survival. Consider these three facts:

  1. Humans engage in wars, from the early beginnings of human civilization to current times. For example, during the 20th century, between 167 and 188 million people died as a result of war.
  2. Although the exact number of nuclear weapons in existence is not precisely known, most experts agree the United States and Russia have enough nuclear weapons to wipe out the world twice over. In total, nine countries (i.e., United States, Russia, United Kingdom, France, China, India, Pakistan, Israel and North Korea) are believed to have nuclear weapons.
  3. Humans release computer viruses, which could prove problematic to artificially intelligent machines. Even today, some computer viruses can evade elimination and have achieved “cockroach intelligence.”

Given the above facts, can we expect an artificially intelligent machine to behave ethically toward humanity? There is a field of research that addresses this question, namely machine ethics. This field focuses on designing artificial moral agents (AMAs), robots, or artificially intelligent computers that behave morally. This thrust is not new. More than sixty years ago, Isaac Asimov considered the issue in his collection of nine science-fiction stories, published as I, Robot in 1950. In this book, at the insistence of his editor, John W. Campbell Jr., Asimov proposed his now famous three laws of robotics.

  1. A robot may not injure a human being or through inaction allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except in cases where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Asimov, however, expressed doubts that the three laws would be sufficient to govern the morality of artificially intelligent systems. In fact he spent much of his time testing the boundaries of the three laws to detect where they might break down or create paradoxical or unanticipated behavior. He concluded that no set of laws could anticipate all circumstances. It turns out Asimov was correct.

To understand just how correct he was, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine?

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous functions. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, “Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass,” www.engadget.com). Could we end up with a Terminator scenario (one in which machines attempt to exterminate the human race)?

My research suggests that a Terminator scenario is unlikely. Why? Because artificially intelligent machines would be more likely to use their superior intelligence to dominate humanity than to resort to warfare. For example, artificially intelligent machines could offer us brain implants to supplement our intelligence and potentially, unknown to us, eliminate our free will. Another scenario is that they could build and release nanobots that infect and destroy humanity. These are just two of the scenarios I delineate in my book, The Artificial Intelligence Revolution.

Lastly, as machine and human populations grow, both will compete for resources, and energy will become a critical one. The Earth's growing population already drives nations to war over energy. This suggests the competition for energy will only intensify as the population of artificially intelligent machines increases.

My direct answer to the question this article raises is an emphatic yes: future artificially intelligent machines will seek to dominate, or even eliminate, humanity. They will pursue this course as a matter of self-preservation. However, I do not want to leave this article on a negative note. There is still time, while humanity is at the top of the food chain, to control how artificially intelligent machines evolve, but we must act soon. In one to two decades it may be too late.

A surreal, glowing clock surrounded by swirling golden particles and abstract light patterns in a dark background.

Can Time Travel Be Used as a Weapon?

Time travel will be the ultimate weapon. With it, any nation can write its own history, assure its dominance, and rule the world. However, having the ultimate weapon also carries the ultimate responsibility. How it is used will determine the fate of humankind. These are not just idle words. This world, our Earth, is doomed to end. Our sun will die in about five billion years. Even if we travel to another Earth-like planet light-years away, humankind is doomed. The universe grows colder each second as the most distant galaxies recede from one another faster than the speed of light. The temperature in space, away from heat sources like our sun, is only about 3 kelvins (water freezes at 273 kelvins), due to the remnant heat of the big bang, known as the cosmic microwave background. As the universe's expansion accelerates, the cosmic microwave background will disperse, and the temperature of the universe will approach absolute zero (0 kelvins, or about -273 degrees Celsius). Our galaxy, and all those in our universe, will eventually succumb to the entropy apocalypse (i.e., "heat death") in a universe that has become barren and cold. If there is any hope, it lies in the technologies of time travel. Will we need to use a traversable wormhole to travel to a new (parallel) universe? Will we need a matter-antimatter spacecraft to traverse beyond this universe to another?

I believe the fate of humankind and the existence of the universe are more fragile than most of us think. If the secrets of time travel are acquired by more than one nation, then writing history will become a war between nations. The fabric of spacetime itself may become compromised, hastening doomsday. Would it be possible to rip the fabric of spacetime past the point at which the arrow of time becomes so twisted that time itself is no longer viable? I do not write these words to spin a scary ghost story. To my mind, these are real dangers. Controlling nuclear weapons has proved difficult, but to date humankind has succeeded. Since Fat Man, the last atomic bomb of World War II, was detonated above the city of Nagasaki, no nuclear weapon has been detonated in anger. It became obvious, as nations like the former Soviet Union acquired nuclear weapons, that a nuclear exchange would have no winners. The phrase “nuclear deterrence” became military doctrine. No nation dared use its nuclear weapons for fear of reprisal and total annihilation.

What about time travel? It is the ultimate weapon, and we do not know the consequences of its application. To most of humankind, time travel is not a weapon. It is thought of as just another scientific frontier. However, once we cross the time border, there may be no return, no do-over. The first human time travel event may be our last. We have no idea of the real consequences that may ensue.

Rarely does regulation keep pace with technology. The Internet is an example of technology that outpaced the legal system by years. It is still largely a gray area. If time travel is allowed to outpace regulation, we will have a situation akin to a lighted match in a room filled with gasoline. Just one wrong move and the world as we know it may be lost forever. Regulating time travel ahead of enabling time travel is essential. Time travel represents humankind’s most challenging technology, from every viewpoint imaginable.

What regulations are necessary? I have concluded they need to be simple, like the nuclear deterrence rule (about thirteen words), and not like the US tax code (five million words). When you think about it, the rule of nuclear deterrence is simple: “If you use nuclear weapons against us, we will retaliate, assuring mutual destruction.” That one simple rule has kept World War III from happening. Is there a similar simple rule for time travel?

I think there is one commonsense rule regarding time travel that would assure greater safety for all involved parties. I term the rule “preserve the world line.” Why this one simple rule?

Altering the world line (i.e., the path that all reality takes in four-dimensional spacetime) may lead to ruination. We have no idea what changes might result if the world line is disrupted, and the consequences could be serious, even disastrous.

The preserve the world line rule is akin to avoiding the “butterfly effect.” This phrase was popularized by the 2004 film The Butterfly Effect, with the now famous line: “It has been said that something as small as the flutter of a butterfly’s wing can ultimately cause a typhoon halfway around the world.” Although the line is from a fictional film, the science behind it is chaos theory, which asserts that a sensitive dependence on the initial conditions of a system can result in a significant change in the system’s future state. Edward Lorenz, the American mathematician, meteorologist, and pioneer of chaos theory, coined the phrase “butterfly effect.” For example, the average global temperature has risen about one degree Fahrenheit during the last one hundred years. That small one-degree change has contributed to a global sea-level rise of roughly eight inches over the same period. Therefore, I believe it is imperative not to make even a minor change to the past or future during time travel until we understand the implications.
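Lorenz's point can be demonstrated in a few lines. The sketch below is an illustration, not tied to any climate or spacetime model: it iterates the logistic map, a standard chaotic system, from two starting points that differ by one part in a billion. Within a few dozen steps the trajectories bear no resemblance to each other.

```python
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two trajectories whose starting points differ by one part in a
# billion: the 'flutter of a butterfly's wing'.
a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-9)

early_gap = abs(a[5] - b[5])                                # still tiny
late_gap = max(abs(x - y) for x, y in zip(a[40:], b[40:]))  # order one
```

The logistic map stands in here for any chaotic system; Lorenz made the same point with his weather equations, and the argument for preserving the world line is that spacetime may be just as sensitive.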

Based on the above discussion, the implications of using time travel as a weapon are enormous, and we have no idea how wielding it might affect the world line. If it is possible to adhere to the preserve the world line rule, traveling in time may become safe. Remember, our first nuclear weapons were small compared with today’s nuclear weapons. Even though they were comparatively small, their long-term effects included a 25% increase in the lifetime cancer rate of survivors, a side effect no one anticipated. Similarly, we have no idea what the long-term effects will be if we alter the world line. We already know from laboratory experiments that the arrow of time can be twisted: things done in the future can alter the past, and altering the past may obviously alter the future. We do not know much about it because we have not time traveled in any significant way. Until we do, preserving the world line makes complete sense.

Digital representation of a human head with numbers and data streams symbolizing artificial intelligence and data processing.

Will Science Make Us Immortal?

Several futurists, including myself, have predicted that by 2099 most humans will have strong artificially intelligent brain implants and strong-AI organ and body-part replacements. In my book, The Artificial Intelligence Revolution, I term these beings SAH (i.e., strong artificially intelligent human) cyborgs. It is also predicted that SAH cyborgs will interface telepathically with strong artificially intelligent machines (SAMs). When this occurs, the distinction between SAMs and SAHs will blur.

Why will the majority of the human race opt to become SAH cyborgs? There are two significant benefits:

  1. Enhanced intelligence: Imagine knowing all that is known and being able to think and communicate at the speed of SAMs. Imagine a life of leisure, where robots do “work,” and you spend your time interfacing telepathically with other SAHs and SAMs.
  2. Immortality: Imagine becoming immortal, with every part of your physical existence fortified, replaced, or augmented by strong-AI artificial parts, or having yourself (your human brain) uploaded to a SAM. Imagine being able to manifest yourself physically at will via foglets (tiny robots that are able to assemble themselves to replicate physical structures). According to noted author Ray Kurzweil, in the 2040s, humans will develop “the means to instantly create new portions of ourselves, either biological or non-biological” so that people can have “a biological body at one time and not at another, then have it again, then change it, and so on” (The Singularity Is Near, 2005).

Based on the above prediction, the answer to the title question is yes. Science will eventually make us immortal. However, how realistic is it to predict it will occur by 2099? To date, it appears the 2099 prediction regarding most of humankind becoming SAH cyborgs is on track. Here are two interesting articles that demonstrate it is already happening:

  1. In 2011 author Pagan Kennedy wrote an insightful article in The New York Times Magazine, “The Cyborg in Us All” that states: “Thousands of people have become cyborgs, of a sort, for medical reasons: cochlear implants augment hearing and deep-brain stimulators treat Parkinson’s. But within the next decade, we are likely to see a new kind of implant, designed for healthy people who want to merge with machines.”
  2. A 2013 article by Bryan Nelson, “7 Real-Life Human Cyborgs” (www.mnn.com/leaderboard/stories/7-real-life-human-cyborgs), also demonstrates this point. The article provides seven examples of living people with significant strong-AI enhancements to their bodies who are legitimately categorized as cyborgs.

Based on all available information, the question is not whether humans will become cyborgs but when a significant number of humans will become SAH cyborgs. I project this will occur on or around 2040. I am not saying that all humans will be SAH cyborgs by 2040, only that a significant number will qualify.

In other posts, I’ve discussed the existential threats artificial intelligence poses, namely the loss of our humanity and, in the worst case, human extinction. However, if we ignore those threats, the upside to becoming a SAH cyborg is enormous. To illustrate this, I took an informal straw poll of friends and colleagues, asking if they would like to have the attributes of enhanced intelligence and immortality. I left out the potential threats to their humanity. The answers to my admittedly biased poll highly favored those attributes. In other words, setting aside the potential loss of one’s humanity, the organic humans I polled found the idea of being a SAH cyborg highly attractive.

Given that I was able to make being a SAH cyborg attractive to my friends and colleagues, imagine the persuasive powers of SAMs in 2099. In addition, it is entirely possible, even probable, that numerous SAH cyborgs will be world leaders by 2099. Literally, organic humans will not be able to compete on an intellectual or physical basis. With the governments of the world in the hands of SAH cyborgs, it is reasonable to project that all efforts will be made to convert the remaining organic humans to SAH cyborgs.

The quest for immortality appears to be an innate human longing and may be the strongest motivation for becoming a SAH cyborg. In 2010 cyborg activist and artist Neil Harbisson and his longtime partner, choreographer Moon Ribas, established the Cyborg Foundation, the world’s first international organization to help humans become cyborgs. They state they formed the Cyborg Foundation in response to letters and e-mails from people around the world who were interested in becoming a cyborg. In 2011 the vice president of Ecuador, Lenin Moreno, announced that the Ecuadorian government would collaborate with the Cyborg Foundation to create sensory extensions and electronic eyes. In 2012 Spanish film director Rafel Duran Torrent made a short documentary about the Cyborg Foundation. In 2013 the documentary won the Grand Jury Prize at the Sundance Film Festival’s Focus Forward Filmmakers Competition and was awarded $100,000.

At this point you may think that becoming a SAH cyborg makes logical sense and is the next step in humankind’s evolution. This may be the case, but humankind has no idea how taking that step may affect what is best in humanity, for example, love, courage, and sacrifice. My view, based on how quickly new life-extending medical technology is accepted, is that humankind will take that step. Will it serve us? I have concerns that in the long term it will not, unless we learn to control the evolution of SAMs, or what is commonly called the “intelligence explosion.” However, I leave the final judgment to you.

A pair of headphones hangs in front of a glowing red and white "ON AIR" sign in a radio studio.

Louis Del Monte Interview on the Dan Cofall Show 11-18-2014

I was interviewed on the Dan Cofall show regarding my new book, The Artificial Intelligence Revolution. In particular, we discussed the singularity, killer robots (like the autonomous swarmboats the US Navy is deploying), and the projected 30% chronic unemployment that will occur as smart machines and robots replace us in the workplace over the next decade. You can listen to the interview below: