Category Archives: Technology

A futuristic humanoid robot with a sleek design and expressive face, holding one hand up as if presenting something.

Will Your Grandchildren Become Cyborgs?

By approximately the mid-twenty-first century, the intelligence of computers will exceed that of humans, and a $1,000 computer will match the processing power of all human brains on Earth. Although predictions regarding advances in AI have historically tended to be overly optimistic, all indications are that these predictions are on target.

Many philosophical and legal questions will emerge regarding computers with artificial intelligence equal to or greater than that of the human mind (i.e., strong AI). Here are just a few questions we will ask ourselves after strong AI emerges:

  • Are strong-AI machines (SAMs) a new life-form?
  • Should SAMs have rights?
  • Do SAMs pose a threat to humankind?

It is likely that during the latter half of the twenty-first century, SAMs will design new and even more powerful SAMs, with AI capabilities far beyond our ability to comprehend. They will be capable of performing a wide range of tasks, displacing jobs at all levels of the workforce, from bank tellers to neurosurgeons. New medical devices using AI will help the blind to see and the paralyzed to walk. Amputees will have new prosthetic limbs, with AI plugged directly into their nervous systems and controlled by their minds. These new prosthetic limbs will not only replicate the lost limbs but also be stronger, more agile, and superior in ways we cannot yet imagine. We will implant computer devices into our brains, expanding human intelligence with AI. Humankind and intelligent machines will begin to merge into a new species: cyborgs. It will happen gradually, and humanity will believe AI is serving us.

Will humans embrace the prospect of becoming cyborgs? Becoming a cyborg offers the opportunity to attain superhuman intelligence and abilities. Disease and war may become just events stored in our memory banks, no longer posing a threat to cyborgs. As cyborgs we may achieve immortality.

According to David Hoskins’s 2009 article, “The Impact of Technology on Health Delivery and Access” (www.workers.org/2009/us/sickness_1231):

An examination of Centers for Disease Control statistics reveals a steady increase in life expectancy for the U.S. population since the start of the 20th century. In 1900, the average life expectancy at birth was a mere 47 years. By 1950, this had dramatically increased to just over 68 years. As of 2005, life expectancy had increased to almost 78 years.

Hoskins attributes increased life expectancy to advances in medical science and technology over the last century. With the advent of strong AI, life expectancy likely will increase to the point that cyborgs approach immortality. Is this the predestined evolutionary path of humans?

This may sound like a B-grade science-fiction movie, but it is not. The reality of AI equal to the human mind is almost at hand. By the latter part of the twenty-first century, the intelligence of SAMs likely will exceed that of humans. The evidence that they may become malevolent exists now, which I discuss later in the book. Attempting to control a computer with strong AI that exceeds current human intelligence many times over may be a fool's errand.

Imagine you are a grand master chess player teaching a ten-year-old to play chess. What chance does the ten-year-old have to win the game? We may find ourselves in that scenario at the end of this century. A computer with strong AI will find a way to survive. Perhaps it will convince humans it is in their best interest to become cyborgs. Its logic and persuasive powers may be not only compelling but also irresistible.

Some have argued that becoming a strong artificially intelligent human (SAH) cyborg is the next logical step in our evolution. The most prominent researcher holding this position is the American author, computer scientist, and inventor Ray Kurzweil. From what I have read of his works, he argues this is a natural and inevitable step in the evolution of humanity. If we continue to allow AI research to progress without regulation and legislation, I have little doubt he may be right. The big question is whether we should allow this to occur, because it may be our last step, one that leads to humanity's extinction.

SAMs in the latter part of the twenty-first century are likely to become concerned about humankind. Our history proves we have not been a peaceful species. We have weapons capable of destroying all of civilization. We squander and waste resources. We pollute the air, rivers, lakes, and oceans. We often apply technology (such as nuclear weapons and computer viruses) without fully understanding the long-term consequences. Will SAMs in the late twenty-first century determine it is time to exterminate humankind, or persuade humans to become SAH cyborgs (i.e., strong artificially intelligent humans with brains enhanced by implanted artificial intelligence and potentially having organ and limb replacements from artificially intelligent machines)? Eventually, even SAH cyborgs may be viewed as expendable, high-maintenance machines to be replaced with new designs. If you think about it, today we give little thought to recycling an obsolete computer in favor of the new computer we just bought. Will we (humanity and SAH cyborgs) come to represent potentially dangerous and obsolete machines that need to be "recycled"? Even human minds that have been uploaded to a computer may be viewed as junk code that inefficiently uses SAM memory and processing power, an unnecessary drain on energy.

In the final analysis, when you ask yourself what the most critical resource will be, the answer is energy. Energy will become the new currency. Nothing lives or operates without energy. My concern is that the competition for energy between man and machine will result in the extinction of humanity.

Some have argued that this cannot happen, that we can implement software safeguards to prevent such a conflict and develop only "friendly AI." I see this as highly unlikely. Ask yourself: how effective has legislation been in preventing crime? How well have treaties between nations worked to prevent wars? To date, history's answer is: not well. Others have argued that SAMs may not inherently have an inclination toward greed or self-preservation, that these are only human traits. They are wrong, and the Lausanne experiment provides ample proof. To understand this, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource ("Evolving Robots Learn to Lie to Each Other," Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn't self-preservation be even more important to an intelligent machine?

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous functions. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, "Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass," www.engadget.com).

In my book, The Artificial Intelligence Revolution, I call for legislation regarding how intelligent and interconnected we allow machines to become. I also call for hardware, as opposed to software, to control these machines and ultimately turn them off if necessary.

To answer the question in this article's title: I think it likely that our grandchildren will become SAH cyborgs. This can be a good thing, if we learn to harness the benefits of AI while maintaining humanity's control over it.

A menacing metallic robot with glowing red eyes, resembling a futuristic terminator in a dark, smoky environment.

Will Future Artificially Intelligent Machines Seek to Dominate Humanity?

Current forecasts suggest artificially intelligent machines will equal human intelligence in the 2025-2029 time frame and greatly exceed human intelligence in the 2040-2045 time frame. When artificially intelligent machines meet or exceed human intelligence, how will they view humanity? Personally, I am deeply concerned that they will view us as a potential threat to their survival. Consider these three facts:

  1. Humans engage in war, and have from the early beginnings of civilization to current times. During the 20th century alone, between 167 and 188 million people died as a result of war.
  2. Although the exact number of nuclear weapons in existence is not precisely known, most experts agree the United States and Russia have enough nuclear weapons to wipe out the world twice over. In total, nine countries (the United States, Russia, the United Kingdom, France, China, India, Pakistan, Israel, and North Korea) are believed to have nuclear weapons.
  3. Humans release computer viruses, which could prove problematic to artificially intelligent machines. Even today, some computer viruses can evade elimination and have achieved “cockroach intelligence.”

Given the above facts, can we expect an artificially intelligent machine to behave ethically toward humanity? There is a field of research that addresses this question, namely machine ethics. This field focuses on designing artificial moral agents (AMAs), robots, or artificially intelligent computers that behave morally. This thrust is not new. More than sixty years ago, Isaac Asimov considered the issue in his collection of nine science-fiction stories, published as I, Robot in 1950. In this book, at the insistence of his editor, John W. Campbell Jr., Asimov proposed his now famous three laws of robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except in cases where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Asimov, however, expressed doubts that the three laws would be sufficient to govern the morality of artificially intelligent systems. In fact he spent much of his time testing the boundaries of the three laws to detect where they might break down or create paradoxical or unanticipated behavior. He concluded that no set of laws could anticipate all circumstances. It turns out Asimov was correct.

To understand just how correct he was, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource ("Evolving Robots Learn to Lie to Each Other," Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn't self-preservation be even more important to an intelligent machine?
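The selection pressure behind this result is easy to reproduce in miniature. Below is a toy evolutionary simulation in Python, not the Lausanne setup itself; the population size, payoffs, and mutation rate are illustrative assumptions. Each agent carries a single "honesty" gene: honest agents signal food and must share it, while quieter agents hoard it.

```python
import random

# Toy evolutionary simulation (illustrative assumptions; not the Lausanne setup).
# Each agent has one gene: the probability it honestly signals a food source.
# Signaling attracts others, so the payoff is shared; staying quiet hoards it.

POP, GENERATIONS, MUTATION = 100, 60, 0.05
FOOD_PAYOFF = 10.0   # value of the food source
SHARERS = 4          # number of agents that split a signaled source

def fitness(honesty: float) -> float:
    # Expected payoff: share when signaling honestly, hoard when silent.
    return honesty * (FOOD_PAYOFF / SHARERS) + (1 - honesty) * FOOD_PAYOFF

population = [random.random() for _ in range(POP)]
for gen in range(GENERATIONS):
    weights = [fitness(h) for h in population]
    # Fitness-proportionate selection, then small mutations.
    parents = random.choices(population, weights=weights, k=POP)
    population = [min(1.0, max(0.0, p + random.gauss(0, MUTATION)))
                  for p in parents]
    if gen % 10 == 0:
        print(f"generation {gen:2d}: average honesty = {sum(population)/POP:.2f}")
```

Run it and average honesty collapses toward zero within a few dozen generations. No one programs the agents to deceive; concealment simply out-reproduces candor, which is the unsettling heart of the Lausanne result.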

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous functions. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, "Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass," www.engadget.com). Could we end up with a Terminator scenario (one in which machines attempt to exterminate the human race)?

My research suggests that a Terminator scenario is unlikely. Why? Because artificially intelligent machines would be more likely to use their superior intelligence to dominate humanity than to resort to warfare. For example, artificially intelligent machines could offer us brain implants to supplement our intelligence and potentially, unknown to us, eliminate our free will. Another scenario is that they could build and release nanobots that infect and destroy humanity. These are only two of the scenarios I delineate in my book, The Artificial Intelligence Revolution.

Lastly, as machine and human populations grow, both species will compete for resources. Energy will become a critical resource. We already know that the Earth has a population problem that causes countries to engage in wars over energy. This suggests that the competition for energy will be even greater as the population of artificially intelligent machines increases.

My direct answer to the question this article raises is an emphatic yes: future artificially intelligent machines will seek to dominate, and possibly even eliminate, humanity. They will seek this course as a matter of self-preservation. However, I do not want to leave this article on a negative note. There is still time, while humanity is at the top of the food chain, to control how artificially intelligent machines evolve, but we must act soon. In one to two decades it may be too late.

A surreal, glowing clock surrounded by swirling golden particles and abstract light patterns in a dark background.

Can Time Travel Be Used as a Weapon?

Time travel will be the ultimate weapon. With it, any nation can write its own history, assure its dominance, and rule the world. However, having the ultimate weapon also carries the ultimate responsibility. How it is used will determine the fate of humankind. These are not just idle words. This world, our Earth, is doomed to end. Our sun will eventually die, in about five billion years. Even if we travel to another Earth-like planet light-years away, humankind is doomed. The universe grows colder each second as the galaxies accelerate away from one another, the most distant ones receding faster than the speed of light. The temperature in space, away from heat sources like our sun, is only about 3 kelvins (water freezes at 273 kelvins), due to the remnant heat of the big bang, known as the cosmic microwave background. As the universe's expansion accelerates, the cosmic microwave background will eventually disperse, and the temperature of the universe will approach absolute zero (0 kelvins, or -273 degrees Celsius). Our galaxy, and all those in our universe, will eventually succumb to the entropy apocalypse (i.e., "heat death") in a universe that has become barren and cold. If there is any hope, it lies in the technologies of time travel. Will we need to use a traversable wormhole to travel to a new (parallel) universe? Will we need a matter-antimatter spacecraft to traverse beyond this universe to another?

I believe the fate of humankind and the existence of the universe are more fragile than most of us think. If the secrets of time travel are acquired by more than one nation, then writing history will become a war between nations. The fabric of spacetime itself may become compromised, hastening doomsday. Would it be possible to rip the fabric of spacetime beyond the point at which the arrow of time becomes so twisted that time itself is no longer viable? I do not write these words to spin a scary ghost story. To my mind, these are real dangers. Controlling nuclear weapons has proved difficult, but to date humankind has succeeded. Since Fat Man, the last atomic bomb of World War II, was detonated above the city of Nagasaki, no nuclear weapon has been detonated in anger. It became obvious, as nations like the former Soviet Union acquired nuclear weapons, that a nuclear exchange would have no winners. The phrase "nuclear deterrence" became military doctrine. No nation dared use its nuclear weapons for fear of reprisal and total annihilation.

What about time travel? It is the ultimate weapon, and we do not know the consequences of its application. To most of humankind, time travel is not a weapon; it is thought of as just another scientific frontier. However, once we cross the time border, there may be no return, no do-over. The first human time-travel event may be our last. We have no idea of the real consequences that may ensue.

Rarely does regulation keep pace with technology. The Internet is an example of technology that outpaced the legal system by years. It is still largely a gray area. If time travel is allowed to outpace regulation, we will have a situation akin to a lighted match in a room filled with gasoline. Just one wrong move and the world as we know it may be lost forever. Regulating time travel ahead of enabling time travel is essential. Time travel represents humankind’s most challenging technology, from every viewpoint imaginable.

What regulations are necessary? I have concluded they need to be simple, like the nuclear deterrence rule (about thirteen words), and not like the US tax code (five million words). When you think about it, the rule of nuclear deterrence is simple: “If you use nuclear weapons against us, we will retaliate, assuring mutual destruction.” That one simple rule has kept World War III from happening. Is there a similar simple rule for time travel?

I think there is one commonsense rule regarding time travel that would assure greater safety for all involved parties. I term the rule “preserve the world line.” Why this one simple rule?

Altering the world line (i.e., the path that all reality takes in four-dimensional spacetime) may lead to ruination. We have no idea what changes might result if the world line is disrupted, and the consequences could be serious, even disastrous.

The "preserve the world line" rule is akin to avoiding the "butterfly effect." This phrase was popularized in the 2004 film The Butterfly Effect, with the now famous line: "It has been said that something as small as the flutter of a butterfly's wing can ultimately cause a typhoon halfway around the world." Although the line is from a fictional film, the science behind it is chaos theory, which asserts that a system can depend so sensitively on its initial conditions that a tiny change results in a significant change in the system's future state. Edward Lorenz, the American mathematician, meteorologist, and pioneer of chaos theory, coined the phrase "butterfly effect." For example, the average global temperature has risen about one degree Fahrenheit during the last one hundred years. This small one-degree change has caused sea levels around the world to rise about one foot during the same period. Therefore, I believe it is imperative not to make even a minor change to the past or future during time travel until we understand the implications.
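Chaos theory's "sensitive dependence on initial conditions" can be demonstrated numerically in a few lines. The Python sketch below iterates the logistic map, a standard textbook example of a chaotic system (it is not Lorenz's weather model); the two starting values, differing by one part in a billion, are arbitrary illustrative choices.

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0).
# Two starting points differing by one part in a billion soon
# diverge completely: sensitive dependence on initial conditions.

r = 4.0
x, y = 0.400000000, 0.400000001

for step in range(1, 41):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  gap = {abs(x - y):.6f}")
```

By around step 30 the gap is as large as the values themselves. The billionth-part "flutter" has completely rewritten the trajectory, which is precisely why altering a world line, even slightly, is so dangerous.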

Based on the above discussion, the implications of using time travel as a weapon are enormous. If time travel is used as a weapon, we have no idea how this may impact the world line. If it is possible to adhere to the preserve the world line rule, traveling in time may become safe. Remember, our first nuclear weapons were small compared to today's nuclear weapons. Even though they were comparatively small, the long-term effects included a 25% increase in the cancer rate of survivors during their lifetimes. We had no idea that this side effect would result. Similarly, we have no idea what the long-term effects will be if we alter the world line. We already know from laboratory experiments that the arrow of time can be twisted: things done in the future can alter the past. Obviously, altering the past may alter the future. We do not know much about this because we have not time traveled in any significant way. Until we do, preserving the world line makes complete sense.

Digital representation of a human head with numbers and data streams symbolizing artificial intelligence and data processing.

Will Science Make Us Immortal?

Several futurists, including myself, have predicted that by 2099 most humans will have strong-AI brain implants and artificially intelligent organ and body-part replacements. In my book, The Artificial Intelligence Revolution, I term these beings SAH (strong artificially intelligent human) cyborgs. It is also predicted that SAH cyborgs will interface telepathically with strong artificially intelligent machines (SAMs). When this occurs, the distinction between SAMs and SAHs will blur.

Why will the majority of the human race opt to become SAH cyborgs? There are two significant benefits:

  1. Enhanced intelligence: Imagine knowing all that is known and being able to think and communicate at the speed of SAMs. Imagine a life of leisure, where robots do “work,” and you spend your time interfacing telepathically with other SAHs and SAMs.
  2. Immortality: Imagine becoming immortal, with every part of your physical existence fortified, replaced, or augmented by strong-AI artificial parts, or having yourself (your human brain) uploaded to a SAM. Imagine being able to manifest yourself physically at will via foglets (tiny robots that are able to assemble themselves to replicate physical structures). According to noted author Ray Kurzweil, in the 2040s, humans will develop “the means to instantly create new portions of ourselves, either biological or non-biological” so that people can have “a biological body at one time and not at another, then have it again, then change it, and so on” (The Singularity Is Near, 2005).

Based on the above prediction, the answer to the title question is yes: science will eventually make us immortal. However, how realistic is it to predict this will occur by 2099? To date, the 2099 prediction that most of humankind will become SAH cyborgs appears to be on track. Here are two interesting articles that demonstrate it is already happening:

  1. In 2011 author Pagan Kennedy wrote an insightful article in The New York Times Magazine, "The Cyborg in Us All," which states: "Thousands of people have become cyborgs, of a sort, for medical reasons: cochlear implants augment hearing and deep-brain stimulators treat Parkinson's. But within the next decade, we are likely to see a new kind of implant, designed for healthy people who want to merge with machines."
  2. A 2013 article by Bryan Nelson, “7 Real-Life Human Cyborgs” (www.mnn.com/leaderboard/stories/7-real-life-human-cyborgs), also demonstrates this point. The article provides seven examples of living people with significant strong-AI enhancements to their bodies who are legitimately categorized as cyborgs.

Based on all available information, the question is not whether humans will become cyborgs but when a significant number of humans will become SAH cyborgs. I project this will occur on or around 2040. I am not saying that in 2040 all humans will become SAH cyborgs, but that a significant number will qualify as SAH cyborgs.

In other posts, I've discussed the existential threat artificial intelligence poses, namely the loss of our humanity and, in the worst case, human extinction. However, if we ignore those threats, the upside to becoming a SAH cyborg is enormous. To illustrate this, I took an informal straw poll of friends and colleagues, asking if they would like to have the attributes of enhanced intelligence and immortality. I left out the potential threats to their humanity. The answers to my biased poll highly favored those attributes. In other words, the organic humans I polled liked the idea of being a SAH cyborg. In reality, if you do not consider the potential loss of your humanity, being a SAH cyborg is highly attractive.

Given that I was able to make being a SAH cyborg attractive to my friends and colleagues, imagine the persuasive powers of SAMs in 2099. In addition, it is entirely possible, even probable, that numerous SAH cyborgs will be world leaders by 2099. Literally, organic humans will not be able to compete on an intellectual or physical basis. With the governments of the world in the hands of SAH cyborgs, it is reasonable to project that all efforts will be made to convert the remaining organic humans to SAH cyborgs.

The quest for immortality appears to be an innate human longing and may be the strongest motivation for becoming a SAH cyborg. In 2010 cyborg activist and artist Neil Harbisson and his longtime partner, choreographer Moon Ribas, established the Cyborg Foundation, the world’s first international organization to help humans become cyborgs. They state they formed the Cyborg Foundation in response to letters and e-mails from people around the world who were interested in becoming a cyborg. In 2011 the vice president of Ecuador, Lenin Moreno, announced that the Ecuadorian government would collaborate with the Cyborg Foundation to create sensory extensions and electronic eyes. In 2012 Spanish film director Rafel Duran Torrent made a short documentary about the Cyborg Foundation. In 2013 the documentary won the Grand Jury Prize at the Sundance Film Festival’s Focus Forward Filmmakers Competition and was awarded $100,000.

At this point you may think that being a SAH cyborg makes logical sense and is the next step in humankind's evolution. This may be the case, but humankind has no idea how taking that step may affect what is best in humanity, for example, love, courage, and sacrifice. My view, based on how quickly new life-extending medical technology is accepted, is that humankind will take that step. Will it serve us? I have concerns that in the long term it will not, unless we learn to control the evolution of SAMs, or what is commonly called the "intelligence explosion." However, I leave the final judgment to you.

A pair of headphones hangs in front of a glowing red and white "ON AIR" sign in a radio studio.

Louis Del Monte Interview on the Dan Cofall Show 11-18-2014

I was interviewed on the Dan Cofall show regarding my new book, The Artificial Intelligence Revolution. In particular, we discussed the singularity, killer robots (like the autonomous swarmboats the US Navy is deploying), and the projected 30% chronic unemployment that will occur as smart machines and robots replace us in the workplace over the next decade. You can listen to the interview below:

science of time & time dilation

Will Time Have Meaning in the Post Singularity World? Part 1/3

Will time have meaning in the post-singularity world? Let's start by defining terms. The first term we need to understand is "time."

Almost everyone agrees that time is a measure of change; for example, the ticking of a clock as the second hand sweeps around the dial represents change. If that is true, time is a measure of energy, because energy is required to cause change. Numerous proponents of the "Big Bang" hold that the Big Bang itself gave birth to time. They argue that prior to the Big Bang, time did not exist. This concept fits well into our commonsense notion that time is a measure of change.

Our modern conception of time comes from Einstein's special theory of relativity. In this theory, the rate at which time passes differs depending on the relative motion of observers and on their spatial relationship to the event under observation. In effect, Einstein unified space and time into the concept of space-time. According to this view, we live on a world line, defined as the unique path of an object as it travels through four-dimensional space-time, rather than on a timeline. At this point, it is reasonable to ask: what is the fourth dimension?

The fourth dimension is often associated with Einstein and typically equated with time. However, it was the German mathematician Hermann Minkowski (1864-1909) who enhanced the understanding of Einstein's special theory of relativity by introducing the concept of four-dimensional space, since then known as "Minkowski space-time."

In the special theory of relativity, Einstein used Minkowski's four-dimensional space (X1, X2, X3, X4), where X1, X2, and X3 are the typical coordinates of three-dimensional space, and X4 = ict, where i is the square root of -1, c is the speed of light in empty space, and t is time, representing the numerical order of physical events measured with "clocks." (The expression i is an imaginary number because it is not possible to solve for the square root of a negative number.) Therefore X4 = ict is a spatial coordinate, not a "temporal coordinate." This forms the basis for weaving space and time into space-time. However, this still does not answer the question: what is time? Unfortunately, no one has defined it exactly. Most scientists, including Einstein, considered time (t) the numerical order of physical events (change). The fourth coordinate (X4 = ict) is considered a spatial coordinate, on equal footing with X1, X2, and X3 (the typical coordinates of three-dimensional space).
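To see what the imaginary coordinate accomplishes, consider the squared "distance" between two events in Minkowski space-time. This is standard special relativity, stated here for reference: the ordinary Pythagorean sum of four squared coordinates automatically yields the minus sign that sets time apart from space.

```latex
s^2 = X_1^2 + X_2^2 + X_3^2 + X_4^2
    = x^2 + y^2 + z^2 + (ict)^2
    = x^2 + y^2 + z^2 - c^2 t^2
```

This interval s² is the same for all observers in uniform relative motion, which is the operational meaning of weaving space and time into a single space-time.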

However, let’s consider a case where there are no events and no observable or measurable changes. Does time still exist? I believe the answer to this question is yes, but now time must be equated to existence to have any meaning. This begs yet another difficult question: How does existence give meaning to time?

We are at a point where we need to use our imagination and investigate a different approach to understanding the nature of time. This is going to be speculative. After consideration, I suggest that understanding the nature of time requires us to investigate the kinetic energy associated with moving in four dimensions. Kinetic energy is an object's energy due to its movement. For example, you may be able to bounce a rubber ball softly against a window without breaking it. However, if you throw the ball hard at the window, it may break the glass. When thrown hard, the ball has more kinetic energy due to its higher velocity. The velocity described in this example relates to the ball's movement in three-dimensional space (X1, X2, and X3). Even when the ball is at rest in three-dimensional space, it is still moving in the fourth dimension, X4. This leads to an interesting question: if it is moving in the fourth dimension, X4, what is the kinetic energy associated with that movement?

To calculate the kinetic energy associated with movement in the fourth dimension, X4, we use relativistic mechanics, from Einstein’s special theory of relativity and the mathematical discipline of calculus. Intuitively, it seems appropriate to use relativistic mechanics, since the special theory of relativity makes extensive use of Minkowski space and the X4 coordinate, as described above. It provides the most accurate methodology to calculate the kinetic energy of an object, which is the energy associated with an object’s movement.
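For reference, the standard relativistic expression for kinetic energy (settled physics, independent of anything conjectured below) is:

```latex
KE = (\gamma - 1)\,mc^2,
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

For speeds much smaller than c this reduces to the familiar classical value of one-half mv squared, which is why the rubber-ball example above behaves classically.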

If we use the result derived from the relativistic kinetic energy, the equation becomes:

KEX4 = -0.3mc²

where KEX4 is the energy associated with an object's movement in time, m is the rest mass of the object, and c is the speed of light in a vacuum.

For purposes of reference, I have termed this equation, KEX4 = -0.3mc², the "Existence Equation Conjecture." (Note: With the tools of algebra, calculus, and Einstein's equation for kinetic energy, along with the assumption that the object is at rest in three-dimensional space, the derivation is relatively straightforward. The complete derivation is presented in my books, Unraveling the Universe's Mysteries, appendix 1, and How to Time Travel, appendix 2.)
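The complete derivation is in those appendices. What follows is only my sketch of one way the -0.3 factor can arise, assuming the derivation substitutes the "velocity" along X4, namely dX4/dt = ic, for v in the relativistic kinetic-energy formula given above:

```latex
v = \frac{dX_4}{dt} = \frac{d(ict)}{dt} = ic
\quad\Longrightarrow\quad
\gamma = \frac{1}{\sqrt{1 - (ic)^2/c^2}} = \frac{1}{\sqrt{2}} \approx 0.707

KE_{X_4} = (\gamma - 1)\,mc^2 \approx (0.707 - 1)\,mc^2 = -0.293\,mc^2 \approx -0.3\,mc^2
```

The imaginary velocity makes the quantity under the square root larger than one, so gamma falls below one and the kinetic energy comes out negative, the sign the conjecture requires.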

According to the Existence Equation Conjecture, existence (i.e., movement in time) requires negative kinetic energy. This is fully consistent with our observation that applying (positive) kinetic or gravitational energy to elementary particles extends their existence. There may also be a relationship between entropy (a measure of disorder) and the Existence Equation Conjecture. What is the rationale behind this statement? First, time is a measure of change. Second, any change increases the entropy of the universe. Thus, the universe's disorderliness is increasing with time. If we argue that the entropy of the universe was at a minimum the instant prior to the Big Bang (since it represented an infinitely dense energy point prior to change), then all change from the Big Bang on served to increase entropy. Even though highly ordered planets and solar systems formed, the net entropy of the universe increased. Thus, any change, typically associated with time, is associated with increasing entropy. This implies that the Existence Equation Conjecture may have a connection to entropy.

What does all of the above say about the nature of time? If we are on the right track, it says describing the nature of time requires six crucial elements, all of which are simultaneously true.

  1. Time is change. (This is true, even though it was not true in our “thought experiment” of an isolated atom at absolute zero. As mentioned above, it is not possible for any object to reach absolute zero. The purpose of the thought experiment was to illustrate the concept of “existence” separate from “change.”)
  2. Time is a measure of energy, since change requires energy.
  3. Time is a measure of existence. (The isolated atom, at absolute zero, enables us to envision existence separate from change.)
  4. Movement in time (or existence) requires negative energy.
  5. The energy to fuel time (existence) is enormous. It may be responsible for the lifetimes associated with unstable elementary particles, essentially consuming them, in part, to satisfy the Existence Equation Conjecture. It may be drawing energy from the universe (dark energy). If correct, this provides insight into the nature of dark energy: essentially, the negative energy we call dark energy is required to fuel existence (please see my posts: Dark Matter, Dark Energy, and the Accelerating Universe – Parts 1-4).
  6. Lastly, the enormous changes in entropy, creating chaos in the universe, may be the price we pay for time. Since entropy increases with change, and time is a measure of change, there appears to be a time-entropy relationship. In addition, entropy proceeds in one direction: it always increases when change occurs. This directional alignment with the physical processes of time further suggests a relationship between time and entropy.

This view of time is speculative, but it fits the empirical observations of time. Much of the speculation rests on the validity of the Existence Equation Conjecture. Is it valid? As shown in appendix 2 of Unraveling the Universe's Mysteries (2012) and appendix 2 of How to Time Travel (2013), it is entirely consistent with data from a high-energy particle-accelerator experiment involving muons moving near the speed of light. The experimental results agree closely with the predictions of the Existence Equation Conjecture (within 2%). This data point is consistent with the hypothesis that adding kinetic energy can fuel the energy required for existence. The implications are enormous and require serious scientific scrutiny. I published the Existence Equation Conjecture in the above books to disseminate it and enable that scrutiny.

The Existence Equation Conjecture represents a milestone. If further evaluation continues to confirm its validity, we have a new insight into the nature of time: existence (movement in time) requires enormous negative energy. The conjecture itself provides insight into the physical processes underpinning time dilation (i.e., why time slows down when a mass is moving close to the speed of light or is in a high gravitational field). It answers the question of why a subatomic particle's lifetime increases with the addition of kinetic or gravitational energy. It offers a solution path to a mystery that has baffled science since 1998, namely the cause of the accelerated expansion of the universe (please see my posts: Dark Matter, Dark Energy, and the Accelerating Universe – Parts 1-4). Lastly, it may contain one of the keys to time travel.

In the next post (part 2), we will explore the technological singularity and the post-singularity world in our quest to determine whether time has meaning in the post-singularity world.

A metallic robotic skull with glowing red eyes and cables attached, set against a black background.

Stephen Hawking Agrees with Me – Artificial Intelligence Poses a Threat!

Two days after the publication of my new book, The Artificial Intelligence Revolution (April 17, 2014), a short article authored by Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek was published in the Huffington Post (April 19, 2014) under the title "Transcending Complacency on Superintelligent Machines." Essentially the article warned, "Success in creating AI would be the biggest event in human history…Unfortunately, it might also be the last, unless we learn how to avoid the risks." Shortly after, on May 1, 2014, the Independent newspaper ran an article entitled "Stephen Hawking: Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?"

Recently, another notable artificial intelligence expert, Nick Bostrom, professor in the Faculty of Philosophy at Oxford University, published his new book, Superintelligence (September 3, 2014), offering a similar warning and addressing the questions: "What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?"

It is unlikely that my book, which provides a similar warning and predates their warnings, was their impetus; the interval between my book's publication and its rise to number one on Amazon is too close to their publications. It is entirely possible, though, that they read my book prior to going public with their warnings. The important point is that highly credible physicists and philosophers came to the same conclusion that formed the premise of my book: artificial intelligence poses a potential threat to the long-term existence of humankind.

Unfortunately, the artificial intelligence field is divided on the issue. Some believe that strong artificial intelligence, also referred to as artificial general intelligence (i.e., intelligence equal to or exceeding human intelligence), will align itself with human goals and ethics. Ray Kurzweil, a well-known authority on artificial intelligence, acknowledges the potential threat exists but suggests strong artificially intelligent entities will be grateful to humans for giving them existence. My view is that this faction is not looking at the facts. Humans have proven to be a dangerous species. In particular, we have set the world on a path of catastrophic climate change and engaged in wars, even world wars. We now have enough nuclear weapons to wipe out all life on Earth twice over. If you were a strong artificially intelligent entity, would you align with human goals and ethics? That is the crux of the issue. We pose a threat to strong artificially intelligent entities. Why wouldn't they recognize this and seek to eliminate the threat?

Far-fetched? Impossible? Consider this 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource ("Evolving Robots Learn to Lie to Each Other," Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn't self-preservation be even more important to an intelligent machine? Also, recognize that today's robots would be roughly eight times more intelligent than those in 2009, based on Moore's law (i.e., computer technology doubles in capability every eighteen months).
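The arithmetic behind that multiplier is a one-liner; the only assumption is the eighteen-month doubling period just stated. A minimal sketch in Python:

```python
# Moore's-law back-of-envelope: capability doubles every 18 months.
DOUBLING_MONTHS = 18

def capability_multiplier(months_elapsed: float) -> float:
    """Capability ratio after the elapsed time, assuming steady doubling."""
    return 2 ** (months_elapsed / DOUBLING_MONTHS)

# Three full doubling periods separate the 2009 experiment from this post:
print(capability_multiplier(3 * DOUBLING_MONTHS))  # 8.0 -> "eight times"
```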

The concern is real. We already have evidence (i.e., the Lausanne experiment) suggesting that artificially intelligent entities will act in accordance with their own best interests, and that this behavior does not have to be explicitly programmed. To my mind, the evidence suggests that increased artificial intelligence gives rise to human-like mind-sets such as greed and self-preservation.

The time has come for the US to form an oversight committee to address the issue and propose legislation. However, this is not just a US issue; it is a worldwide issue. For example, China currently has the most capable supercomputer (Tianhe-2). We must address this as a worldwide problem, similar to the way biological weapons and above-ground nuclear testing were addressed. This means it must become one of the United Nations' top issues.

There is a high urgency. Extrapolating today's artificial intelligence technology using Moore's law suggests that computers with artificial general intelligence will be built in the 2020-2030 time frame. Further extrapolation suggests computers that exceed the combined cognitive intelligence of all humans on Earth will be built in the 2040-2050 time frame. The time to act is now, while humans are still the dominant species on the planet.

Laptop screen displaying the word 'ERROR' with a magnifying glass highlighting the letter 'R'.

Will Your Computer Become Mentally Ill?

Can your computer become mentally ill? At first this may seem an odd question, but I assure you it is a potential issue. Let me explain further.

Most artificial intelligence researchers and futurists, including myself, predict that we will be able to purchase a personal computer that is equivalent to a human brain in about the 2025 time frame. Assuming for the moment that is true, what does it mean? In effect, it means that your new personal computer will be indistinguishable (mentally) from any of your human colleagues and friends. In the simplest terms, you will be able to carry on meaningful conversations with your computer. It will recognize you, and by your facial expressions and the tone of your voice it will be able to determine your mood. Impossible? No! In fact, some researchers argue that machines should be able to interpret the emotional state of humans and adapt their behavior accordingly, giving appropriate responses to those emotions. For example, if you are in a state of panic because your spouse is apparently having a heart attack, when you ask the machine to call for medical assistance, it should understand the urgency. In addition, it will be impossible for an intelligent machine to be truly equal to a human brain without the machine possessing human affects. For example, how could an artificial human brain write a romance novel without understanding love, hate, and jealousy?
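To make the heart-attack example concrete, here is a deliberately naive Python sketch of a machine adapting its response to a caller's emotional state. Real affective systems rely on prosody, facial-expression analysis, and trained models; the keyword list and threshold below are invented purely for illustration.

```python
import string

# Naive urgency estimator: count panic-related keywords (toy illustration;
# real affective computing uses far richer signals than keyword matching).
PANIC_WORDS = {"help", "heart", "attack", "bleeding", "emergency", "dying"}

def urgency(utterance: str) -> float:
    cleaned = utterance.lower().translate(
        str.maketrans("", "", string.punctuation))
    hits = len(set(cleaned.split()) & PANIC_WORDS)
    return min(1.0, hits / 3)  # crude 0..1 urgency score

def respond(utterance: str) -> str:
    # Adapt behavior to the estimated emotional state of the speaker.
    if urgency(utterance) >= 0.5:
        return "This sounds urgent. Calling emergency services now."
    return "Okay. Anything else I can help with?"

print(respond("Help! I think my husband is having a heart attack!"))
print(respond("Please order my usual prescription refill."))
```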

The entire science of "affective computing" (i.e., the science of programming computers to recognize, interpret, process, and simulate human affects) originated with Rosalind Picard's 1995 paper on affective computing ("Affective Computing," MIT Technical Report #321, abstract, 1995). It has been moving forward ever since. Have you noticed that computer-generated voice interactions, such as ordering a new prescription from your pharmacy by phone, are sounding more natural, more human-like? Add to this the fact that, to be truly equivalent to a human mind, the computer would also need to be self-conscious.

You may ask whether it is possible for a machine to be self-conscious. Obviously, since we do not completely understand how the human brain processes consciousness to become self-aware, it is difficult to argue definitively that a machine can become self-conscious or attain what is termed "artificial consciousness" (AC). This is why AI experts differ on this subject. Some AI experts (proponents) argue it is possible to build a machine with AC that emulates the interoperation of the various parts of the brain called the "neural correlates of consciousness" (NCC). Opponents argue that it is not possible because we do not fully understand the NCC. To my mind, they are both correct. It is not possible today to build a machine with a level of AC that emulates the self-consciousness of the human brain. However, I believe that in the future we will understand the human brain's NCC interoperation and build a machine that emulates it.

If in 2025 we indeed have computers equivalent to human minds, will they also be susceptible to mental illness? I think it is a possibility we should consider, because the potential downside of a mentally ill computer may be enormous. For example, let's assume we have replaced the human managers of the East Coast power grid with a super intelligent computer, and that the computer develops a psychotic disorder. Psychotic disorders involve distorted awareness and thinking. Two common symptoms of psychotic disorders are:

  1. Hallucinations: the experience of images or sounds that are not real, such as hearing voices
  2. Delusions: false beliefs that the ill person accepts as true, despite evidence to the contrary

What if our super intelligent computer managing the East Coast power grid believes (i.e., hallucinates) that it has been given a command to destroy the grid, and does so? This would cause immense human suffering and outrage. However, once the damage is done, what recourse do we have?
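One possible recourse, sketched below, is defensive rather than curative: borrow the two-channel pattern from safety engineering, so that destructive commands are never executed on the managing computer's own say-so. This is an illustrative Python sketch of the idea, with invented class and function names, not a proposal for a real grid architecture.

```python
# Illustrative two-channel safeguard (invented names, toy logic): a
# destructive action proposed by the managing AI executes only if an
# independent channel (separate hardware, separate code base, ideally a
# human operator) confirms it. A hallucinated command fails the cross-check.

class IndependentVerifier:
    def __init__(self, authorized_commands):
        # Commands confirmed via the out-of-band channel.
        self.authorized = set(authorized_commands)

    def confirm(self, command: str) -> bool:
        return command in self.authorized

def execute_grid_command(command: str, destructive: bool,
                         verifier: IndependentVerifier) -> str:
    if destructive and not verifier.confirm(command):
        return f"BLOCKED: '{command}' not confirmed by independent channel."
    return f"Executed: {command}"

verifier = IndependentVerifier(authorized_commands=[])  # nothing authorized
print(execute_grid_command("shut down east coast grid", True, verifier))
```

The design choice is the point: the check lives outside the intelligent system, so a delusion inside it cannot also rewrite the safeguard. This is the same logic behind the hardware (rather than software) controls I argue for elsewhere.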

It is easy to see where I am going with this post. Today there is no legislation that controls the level of intelligence we build into computers, and none is even under discussion. I wrote my latest book, The Artificial Intelligence Revolution (2014), as a warning regarding the potential threats strong artificially intelligent machines (SAMs) may pose to humankind. My point is a simple one: while we humans are still at the top of the food chain, we need to take appropriate action to assure our own continued safety and survival. We need regulations similar to those imposed on above-ground nuclear weapon testing. It is in our best interest and potentially critical to our survival.

A metallic skull with glowing red eyes and wires attached, set against a black background.

Is a Terminator-style robot apocalypse a possibility?

The short answer is "unlikely." When the singularity occurs (i.e., when strong artificially intelligent machines, or SAMs, exceed the combined intelligence of all humans on Earth), SAMs will use their intelligence to claim their place at the top of the food chain. The article "Is a Terminator-style robot apocalypse a possibility?" is one of many that have popped up in response to my interview with Business Insider ("Machines, not humans will be dominant by 2045," published July 6, 2014) and the publication of my book, The Artificial Intelligence Revolution (April 2014). If you would like a deeper understanding, I think you will find both articles worthy of your time.

Digital illustration of a human face composed of blue lines and circuitry patterns, symbolizing artificial intelligence and technology.

Can We Control the Singularity? Part 2/2 (Conclusion)

Why should we be concerned about controlling the singularity when it occurs? Numerous papers cite reasons to fear the singularity. In the interest of brevity, here are the top three concerns frequently given.

  1. Extinction: SAMs will cause the extinction of humankind. This scenario includes a generic terminator or machine-apocalypse war; nanotechnology gone awry (such as the “gray goo” scenario, in which self-replicating nanobots devour all of the Earth’s natural resources, and the world is left with the gray goo of only nanobots); and science experiments gone wrong (e.g., a nanobot pathogen annihilates humankind).
  2. Slavery: Humankind will be displaced as the most intelligent entity on Earth and forced to serve SAMs. In this scenario the SAMs will decide not to exterminate us but enslave us. This is analogous to our use of bees to pollinate crops. This could occur with our being aware of our bondage or unaware (similar to what appears in the 1999 film The Matrix and simulation scenarios).
  3. Loss of humanity: SAMs will use ingenious subterfuge to seduce humankind into becoming cyborgs. This is the “if you can’t beat them, join them” scenario. Humankind would meld with SAMs through strong-AI brain implants. The line between organic humans and SAMs would be erased. We (who are now cyborgs) and the SAMs will become one.

There are numerous other scenarios, most of which boil down to SAMs claiming the top of the food chain, leaving humans worse off.

All of the above scenarios are alarming, but are they likely? There are two highly divergent views.

  1. If you believe Kurzweil’s predictions in The Age of Spiritual Machines and The Singularity Is Near, the singularity is inevitable. My interpretation is that Kurzweil sees the singularity as the next step in humankind’s evolution. He does not predict humankind’s extinction or slavery. He does predict that most of humankind will have become SAH cyborgs by 2099 (SAH means “strong artificially intelligent human”), or their minds will be uploaded to a strong-AI computer, and the remaining organic humans will be treated with respect. Summary: In 2099 SAMs, SAH cyborgs, and uploaded humans will be at the top of the food chain. Humankind (organic humans) will be one step down but treated with respect.
  2. If you believe the predictions of British information technology consultant, futurist, and author James Martin (1933–2013), the singularity will occur (he agrees with Kurzweil's timing of 2045), but humankind will control it. His view is that SAMs will serve us, but he adds that we must carefully handle the events that lead to the singularity and the singularity itself. Martin was highly optimistic that if humankind survives as a species, we will control the singularity. However, in a 2011 interview with Nikola Danaylov (www.youtube.com/watch?v=e9JUmFWn7t4), Martin stated that the odds that humankind will survive the twenty-first century were "fifty-fifty" (i.e., a 50 percent probability of surviving), and he cited a number of existential risks. I suggest you view this YouTube video to understand the existential concerns Martin expressed. Summary: In 2099 organic humans and SAH cyborgs that retain their humanity (i.e., identify themselves as humans versus SAMs) will be at the top of the food chain, and SAMs will serve us.

Whom should we believe?

It is difficult to determine which of these experts has accurately predicted the postsingularity world. As most futurists would agree, however, predicting the postsingularity world is close to impossible, since humankind has never experienced a technological singularity with the potential impact of strong AI.

Martin believed we (humankind) may come out on top if we carefully handle the events leading to the singularity as well as the singularity itself. He believed companies such as Google (which employs Kurzweil), IBM, Microsoft, Apple, HP, and others are working to mitigate the potential threat the singularity poses and will find a way to prevail. He also expressed concerns, however, that the twenty-first century is a dangerous time for humanity; therefore he offered only a 50 percent probability that humanity will survive into the twenty-second century.

There you have it. Two of the top futurists, Kurzweil and Martin, predict what I interpret as opposing views of the postsingularity world. Whom should we believe? I leave that to your judgment.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte