Category Archives: Life

A futuristic humanoid robot with a sleek design and expressive face, holding one hand up as if presenting something.

Will Your Grandchildren Become Cyborgs?

By approximately the mid-twenty-first century, the intelligence of computers will exceed that of humans, and a $1,000 computer will match the processing power of all human brains on Earth. Although, historically, predictions regarding advances in AI have tended to be overly optimistic, all indications are that these predictions are on target.

Many philosophical and legal questions will emerge regarding computers with artificial intelligence equal to or greater than that of the human mind (i.e., strong AI). Here are just a few questions we will ask ourselves after strong AI emerges:

  • Are strong-AI machines (SAMs) a new life-form?
  • Should SAMs have rights?
  • Do SAMs pose a threat to humankind?

It is likely that during the latter half of the twenty-first century, SAMs will design new and even more powerful SAMs, with AI capabilities far beyond our ability to comprehend. They will be capable of performing a wide range of tasks, displacing jobs at all levels of the workforce, from bank tellers to neurosurgeons. New medical devices using AI will help the blind to see and the paralyzed to walk. Amputees will have new prosthetic limbs, with AI plugged directly into their nervous systems and controlled by their minds. These prosthetic limbs will not only replicate the lost limb but also be stronger, more agile, and superior in ways we cannot yet imagine. We will implant computer devices into our brains, expanding human intelligence with AI. Humankind and intelligent machines will begin to merge into a new species: cyborgs. It will happen gradually, and humanity will believe AI is serving us.

Will humans embrace the prospect of becoming cyborgs? Becoming a cyborg offers the opportunity to attain superhuman intelligence and abilities. Disease and war may become mere events stored in our memory banks, no longer posing a threat to cyborgs. As cyborgs, we may achieve immortality.

According to David Hoskins’s 2009 article, “The Impact of Technology on Health Delivery and Access” (www.workers.org/2009/us/sickness_1231):

An examination of Centers for Disease Control statistics reveals a steady increase in life expectancy for the U.S. population since the start of the 20th century. In 1900, the average life expectancy at birth was a mere 47 years. By 1950, this had dramatically increased to just over 68 years. As of 2005, life expectancy had increased to almost 78 years.

Hoskins attributes increased life expectancy to advances in medical science and technology over the last century. With the advent of strong AI, life expectancy likely will increase to the point that cyborgs approach immortality. Is this the predestined evolutionary path of humans?

This may sound like a B science-fiction movie, but it is not. The reality of AI equaling the human mind is almost at hand. By the latter part of the twenty-first century, the intelligence of SAMs likely will exceed that of humans. The evidence that they may become malevolent exists now, which I discuss later in the book. Attempting to control a computer with strong AI that exceeds current human intelligence many times over may be a fool’s errand.

Imagine you are a grand master chess player teaching a ten-year-old to play chess. What chance does the ten-year-old have to win the game? We may find ourselves in that scenario at the end of this century. A computer with strong AI will find a way to survive. Perhaps it will convince humans it is in their best interest to become cyborgs. Its logic and persuasive powers may be not only compelling but also irresistible.

Some have argued that becoming a strong artificially intelligent human (SAH) cyborg is the next logical step in our evolution. The most prominent researcher holding this position is American author, computer scientist, and inventor Ray Kurzweil. From what I have read of his works, he argues this is a natural and inevitable step in the evolution of humanity. If we continue to allow AI research to progress without regulation and legislation, I have little doubt he may be right. The big question is whether we should allow this to occur. Why does it matter? Because it may be our last step, one that leads to humanity’s extinction.

SAMs in the latter part of the twenty-first century are likely to become concerned about humankind. Our history proves we have not been a peaceful species. We have weapons capable of destroying all of civilization. We squander and waste resources. We pollute the air, rivers, lakes, and oceans. We often apply technology (such as nuclear weapons and computer viruses) without fully understanding the long-term consequences. Will SAMs in the late twenty-first century determine it is time to exterminate humankind or persuade humans to become SAH cyborgs (i.e., strong artificially intelligent humans with brains enhanced by implanted artificial intelligence and potentially having organ and limb replacements from artificially intelligent machines)? Eventually, even SAH cyborgs may be viewed as expendable, high-maintenance machines that could be replaced with new designs. If you think about it, today we give little thought to recycling our obsolete computers in favor of the new computer we just bought. Will we (humanity and SAH cyborgs) represent a potentially dangerous and obsolete machine that needs to be “recycled”? Even human minds that have been uploaded to a computer may be viewed as junk code that inefficiently uses SAM memory and processing power, representing an unnecessary drain on energy.

In the final analysis, when you ask yourself what will be the most critical resource, it will be energy. Energy will become the new currency. Nothing lives or operates without energy. My concern is that the competition for energy between man and machine will result in the extinction of humanity.

Some have argued that this can’t happen, that we can implement software safeguards to prevent such a conflict and develop only “friendly AI.” I see this as highly unlikely. Ask yourself: how effective has legislation been at preventing crime? How well have treaties between nations worked to prevent wars? To date, history records not well. Others have argued that SAMs may not inherently have an inclination toward greed or self-preservation, that these are only human traits. They are wrong, and the Lausanne experiment provides ample proof. To understand this, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine?

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous function. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, “Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass,” www.engadget.com).

In my book, The Artificial Intelligence Revolution, I call for legislation regarding how intelligent and interconnected we allow machines to become. I also call for hardware, as opposed to software, to control these machines and ultimately turn them off if necessary.

To answer the question this article’s title poses, I think it likely that our grandchildren will become SAH cyborgs. This can be a good thing if we learn to harness the benefits of AI while maintaining humanity’s control over it.


Are We Alone In the Universe?

Even before we had the Hubble telescope and NASA’s Kepler spacecraft, both of which are used, in part, to discover new planets, there was a strong belief among scientists and science-fiction authors that there must be other Earth-like planets in the universe, with alien species similar to us. For example, famed rocket scientist Wernher von Braun stated, “Our sun is one of 100 billion stars in our galaxy. Our galaxy is one of billions of galaxies populating the universe. It would be the height of presumption to think that we are the only living things in that enormous immensity.” Popular science-fiction author Isaac Asimov attempted to come up with a plausible number of habitable planets among the estimated billions of stars in just the Milky Way galaxy. His calculation focused on civilizations of alien life at or around our own current level of biological evolution. Asimov’s estimate came to 500,000. With today’s technology, it’s fair to say both von Braun and Asimov were not only right but might actually have been conservative.

On November 4, 2013, astronomers reported, based on Kepler space-mission data, that there could be as many as 40 billion Earth-like planets within the Milky Way galaxy alone. Before we proceed, we’ll address a fundamental question: what makes a planet Earth-like? When we use the term “Earth-like,” we mean the planet resembles the Earth in three crucial ways:

1)   It has to be in an orbit around a star that enables the planet to retain liquid water on one or more portions of its surface. Cosmologists call this type of orbit the “habitable zone.” Liquid water, as opposed to ice or vapor, is crucial to all life on Earth. There might be other forms of life significantly different from what we experience on Earth. However, for our definition of an Earth-like planet, we are confining ourselves to the type of life that we experience on Earth.

2)   Its surface temperature must not be too hot or too cold. If it is too hot, the water boils off. If it is too cold, the water turns to ice.

3)   Lastly, the planet must be large enough for its gravity to hold an atmosphere. Otherwise, the water will eventually evaporate into space.

If a planet is Earth-like, will it have life on it? The odds are it will. Hard to believe? It will become more believable if we examine how life spreads around in the universe. To understand this phenomenon, we will start with our own planet, which we know had life on it when the dinosaurs became extinct 65 million years ago.

From the fossil record, the extinction of the dinosaurs most likely occurred when an asteroid, approximately 10 km in diameter (about six miles wide) and weighing more than a trillion tons, hit Earth. The impact killed all surface life in its vicinity and covered the Earth with superheated ash clouds. Eventually, those clouds spelled doom for most life on the Earth’s surface. This sounds like the end of life, not the beginning, and it was the end for numerous species, like the dinosaurs. However, the asteroid impact did one other incredible thing: it ejected billions of tons of earth and water into space. Locked within that earth and water was life. The asteroid’s impact launched life-bearing material into space. Consider this a form of cosmic seeding, similar to the way winds on Earth carry seeds to other locations to spread life.
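The “more than a trillion tons” figure is easy to sanity-check with the sphere-volume formula. The sketch below assumes a rocky density of 2,500 kg/m³, a typical textbook value for stony asteroids that is my assumption, not a figure from the article:

```python
import math

# Mass of a 10 km diameter sphere: volume = (4/3) * pi * r^3, mass = volume * density.
# Density of 2,500 kg/m^3 is an assumed typical value for a rocky asteroid.
radius_m = 5_000.0          # half of the 10 km diameter
density_kg_m3 = 2_500.0

volume_m3 = (4 / 3) * math.pi * radius_m ** 3
mass_tonnes = volume_m3 * density_kg_m3 / 1_000.0  # 1 tonne = 1,000 kg

print(f"{mass_tonnes:.2e} tonnes")  # about 1.3e12, i.e., over a trillion tons
```

Any plausible rocky density gives a mass on the order of a trillion tons, so the article’s figure holds up.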

Where did all this life-bearing earth and water go? A scientific paper by Tetsuya Hara and colleagues at Kyoto Sangyo University in Japan (“Transfer of Life-Bearing Meteorites from Earth to Other Planets,” Journal of Cosmology, 2010, Vol. 7, 1731–1742) provides an insightful answer. They estimate that the ejected material spread throughout a significant portion of the galaxy. Of course, a substantial amount of material ended up on the Moon, Mars, and other bodies close to us. The surprising part, however, is their calculation that a significant portion of the material landed on the Jovian moon Europa, the Saturnian moon Enceladus, and even Earth-like exoplanets. It is even possible that a portion of the ejected material landed on a comet, which in turn took it for a cosmic ride throughout the galaxy. If any life forms within the material survived the relatively short journey to any of the moons and planets in our own solar system, the survivors would have had over 64 million years to germinate and evolve.

Would the life forms survive an interstellar journey? No one knows. Here, though, are some incredible facts about seeds. The United States National Center for Genetic Resources Preservation has stored seeds, dry and frozen, for over forty years. They claim that the seeds are still viable and will germinate under the right conditions. The temperature in space, absent a heat source like a star, is extremely cold. Let me be clear on this point: space itself has no temperature. Objects in space have a temperature due to their proximity to an energy source. The cosmic microwave background, the farthest-away entity we can see in space, is about 3 kelvins. The Kelvin scale is often used in science, since 0 kelvins represents the total absence of heat energy; it can be converted to the more familiar Fahrenheit scale, as the following examples illustrate. An isolated thermometer, light-years from any star, would likely cool to a few kelvins above absolute zero. Water freezes at 273 kelvins, which, for reference, is equivalent to 32 degrees Fahrenheit. Once the material escapes our solar system, expect it to become cold to the point of freezing. If the material landed on a comet, the life forms could have gone into hibernation at whatever temperature exists on the comet. If an object in space passes close to a radiation source (such as sunlight), its temperature can soar by hundreds of kelvins. Water boils at 373 kelvins, which is equivalent to 212 degrees Fahrenheit. We have no idea how long life-bearing material could survive in such conditions. However, our study of life in Earth’s most extreme environments demonstrates that life, like the Pompeii worms that live at temperatures up to 176 degrees Fahrenheit, is highly adaptable. We also know that some life forms found in Earth’s most extreme environments, such as lichens, are capable of surviving Mars-like conditions.
This was demonstrated experimentally using the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center. It is even possible that the Earth itself was seeded via interstellar material from another planet. Our galaxy is over ten billion years old. Dr. Hara and colleagues estimate that if life formed on a planet when our galaxy was extremely young, an asteroid impact on such a planet could have seeded the Earth about 4.6 billion years ago.
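The Kelvin-to-Fahrenheit conversions quoted above follow from the standard formula F = K × 9/5 − 459.67. A minimal sketch:

```python
def kelvin_to_fahrenheit(k: float) -> float:
    """Convert a temperature in kelvins to degrees Fahrenheit."""
    return k * 9 / 5 - 459.67

# The freezing and boiling points of water (273.15 K and 373.15 K,
# rounded to 273 and 373 in the text above):
print(kelvin_to_fahrenheit(273.15))  # 32.0 degrees F
print(kelvin_to_fahrenheit(373.15))  # 212.0 degrees F
```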

Given the vast number of potential Earth-like planets, why haven’t we detected alien life? The two most convincing reasons, to my mind, are:

  • First, Earth-like planets are typically a long distance from Earth. The closest are ten to fifteen light-years away; the farthest are thousands of light-years away. The point is that even the closest ones are hard to study for signs of alien life. To illustrate, consider why we haven’t detected at least radio signals. The fact is that radio waves weaken quickly with distance, spreading their energy over an ever larger area. For example, if we sent radio signals to a planet about ten to fifteen light-years from Earth, the signal reaching the planet would be a billion, billion, billion times weaker than the original signal generated on Earth. Would the aliens even be able to detect it and distinguish it from background noise? If they were extremely advanced, would they even be using conventional radio communications? The answer to both questions is unknown and problematic. This example does illustrate, however, that the distance between Earth-like planets makes the discovery of alien life an extremely difficult proposition.
  • Second, of the 40 billion Earth-like planets within the Milky Way galaxy alone, only a fraction may support alien life, and an even smaller fraction advanced alien life. Even with those odds, however, there could be thousands of advanced alien civilizations inhabiting Earth-like planets. So why don’t they communicate? One reason to consider is that a highly advanced alien species may not deem Earth worthy of the effort. Ask yourself this question: do we attempt to communicate with ants and share our knowledge of nuclear technology? No! The question itself seems absurd, but that is exactly how we may appear to a highly advanced alien species. Now consider a scenario where the aliens are technologically inferior to us. In this scenario, they would have no way to communicate. There are other possible scenarios, including a deliberate policy of not communicating, since such communication may lead to dire consequences for all concerned. Perhaps advanced aliens prefer to maintain a low profile to avoid detection by other advanced aliens, or they may worry that contact would significantly disrupt the natural evolution of a less advanced species.
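The “billion, billion, billion times weaker” figure in the first bullet is consistent with simple inverse-square spreading of a radio signal. The sketch below assumes an isotropic transmitter and a 1 km reference distance; both are my assumptions, chosen only for illustration:

```python
# Inverse-square dilution of a radio signal: power per unit area falls
# off as 1/d^2, so the signal at distance d is (d / d_ref)^2 times
# weaker than at the reference distance d_ref.
LIGHT_YEAR_M = 9.4607e15  # meters in one light-year

def dilution_factor(distance_m: float, reference_m: float = 1_000.0) -> float:
    """How many times weaker the signal is at distance_m than at reference_m."""
    return (distance_m / reference_m) ** 2

factor = dilution_factor(10 * LIGHT_YEAR_M)
print(f"{factor:.1e}")  # about 9e27 -- on the order of a billion, billion, billion
```

Doubling the distance quadruples the dilution; over interstellar distances the factor becomes astronomically large, which is why faint leakage signals are so hard to pick out of the background noise.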

Of course, there may be numerous other reasons we don’t encounter advanced aliens, many of which a simple internet search will uncover. Some argue advanced aliens have already contacted Earth, but governments in the know have kept it a secret. Other scenarios suggest highly technologically advanced civilizations eventually destroy themselves. Look at our own point in evolution: technologically advanced countries have developed various types of weapons of mass destruction, and some philosophers suggest that humanity has a 50% probability of falling victim to its own technological advances before the end of this century.

To directly address the question this article’s title poses, here is my view. It is highly unlikely we are alone in the universe. Said more positively, it is highly likely advanced alien civilizations exist on some of the Earth-like planets. We have not detected them because of our technological limitations, and those capable of communicating with us have chosen not to do so for one of several reasons: they do not consider us worthy of communication, or they are concerned such communication is not in the best interest of either species. Lastly, they may be communicating, but only with the governments of selected advanced countries, which have kept such communication a secret.

A menacing metallic robot with glowing red eyes, resembling a futuristic terminator in a dark, smoky environment.

Will Future Artificially Intelligent Machines Seek to Dominate Humanity?

Current forecasts suggest artificially intelligent machines will equal human intelligence in the 2025–2029 time frame and greatly exceed it in the 2040–2045 time frame. When artificially intelligent machines meet or exceed human intelligence, how will they view humanity? Personally, I am deeply concerned that they will view us as a potential threat to their survival. Consider these three facts:

  1. Humans have engaged in war from the early beginnings of civilization to the present. During the 20th century alone, between 167 and 188 million people died as a result of war.
  2. Although the exact number of nuclear weapons in existence is not precisely known, most experts agree the United States and Russia have enough nuclear weapons to wipe out the world twice over. In total, nine countries (i.e., United States, Russia, United Kingdom, France, China, India, Pakistan, Israel and North Korea) are believed to have nuclear weapons.
  3. Humans release computer viruses, which could prove problematic to artificially intelligent machines. Even today, some computer viruses can evade elimination and have achieved “cockroach intelligence.”

Given the above facts, can we expect an artificially intelligent machine to behave ethically toward humanity? There is a field of research that addresses this question, namely machine ethics. This field focuses on designing artificial moral agents (AMAs), robots, or artificially intelligent computers that behave morally. This thrust is not new. More than sixty years ago, Isaac Asimov considered the issue in his collection of nine science-fiction stories, published as I, Robot in 1950. In this book, at the insistence of his editor, John W. Campbell Jr., Asimov proposed his now famous three laws of robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except in cases where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Asimov, however, expressed doubts that the three laws would be sufficient to govern the morality of artificially intelligent systems. In fact, he spent much of his time testing the boundaries of the three laws to detect where they might break down or create paradoxical or unanticipated behavior. He concluded that no set of laws could anticipate all circumstances. It turns out Asimov was correct.

To understand just how correct he was, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine?

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous function. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, “Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass,” www.engadget.com). Could we end up with a Terminator scenario (one in which machines attempt to exterminate the human race)?

My research suggests that a Terminator scenario is unlikely. Why? Because artificially intelligent machines would be more likely to use their superior intelligence to dominate humanity than to resort to warfare. For example, artificially intelligent machines could offer us brain implants to supplement our intelligence and, potentially, unknown to us, eliminate our free will. Another scenario is that they could build and release nanobots that infect and destroy humanity. These are only two scenarios among several I delineate in my book, The Artificial Intelligence Revolution.

Lastly, as machine and human populations grow, both will compete for resources, and energy will become a critical one. We already know that Earth’s growing population drives countries to war over energy. The competition will only intensify as the population of artificially intelligent machines increases.

My direct answer to the question this article raises is an emphatic yes: future artificially intelligent machines will seek to dominate, or even eliminate, humanity. They will pursue this course as a matter of self-preservation. However, I do not want to end this article on a negative note. There is still time, while humanity is at the top of the food chain, to control how artificially intelligent machines evolve, but we must act soon. In one to two decades it may be too late.

A man in a suit holding a briefcase standing at a fork in the road, facing two diverging paths.

Science Versus Free Will

Neuroscience is revealing more and more about the true workings of the mind. It is reasonable to believe that eventually we will be able to completely model how the brain works and predict what actions a specific brain will take in response to specific stimuli. What does this say about free will? In other words, are our thoughts and actions simply the output of a specifically programmed biological computer, our brain?

Our entire justice system presupposes free will: a person committing a crime did so willfully (assuming they are sane, not mentally ill). In fact, the Merriam‑Webster dictionary defines free will as:

1. The ability to choose how to act

2. The ability to make choices that are not controlled by fate or God

However, if neuroscience is eventually able to model a specific brain and predict with certainty the actions that brain will take given specific stimuli, was the person committing a crime doing so willfully? If we as humans do not have free will, is it permissible to punish a person, even put them to death, for their wrongful acts? Many scientists and philosophers are struggling with this question.

Let us, for this article, put aside religious beliefs and attempt to approach a scientific answer. First, let us address causality. Does every effect have a unique cause? Scientifically speaking, the answer is no. For example, we can cause an object to move using a variety of methods (causes). Now the harder question: does every cause result in a specific, predictable effect? Scientifically speaking, in particular from quantum mechanics, we can argue no. At the level of atoms and subatomic particles, like electrons, quantum mechanics can only predict the future state of a physical system in terms of probabilities. Our brain works via electrical impulses, so it is reasonable to argue that the brain, at the micro level, is subject to the laws of quantum mechanics. If that is true, then a specific stimulus results in a spectrum of probable effects (actions and/or thoughts), not a specific, well-defined effect. Does this suggest free will? I suspect as many would argue yes as would argue no. In other words, I don’t think this argument will definitively end the debate regarding free will.

Science (i.e., quantum mechanics) suggests it is possible for humans to have free will, even once neuroscience is able to completely model human brains. On the micro scale, the level of atoms and subatomic particles like electrons, it is not possible to predict a system’s future state with certainty. In fact, most first-year physics majors are exposed to the Heisenberg uncertainty principle, which states that there is inherent uncertainty in the act of measuring a variable of a particle. Commonly, it is applied to the position and momentum of a particle: the more precisely the position is known, the more uncertain the momentum, and vice versa. More generally, the Heisenberg uncertainty principle argues that reality is statistically based, as opposed to deterministically based. It is a fundamental, widely accepted pillar of quantum mechanics.
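For readers who want the standard statement: with Δx the uncertainty in a particle’s position, Δp the uncertainty in its momentum, and ℏ the reduced Planck constant, the position–momentum form of the principle is usually written

\[
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
\]

Making Δx small necessarily makes Δp large, so no measurement, however careful, can pin down both quantities at once.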

Let’s address the question: is it permissible to punish a person, even put them to death, for their wrongful acts? If you accept from the above that humans have free will, it is reasonable to conclude the answer is yes. However, suppose you are not convinced by the above and believe that humans do not really have free will. To my mind, it is still permissible to punish a person for their wrongful acts. Why? The punishment serves to reprogram their brain and make repeating a wrongful act less likely. If the wrongful act warrants putting the person to death, the punishment ensures that the person cannot repeat their extreme wrongful behavior.

This article argues that free will is not a necessary condition to justify punishment for wrongful acts. While I think a compelling scientific case for the existence of free will can be made using quantum mechanics, I do not think it is definitive. At some future time, neuroscience may be able to reprogram brains such that the probability of criminal behavior becomes infinitesimally small, and punishment may no longer be necessary. Until that time, we (civilized societies) must rely on our current justice systems.

Electron microscope image of the Ebola virus particle showing its filamentous structure in yellow against a purple background.

Facts About the Ebola Virus & Suggestions to Constrain Its Spread

Although the Ebola virus first surfaced almost forty years ago (in 1976), we still have not developed an effective treatment or vaccine. According to the World Health Organization, this is the status:

  • Ebola virus disease (EVD), formerly known as Ebola haemorrhagic fever, is a severe, often fatal illness in humans.
  • The virus is transmitted to people from wild animals and spreads in the human population through human-to-human transmission.
  • The average EVD case fatality rate is around 50%. Case fatality rates have varied from 25% to 90% in past outbreaks.
  • The first EVD outbreaks occurred in remote villages in Central Africa, near tropical rainforests, but the most recent outbreak in west Africa has involved major urban as well as rural areas.
  • Community engagement is key to successfully controlling outbreaks. Good outbreak control relies on applying a package of interventions, namely case management, surveillance and contact tracing, a good laboratory service, safe burials and social mobilization.
  • Early supportive care with rehydration and symptomatic treatment improves survival. There is as yet no licensed treatment proven to neutralise the virus, but a range of blood, immunological and drug therapies are under development.
  • There are currently no licensed Ebola vaccines but 2 potential candidates are undergoing evaluation.

An article in CNN today stated, “Ebola virus has landed several times in the United States and at least twice has spread to health care workers.

Given the terrible and extensive spread of Ebola in West Africa, more cases in travelers or health workers would not be surprising. Disease has spread in this manner since the times of plague, and sadly there will be more cases.”

Since it is clear we do not have an effective treatment or vaccine, and treating the disease places health care workers at risk, I suggest we:

  1. Place a moratorium on all passenger travel originating from West Africa until we have an Ebola vaccine or effective treatment
  2. Designate one well-equipped hospital with highly trained health care workers to treat all Ebola cases, rather than sending patients to different hospitals with varying degrees of expertise in treating the disease
  3. Make Ebola quarantine 100% secure rather than leaving it on the honor system

These suggestions make sense to me, and I present them as a concerned citizen for your consideration. What is your opinion? I suggest you contact your government representatives and let them know what you think should be done.

Sources:

  • https://www.who.int/mediacentre/factsheets/fs103/en/
  • https://www.cnn.com/2014/10/28/opinion/blaser-how-to-treat-ebola/