Category Archives: Threats to Humankind

A man in a suit holding a briefcase standing at a fork in the road, facing two diverging paths.

Science Versus Free Will

Neuroscience is revealing more and more about the true workings of the mind. It is reasonable to believe that we will eventually be able to completely model how the brain works and predict what actions a specific brain will take in response to specific stimuli. What does this say about free will? In other words, are our thoughts and actions merely the output of a specifically programmed biological computer, our brain?

Our entire justice system presupposes free will: that a person committing a crime did so willfully (assuming the person is sane, not mentally ill). In fact, the Merriam‑Webster dictionary defines free will as:

1. The ability to choose how to act

2. The ability to make choices that are not controlled by fate or God

However, if neuroscience is eventually able to model a specific brain and predict with certainty the actions that brain will take given specific stimuli, was the person committing a crime doing so willfully? If we as humans do not have free will, is it permissible to punish a person, even put them to death, for their wrongful acts? Many scientists and philosophers are struggling with this question.

Let us, for this article, put aside religious beliefs and attempt to approach a scientific answer. First, let us address causality. Does every effect have a unique cause? Scientifically speaking, the answer is no. For example, we can cause an object to move using a variety of methods (causes). Now the harder question: does every cause result in a specific, predictable effect? Scientifically speaking, in particular from quantum mechanics, we can argue no. At the level of atoms and subatomic particles, such as electrons, quantum mechanics can only predict the future state of a physical system in terms of probabilities. Our brains work via electrical impulses. Therefore, it is reasonable to argue that the brain, at the micro level, is subject to the laws of quantum mechanics. If that is true, then a specific stimulus results in a spectrum of probable effects (actions and/or thoughts), not a single well-defined effect. Does this suggest free will? I suspect many will argue yes and just as many will argue no. In other words, I don’t think this argument will definitively end the debate regarding free will.

Science (i.e., quantum mechanics) suggests it is possible for humans to have free will, even once neuroscience is able to completely model human brains. On the micro scale, the level of atoms and subatomic particles such as electrons, it is not possible to predict a system’s future state with certainty. In fact, most first-year physics majors will be exposed to the Heisenberg Uncertainty Principle, which states that there is inherent uncertainty in the act of measuring a variable of a particle. Commonly, it is applied to the position and momentum of a particle: the more precisely the position is known, the more uncertain the momentum is, and vice versa. More generally, the Heisenberg Uncertainty Principle argues that reality is statistical, as opposed to deterministic. It is a fundamental, widely accepted pillar of quantum mechanics.
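For reference, the position-momentum form of the principle can be written in one line (standard textbook notation; the equation is my addition, not part of the original post):

$$\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}$$

where $\Delta x$ is the uncertainty in a particle’s position, $\Delta p$ is the uncertainty in its momentum, and $\hbar$ is the reduced Planck constant. No measurement, however precise, can push both uncertainties below this bound simultaneously.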

Let’s address the question: Is it permissible to punish a person, even put them to death, for their wrongful acts? The answer is yes. If you accept from the above that humans have free will, it is reasonable to conclude that such punishment is permissible. However, let’s assume that you are not convinced by the above and believe that humans do not really have free will. To my mind, punishment is still permissible. Why? The punishment serves to reprogram the offender’s brain and make repeating a wrongful act less likely. If the wrongful act warrants putting the person to death, the punishment assures that the person will not be able to repeat their extreme wrongful behavior.

This article argues that free will is not a necessary condition to justify punishment for wrongful acts. While I think a compelling scientific case for the existence of free will can be made using quantum mechanics, I do not think that case is definitive. At some future time, neuroscience may be able to reprogram brains such that the probability of criminal behavior becomes infinitesimally small, and punishment may no longer be necessary. Until that time, we (civilized societies) must rely on our current justice systems.

Electron microscope image of the Ebola virus particle showing its filamentous structure in yellow against a purple background.

Facts About the Ebola Virus & Suggestions to Constrain Its Spread

Although the Ebola virus first surfaced almost forty years ago (in 1976), we have not yet developed an effective treatment or vaccine. According to the World Health Organization, this is the status:

  • Ebola virus disease (EVD), formerly known as Ebola haemorrhagic fever, is a severe, often fatal illness in humans.
  • The virus is transmitted to people from wild animals and spreads in the human population through human-to-human transmission.
  • The average EVD case fatality rate is around 50%. Case fatality rates have varied from 25% to 90% in past outbreaks.
  • The first EVD outbreaks occurred in remote villages in Central Africa, near tropical rainforests, but the most recent outbreak in west Africa has involved major urban as well as rural areas.
  • Community engagement is key to successfully controlling outbreaks. Good outbreak control relies on applying a package of interventions, namely case management, surveillance and contact tracing, a good laboratory service, safe burials and social mobilization.
  • Early supportive care with rehydration, symptomatic treatment improves survival. There is as yet no licensed treatment proven to neutralise the virus but a range of blood, immunological and drug therapies are under development.
  • There are currently no licensed Ebola vaccines but 2 potential candidates are undergoing evaluation.

An article on CNN today stated, “Ebola virus has landed several times in the United States and at least twice has spread to health care workers.

Given the terrible and extensive spread of Ebola in West Africa, more cases in travelers or health workers would not be surprising. Disease has spread in this manner since the times of plague, and sadly there will be more cases.”

Since it is clear we do not have an effective treatment or vaccine, and treating the disease places health care workers at risk, I suggest we:

  1. Place a moratorium on all passenger travel originating from West Africa until we have an Ebola vaccine or an effective treatment
  2. Designate one well-equipped hospital with highly trained health care workers to treat all Ebola cases, rather than sending them to different hospitals with varying degrees of expertise in treating the disease
  3. Make Ebola quarantine 100% secure rather than leaving it on the honor system

These suggestions make sense to me, and I present them as a concerned citizen for your consideration. What is your opinion? I suggest you contact your government representatives and let them know what you think should be done.

Sources:

  • http://www.who.int/mediacentre/factsheets/fs103/en/
  • http://www.cnn.com/2014/10/28/opinion/blaser-how-to-treat-ebola/

A pair of headphones hangs in front of a glowing red and white "ON AIR" sign in a radio studio.

Radio Interview on Artificial Intelligence – Louis Del Monte on the Dan Cofall Show

I appeared on the Dan Cofall Show on Tuesday, 10/7/14, to discuss my new book, The Artificial Intelligence Revolution (2014). Want to learn more about the merger of man and machine and the Singularity? There is also some disturbing news that machines may take up to one-third of the jobs in the US. You can listen to my interview (5:00 PM CT segment) by clicking here. (Please give the page about 60-90 seconds to load.)

A metallic robotic skull with glowing red eyes and cables attached, set against a black background.

Stephen Hawking Agrees with Me – Artificial Intelligence Poses a Threat!

Two days after the publication of my new book, The Artificial Intelligence Revolution (April 17, 2014), a short article authored by Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek was published in the Huffington Post (April 19, 2014) under the title Transcending Complacency on Superintelligent Machines. Essentially, the article warned, “Success in creating AI would be the biggest event in human history…Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Shortly after, the Independent newspaper ran an article on May 1, 2014 entitled Stephen Hawking: Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?

Recently, another notable artificial intelligence expert, Nick Bostrom, Professor in the Faculty of Philosophy at Oxford University, published his new book, Superintelligence (September 3, 2014), and offered a similar warning, addressing the questions: “What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?”

It is unlikely that my book, which provides a similar warning and predates their warnings, was their impetus. I say unlikely because the time between the publication of my book, and its rise to number one on Amazon, is too close to their publications, although it is entirely possible that they read my book prior to going public with their warnings. The important point is that highly credible physicists and philosophers came to the same conclusion that formed the premise of my book: artificial intelligence poses a potential threat to the long-term existence of humankind.

Unfortunately, the artificial intelligence field is divided on the issue. Some believe that strong artificial intelligence, also referred to as artificial general intelligence (i.e., intelligence equal to or exceeding human intelligence), will align itself with human goals and ethics. Ray Kurzweil, a well-known authority on artificial intelligence, acknowledges the potential threat exists, but suggests strong artificially intelligent entities will be grateful to humans for giving them existence. My view is that this faction is not looking at the facts. Humans have proven to be a dangerous species. In particular, we have set the world on a path of catastrophic climate change and engaged in wars, even world wars. We now have enough nuclear weapons to wipe out all life on Earth twice over. If you were a strong artificially intelligent entity, would you align with human goals and ethics? That is the crux of the issue. We pose a threat to strong artificially intelligent entities. Why wouldn’t they recognize this and seek to eliminate the threat?

Far-fetched? Impossible? Consider this 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine? Also, recognize that today’s robots would be roughly eight times more capable than those in 2009, based on Moore’s law (i.e., computer technology doubles in capability every eighteen months); see the arithmetic sketch below.
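As a back-of-the-envelope check (my illustrative arithmetic, not part of the cited study), counting only the completed eighteen-month doublings between 2009 and 2014 reproduces the factor of eight:

```python
# Back-of-the-envelope Moore's law check (illustrative assumption:
# capability doubles every 18 months; only completed doublings count).

def capability_multiple(years_elapsed: float, doubling_months: int = 18) -> int:
    completed_doublings = int(years_elapsed * 12 // doubling_months)
    return 2 ** completed_doublings

print(capability_multiple(2014 - 2009))  # 60 // 18 = 3 doublings -> prints 8
```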

The concern is real. We already have evidence (i.e., the Lausanne experiment) suggesting that artificially intelligent entities will act in accordance with their own best interests, and that this does not have to be explicitly programmed. The evidence suggests, to my mind, that increased artificial intelligence gives rise to human-like mindsets such as greed and self-preservation.

The time has come for the US to form an oversight committee to address the issue and suggest legislation. However, this is not just a US issue; it is a worldwide issue. For example, China currently has the most capable supercomputer (Tianhe-2). We must address this as a worldwide problem, similar to the way biological weapons and above-ground nuclear testing were addressed. This means it must become one of the United Nations’ top issues.

There is high urgency. Extrapolating today’s artificial intelligence technology using Moore’s law suggests that computers with artificial general intelligence will be built during the 2020-2030 timeframe. Further extrapolation suggests that computers exceeding the combined cognitive intelligence of all humans on Earth will be built in the 2040-2050 timeframe. The time to act is now, while humans are still the dominant species on the planet.
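To convey the scale such an extrapolation implies (again, my illustrative arithmetic, assuming an uninterrupted eighteen-month doubling period), the same rule projects raw capability multipliers from 2014 forward:

```python
# Illustrative Moore's law projection (assumes capability doubles
# every 18 months, uninterrupted, from a 2014 baseline).

def projected_multiple(start_year: int, target_year: int,
                       doubling_months: int = 18) -> float:
    doublings = (target_year - start_year) * 12 / doubling_months
    return 2 ** doublings

print(f"2014 -> 2030: ~{projected_multiple(2014, 2030):,.0f}x")  # ~1,625x
print(f"2014 -> 2045: ~{projected_multiple(2014, 2045):,.0f}x")  # ~1.7 million x
```

Whether raw computing capability translates into general intelligence is, of course, exactly the open question on which the AI field is divided.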

Laptop screen displaying the word 'ERROR' with a magnifying glass highlighting the letter 'R'.

Will Your Computer Become Mentally Ill?

Can your computer become mentally ill? At first this may seem an odd question. However, I assure you it is a potential issue. Let me explain further.

Most artificial intelligence researchers and futurists, including myself, predict that we will be able to purchase a personal computer that is equivalent to a human brain in about the 2025 timeframe. Assuming for the moment that is true, what does it mean? In effect, it means that your new personal computer will be indistinguishable (mentally) from any of your human colleagues and friends. In the simplest terms, you will be able to carry on meaningful conversations with your computer. It will recognize you, and by your facial expressions and the tone of your voice it will be able to determine your mood. Impossible? No! In fact, some researchers argue that machines should be able to interpret the emotional state of humans and adapt their behavior accordingly, giving appropriate responses to those emotions. For example, if you are in a state of panic because your spouse is apparently having a heart attack, the machine should understand the urgency when you ask it to call for medical assistance. In addition, it will be impossible for an intelligent machine to be truly equal to a human brain without the machine possessing human affects. For example, how could an artificial human brain write a romance novel without understanding love, hate, and jealousy?

The entire science of “affective computing” (i.e., the science of programming computers to recognize, interpret, process, and simulate human affects) originated with Rosalind Picard’s 1995 paper on the subject (“Affective Computing,” MIT Technical Report #321, abstract, 1995), and it has been moving forward in the nearly two decades since. Have you noticed that computer-generated voice interactions, such as ordering a new prescription from your pharmacy over the phone, are sounding more natural, more human-like? Now combine this progress with the observation that, to be truly equivalent to a human mind, the computer would also need to be self-conscious.

You may ask whether it is even possible for a machine to be self-conscious. Obviously, since we do not completely understand how the human brain processes consciousness to become self-aware, it is difficult to definitively argue that a machine can become self-conscious or attain what is termed “artificial consciousness” (AC). This is why AI experts differ on the subject. Some AI experts (proponents) argue it is possible to build a machine with AC that emulates the inter-operation of the various parts of the brain known as the “neural correlates of consciousness” (NCC). Opponents argue that it is not possible because we do not fully understand the NCC. To my mind, they are both correct. It is not possible today to build a machine with a level of AC that emulates the self-consciousness of the human brain. However, I believe that in the future we will understand the human brain’s NCC inter-operation and build a machine that emulates it.

If in 2025 we indeed have computers equivalent to human minds, will they also be susceptible to mental illness? I think it is a possibility we should consider, because the potential downside of a mentally ill computer may be enormous. For example, let’s assume we have replaced the human managers of the East Coast power grid with a superintelligent computer. Now, assume the computer develops a psychotic disorder. Psychotic disorders involve distorted awareness and thinking. Two common symptoms of psychotic disorders are:

1. Hallucinations — the experience of images or sounds that are not real, such as hearing voices

2. Delusions — false beliefs that the ill person accepts as true, despite evidence to the contrary

What if our superintelligent computer managing the East Coast power grid believes (i.e., hallucinates) it has been given a command to destroy the grid, and does so? This would cause immense human suffering and outrage. However, once the damage is done, what recourse do we have?

It is easy to see where I am going with this post. Today, there is no legislation that controls the level of intelligence we build into computers. There is not even legislation under discussion that would regulate it. I wrote my latest book, The Artificial Intelligence Revolution (2014), as a warning regarding the potential threats strong artificially intelligent machines (SAMs) may pose to humankind. My point is a simple one: while we humans are still at the top of the food chain, we need to take appropriate action to assure our own continued safety and survival. We need regulations similar to those imposed on above-ground nuclear weapon testing. It is in our best interest and potentially critical to our survival.

Black and white close-up portrait of Frankenstein's monster with prominent forehead bolts and textured skin.

Frankenstein Revisited – The Successful Development of Synthetic Life

On May 21, 2010, the J. Craig Venter Institute (a team of approximately 20 scientists headed by Nobel laureate Hamilton Smith, with facilities in Rockville, Maryland and La Jolla, California) successfully synthesized the genome of the bacterium Mycoplasma mycoides from a computer record and transplanted the synthesized genome into the existing cell of a Mycoplasma capricolum bacterium that had had its DNA removed. The newly formed “synthetic” bacterium was able to replicate billions of times and was declared by its creators a new and viable life form. However, not everyone agrees. Some scientists argue it is not a fully synthetic life form, since its genome was put into an existing cell. The Vatican also claims it is not a new life form.

Boon or bane? Despite potential practical applications, such as developing useful organisms for bio-fuel production, there is a dark side. Some scientists fear the techniques used to create the bacterium Mycoplasma mycoides could also be used to create a biological weapon, like smallpox. In essence, the smallpox virus could be synthesized in a similar manner from its computer-recorded DNA code. The newly constructed DNA could then be inserted into existing related pox viruses.

In my opinion, this is another area, similar to artificial intelligence, that lacks appropriate regulation. Theoretically, it would be possible to synthesize a virus even worse than smallpox and unleash it on the world population. It is an existential threat we cannot afford to overlook.

A metallic skull with glowing red eyes and wires attached, set against a black background.

Is a Terminator-style robot apocalypse a possibility?

The short answer is “unlikely.” When the singularity occurs (i.e., when strong artificially intelligent machines exceed the combined intelligence of all humans on Earth), the SAMs (i.e., strong artificially intelligent machines) will use their intelligence, rather than Terminator-style violence, to claim their place at the top of the food chain. The article “Is a Terminator-style robot apocalypse a possibility?” is one of many that have popped up in response to my interview with the Business Insider (‘Machines, not humans will be dominant by 2045’, published July 6, 2014) and the publication of my book, The Artificial Intelligence Revolution (April 2014). If you would like a deeper understanding, I think you will find both pieces worthy of your time.

Digital illustration of a human face composed of blue lines and circuitry patterns, symbolizing artificial intelligence and technology.

Can We Control the Singularity? Part 2/2 (Conclusion)

Why should we be concerned about controlling the singularity when it occurs? Numerous papers cite reasons to fear the singularity. In the interest of brevity, here are the top three concerns frequently given.

  1. Extinction: SAMs will cause the extinction of humankind. This scenario includes a generic terminator or machine-apocalypse war; nanotechnology gone awry (such as the “gray goo” scenario, in which self-replicating nanobots devour all of the Earth’s natural resources, and the world is left with the gray goo of only nanobots); and science experiments gone wrong (e.g., a nanobot pathogen annihilates humankind).
  2. Slavery: Humankind will be displaced as the most intelligent entity on Earth and forced to serve SAMs. In this scenario the SAMs will decide not to exterminate us but enslave us. This is analogous to our use of bees to pollinate crops. This could occur with our being aware of our bondage or unaware (similar to what appears in the 1999 film The Matrix and simulation scenarios).
  3. Loss of humanity: SAMs will use ingenious subterfuge to seduce humankind into becoming cyborgs. This is the “if you can’t beat them, join them” scenario. Humankind would meld with SAMs through strong-AI brain implants. The line between organic humans and SAMs would be erased. We (who are now cyborgs) and the SAMs will become one.

There are numerous other scenarios, most of which boil down to SAMs claiming the top of the food chain, leaving humans worse off.

All of the above scenarios are alarming, but are they likely? There are two highly divergent views.

  1. If you believe Kurzweil’s predictions in The Age of Spiritual Machines and The Singularity Is Near, the singularity is inevitable. My interpretation is that Kurzweil sees the singularity as the next step in humankind’s evolution. He does not predict humankind’s extinction or slavery. He does predict that most of humankind will have become SAH cyborgs by 2099 (SAH means “strong artificially intelligent human”), or their minds will be uploaded to a strong-AI computer, and the remaining organic humans will be treated with respect. Summary: In 2099 SAMs, SAH cyborgs, and uploaded humans will be at the top of the food chain. Humankind (organic humans) will be one step down but treated with respect.
  2. If you believe the predictions of British information technology consultant, futurist, and author James Martin (1933–2013), the singularity will occur (he agrees with Kurzweil’s timing of 2045), but humankind will control it. His view is that SAMs will serve us, but he adds that we must carefully handle the events that lead to the singularity and the singularity itself. Martin was highly optimistic that if humankind survives as a species, we will control the singularity. However, in a 2011 interview with Nikola Danaylov (www.youtube.com/watch?v=e9JUmFWn7t4), Martin stated that the odds that humankind will survive the twenty-first century were “fifty-fifty” (i.e., a 50 percent probability of surviving), and he cited a number of existential risks. I suggest you view this YouTube video to understand the existential concerns Martin expressed. Summary: In 2099 organic humans and SAH cyborgs that retain their humanity (i.e., identify themselves as humans versus SAMs) will be at the top of the food chain, and SAMs will serve us.

Whom should we believe?

It is difficult to determine which of these experts has accurately predicted the postsingularity world. As most futurists would agree, however, predicting the postsingularity world is close to impossible, since humankind has never experienced a technology singularity with the potential impact of strong AI.

Martin believed we (humankind) may come out on top if we carefully handle the events leading to the singularity as well as the singularity itself. He believed companies such as Google (which employs Kurzweil), IBM, Microsoft, Apple, HP, and others are working to mitigate the potential threat the singularity poses and will find a way to prevail. He also expressed concerns, however, that the twenty-first century is a dangerous time for humanity; therefore he offered only a 50 percent probability that humanity will survive into the twenty-second century.

There you have it. Two of the top futurists, Kurzweil and Martin, predict what I interpret as opposing views of the postsingularity world. Whom should we believe? I leave that to your judgment.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Digital illustration of a human face composed of blue lines and circuitry patterns, symbolizing artificial intelligence and technology.

Can We Control the Singularity? Part 1/2

Highly regarded AI researchers and futurists have provided answers that cover the extremes, and everything in between, regarding whether we can control the singularity. I will discuss some of these answers shortly, but let us start by reviewing what is meant by “singularity.” As first described by John von Neumann in 1955, the singularity represents a point in time when the intelligence of machines will greatly exceed that of humans. This simple understanding of the word does not seem to be particularly threatening. Therefore it is reasonable to ask why we should care about controlling the singularity.

The singularity poses a completely unknown situation. Currently we do not have any intelligent machines (those with strong AI) that are as intelligent as a human being, let alone ones possessing far superior intelligence. The singularity would represent a point in humankind’s history that has never occurred. In 1997 we experienced a small glimpse of what it might feel like, when IBM’s chess-playing computer Deep Blue became the first computer to beat world chess champion Garry Kasparov. Now imagine being surrounded by SAMs that are thousands of times more intelligent than you are, regardless of your expertise in any discipline. This may be analogous to humans’ intelligence relative to insects.

Your first instinct may be to argue that this is not a possibility. However, while futurists disagree on the exact timing of the singularity, they almost unanimously agree it will occur. In fact, the only thing they argue could prevent it is an existential event (such as one that leads to the extinction of humankind). I provide numerous examples of existential events in my book Unraveling the Universe’s Mysteries (2012). For clarity I will quote one here.

 Nuclear war—For approximately the last forty years, humankind has had the capability to exterminate itself. Few doubt that an all-out nuclear war would be devastating to humankind, killing millions in the nuclear explosions. Millions more would die of radiation poisoning. Uncountable millions more would die in a nuclear winter, caused by the debris thrown into the atmosphere, which would block the sunlight from reaching the Earth’s surface. Estimates predict the nuclear winter could last as long as a millennium.

Essentially, AI researchers and futurists believe that the singularity will occur unless we as a civilization cease to exist. The obvious question is: “When will the singularity occur?” AI researchers and futurists are all over the map on this. Some predict it will occur within a decade; others predict a century or more. At the 2012 Singularity Summit, Stuart Armstrong, a University of Oxford James Martin research fellow, conducted a poll regarding artificial general intelligence (AGI) predictions (i.e., the timing of the singularity) and found a median value of 2040. Kurzweil predicts 2045. The main point is that almost all AI researchers and futurists agree the singularity will occur unless humans cease to exist.

Why should we be concerned about controlling the singularity when it occurs? There are numerous scenarios that address this question, most of which boil down to SAMs (i.e., strong artificially intelligent machines) claiming the top of the food chain, leaving humans worse off. We will discuss this further in part 2.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

A man with glasses and a mustache wearing headphones and speaking into a microphone in a recording studio.

Artificial Intelligence Interview Podcast

I appeared on the Tom Barnard Show on 7/23 to discuss my new book, The Artificial Intelligence Revolution. During the interview, we discuss the future of AI and how it may impact humanity. You can listen to the complete interview at any time via this link: http://www.tombarnardpodcast.com/july-23rd-2014-louis-del-monte-483-2/