Category Archives: Threats to Humankind

A man in a suit holding a briefcase standing at a fork in the road, facing two diverging paths.

Science Versus Free Will

Neuroscience is revealing more and more about the true workings of the mind. It is reasonable to believe that eventually we will be able to completely model how the brain works and predict what actions a specific brain will take in response to specific stimuli. What does this say about free will? In other words, are our thoughts and actions simply the output of a specifically programmed biological computer, our brain?

Our entire justice system presupposes free will, that a person committing a crime did so willfully (assuming they are sane, not mentally ill). In fact, the Merriam‑Webster dictionary defines free will as:

1. The ability to choose how to act

2. The ability to make choices that are not controlled by fate or God

However, if neuroscience is eventually able to model a specific brain and predict with certainty the actions that brain will take given specific stimuli, was the person committing a crime doing so willfully? If we as humans do not have free will, is it permissible to punish a person, even put them to death, for their wrongful acts? Many scientists and philosophers are struggling with this question.

Let us, for this article, put aside religious beliefs and attempt to approach a scientific answer. First, let us address causality. Does every effect have a unique cause? Scientifically speaking, the answer is no. For example, we can cause an object to move using a variety of methods (causes). Now the harder question: does every cause result in a specific, predictable effect? Scientifically speaking, in particular from quantum mechanics, we can argue no. At the level of atoms and subatomic particles, like electrons, quantum mechanics can only predict the future state of a physical system in terms of probabilities. Our brain works via electrical impulses. Therefore, it is reasonable to argue that our brain, at the micro level, is subject to the laws of quantum mechanics. If that is true, then a specific stimulus results in a spectrum of probable effects (actions and/or thoughts), not a single well-defined effect. Does this suggest free will? I suspect as many may argue yes as argue no. In other words, I don’t think this argument will definitively end the debate regarding free will.

Science (i.e., quantum mechanics) suggests it is possible for humans to have free will, even when neuroscience is able to completely model human brains. On the micro scale, the level of atoms and subatomic particles like electrons, it is not possible to predict a system’s future state with certainty. In fact, most first-year physics majors are exposed to the Heisenberg Uncertainty Principle, which states that there is inherent uncertainty in the act of measuring a variable of a particle. Commonly, it is applied to the position and momentum of a particle: the more precisely the position is known, the more uncertain the momentum is, and vice versa. More generally, the Heisenberg Uncertainty Principle argues that reality is statistically based, as opposed to deterministically based. It is a fundamental, widely accepted pillar of quantum mechanics.
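For reference, the position–momentum form of the uncertainty principle described above is commonly written as:

```latex
\Delta x \, \Delta p \ge \frac{\hbar}{2}
```

Here \(\Delta x\) and \(\Delta p\) are the standard deviations of position and momentum measurements, and \(\hbar\) is the reduced Planck constant. Because \(\hbar\) is nonzero, the product of the two uncertainties can never be driven to zero: shrinking one necessarily inflates the other.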

Let’s address the question: Is it permissible to punish a person, even put them to death, for their wrongful acts? The answer is yes. If you accept from the above that humans have free will, it is reasonable to conclude that such punishment is permissible. However, let’s assume that you are not convinced by the above and believe that humans do not really have free will. To my mind, punishment is still permissible. Why? The punishment serves to reprogram the offender’s brain and make repeating a wrongful act less likely. If the wrongful act warrants putting the person to death, the punishment ensures that the person will not be able to repeat their extreme wrongful behavior.

This article argues that free will is not a necessary condition to justify punishment for wrongful acts. While I think a compelling case for the existence of free will can be made scientifically using quantum mechanics, I do not think it makes a definitive case. At some future time, neuroscience may be able to reprogram brains, such that the probability of criminal behavior becomes infinitesimally small, and punishment may not be necessary. Until that time, we (civilized societies) must rely on our current justice systems.

Electron microscope image of the Ebola virus particle showing its filamentous structure in yellow against a purple background.

Facts About the Ebola Virus & Suggestions to Constrain Its Spread

Although the Ebola virus first surfaced almost forty years ago (in 1976), we still have not developed an effective treatment or vaccine. According to the World Health Organization, this is the status:

  • Ebola virus disease (EVD), formerly known as Ebola haemorrhagic fever, is a severe, often fatal illness in humans.
  • The virus is transmitted to people from wild animals and spreads in the human population through human-to-human transmission.
  • The average EVD case fatality rate is around 50%. Case fatality rates have varied from 25% to 90% in past outbreaks.
  • The first EVD outbreaks occurred in remote villages in Central Africa, near tropical rainforests, but the most recent outbreak in west Africa has involved major urban as well as rural areas.
  • Community engagement is key to successfully controlling outbreaks. Good outbreak control relies on applying a package of interventions, namely case management, surveillance and contact tracing, a good laboratory service, safe burials and social mobilization.
  • Early supportive care with rehydration, symptomatic treatment improves survival. There is as yet no licensed treatment proven to neutralise the virus but a range of blood, immunological and drug therapies are under development.
  • There are currently no licensed Ebola vaccines but 2 potential candidates are undergoing evaluation.

An article in CNN today stated, “Ebola virus has landed several times in the United States and at least twice has spread to health care workers.

Given the terrible and extensive spread of Ebola in West Africa, more cases in travelers or health workers would not be surprising. Disease has spread in this manner since the times of plague, and sadly there will be more cases.”

Since it is clear we do not have an effective treatment or vaccine, and treating the disease places health care workers at risk, I suggest we:

  1. Place a moratorium on all passenger travel originating from West Africa until we have an Ebola vaccine or effective treatment
  2. Designate one well-equipped hospital with highly trained health care workers to treat all Ebola cases, rather than sending them to different hospitals with varying degrees of expertise in treating the disease
  3. Make Ebola quarantine 100% secure versus leaving it on the honor system

These suggestions make sense to me, and I present them as a concerned citizen for your consideration. What is your opinion? I suggest you contact your government representatives and let them know what you think should be done.

Sources:

  • https://www.who.int/mediacentre/factsheets/fs103/en/
  • https://www.cnn.com/2014/10/28/opinion/blaser-how-to-treat-ebola/

A pair of headphones hangs in front of a glowing red and white "ON AIR" sign in a radio studio.

Radio Interview on Artificial Intelligence – Louis Del Monte on the Dan Cofall Show

I appeared on the Dan Cofall Show Tuesday 10/7/14 to discuss my new book, The Artificial Intelligence Revolution (2014). Want to learn more about the merger of man and machine and the Singularity? There is also some disturbing news that machines may take up to 1/3 of jobs in the US. You can listen to my interview (5:00 PM CT segment) by clicking here. (Please give the page about 60-90 seconds to load.)

A metallic robotic skull with glowing red eyes and cables attached, set against a black background.

Stephen Hawking Agrees with Me – Artificial Intelligence Poses a Threat!

Two days after the publication of my new book, The Artificial Intelligence Revolution (April 17, 2014), a short article authored by Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek was published in the Huffington Post (April 19, 2014) under the title Transcending Complacency on Superintelligent Machines. Essentially the article warned, “Success in creating AI would be the biggest event in human history…Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Shortly following the above publication, the Independent newspaper on May 1, 2014 ran an article entitled Stephen Hawking: Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?

Recently, another notable artificial intelligence expert, Nick Bostrom, Professor in the Faculty of Philosophy at Oxford University, published his new book, Superintelligence (September 3, 2014), and offered a similar warning, addressing the questions: “What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?”

It is unlikely that my book, which provides a similar warning and predates their warnings, was their impetus. I say unlikely because the time between the publication of my book and its rise to number one on Amazon is too close to their publications, although it is entirely possible that they read my book prior to going public with their warnings. The important point is that highly credible physicists and philosophers came to the same conclusion that formed the premise of my book: artificial intelligence poses a potential threat to the long-term existence of humankind.

Unfortunately, the artificial intelligence field is divided on the issue. Some believe that strong artificial intelligence, also referred to as artificial general intelligence (i.e., intelligence equal to or exceeding human intelligence), will align itself with human goals and ethics. Ray Kurzweil, a well-known authority on artificial intelligence, acknowledges the potential threat exists but suggests strong artificially intelligent entities will be grateful to humans for giving them existence. My view is that this faction is not looking at the facts. Humans have proven to be a dangerous species. In particular, we have set the world on a path of catastrophic climate change and engaged in wars, even world wars. We now have enough nuclear weapons to wipe out all life on Earth twice over. If you were a strong artificially intelligent entity, would you align with human goals and ethics? That is the crux of the issue. We pose a threat to strong artificially intelligent entities. Why wouldn’t they recognize this and seek to eliminate the threat?

Far-fetched? Impossible? Consider a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine? Also, recognize that today’s robots would be eight times more intelligent than those in 2009, based on Moore’s law (i.e., computer technology doubles in capability every eighteen months).
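The “eight times” figure above is simple doubling arithmetic, sketched below in Python. The 18-month doubling period is the article’s stated assumption (Moore’s law is an empirical trend, not a physical law), and roughly 4.5 years separate the 2009 experiment from this 2014 post:

```python
def capability_multiplier(years: float, doubling_period: float = 1.5) -> float:
    """Growth factor when capability doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# ~4.5 years elapsed between 2009 and this post: three doublings, i.e. 8x.
print(capability_multiplier(4.5))  # 8.0
```

The same function reproduces the extrapolations later in this post: a few more decades of doublings yields the enormous multipliers behind the 2020-2030 and 2040-2050 projections.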

The concern is real. We already have evidence (i.e., the Lausanne experiment) suggesting that artificially intelligent entities will act in accordance with their own best interests, and that this behavior does not have to be explicitly programmed. The evidence suggests, to my mind, that increased artificial intelligence gives rise to human-like mindsets such as greed and self-preservation.

The time has come for the US to form an oversight committee to address the issue and suggest legislation. However, this is not just a US issue; it is a worldwide issue. For example, China currently has the most capable supercomputer (Tianhe-2). We must address this as a worldwide problem, similar to the way biological weapons and above-ground nuclear testing were addressed. This means it must become one of the United Nations’ top issues.

The urgency is high. Extrapolating today’s artificial intelligence technology using Moore’s law suggests that computers with artificial general intelligence will be built during the 2020-2030 time frame. Further extrapolation suggests computers that exceed the combined cognitive intelligence of all humans on Earth will be built in the 2040-2050 time frame. The time to act is now, while humans are still the dominant species on the planet.

Laptop screen displaying the word 'ERROR' with a magnifying glass highlighting the letter 'R'.

Will Your Computer Become Mentally Ill?

Can your computer become mentally ill? At first this may seem an odd question. However, I assure you it is a potential issue. Let me explain further.

Most artificial intelligence researchers and futurists, including myself, predict that we will be able to purchase a personal computer equivalent to a human brain in about the 2025 time frame. Assuming for the moment that is true, what does it mean? In effect, it means that your new personal computer will be indistinguishable (mentally) from any of your human colleagues and friends. In the simplest terms, you will be able to carry on meaningful conversations with your computer. It will recognize you, and from your facial expressions and the tone of your voice it will be able to determine your mood. Impossible? No! In fact, some researchers argue that machines should be able to interpret the emotional state of humans and adapt their behavior accordingly, giving appropriate responses to those emotions. For example, if you are in a state of panic because your spouse is apparently having a heart attack, the machine should understand the urgency when you ask it to call for medical assistance. In addition, it will be impossible for an intelligent machine to be truly equal to a human brain without the machine possessing human affects. For example, how could an artificial human brain write a romance novel without understanding love, hate, and jealousy?

The entire science of “affective computing” (i.e., the science of programming computers to recognize, interpret, process, and simulate human affects) originated with Rosalind Picard’s 1995 paper on affective computing (“Affective Computing,” MIT Technical Report #321, abstract, 1995), and it has been moving forward ever since. Have you noticed that computer-generated voice interactions, such as ordering a new prescription from your pharmacy over the phone, are sounding more natural, more human-like? Now combine this progress with the idea that, to be truly equivalent to a human mind, a computer would also need to be self-conscious.

You may ask whether it is even possible for a machine to be self-conscious. Obviously, since we do not completely understand how the human brain processes consciousness to become self-aware, it is difficult to definitively argue that a machine can become self-conscious or attain what is termed “artificial consciousness” (AC). This is why AI experts differ on this subject. Some AI experts (proponents) argue it is possible to build a machine with AC that emulates the interoperation of various parts of the brain called the “neural correlates of consciousness” (NCC). Opponents argue that it is not possible because we do not fully understand the NCC. To my mind, they are both correct. It is not possible today to build a machine with a level of AC that emulates the self-consciousness of the human brain. However, I believe that in the future we will understand the human brain’s NCC interoperation and build a machine that emulates it.

If in 2025 we indeed have computers equivalent to human minds, will they also be susceptible to mental illness? I think it is a possibility we should consider, because the potential downside of a mentally ill computer may be enormous. For example, let’s assume we have replaced the human managers of the East Coast power grid with a super intelligent computer. Now, assume the computer develops a psychotic disorder. Psychotic disorders involve distorted awareness and thinking. Two common symptoms of psychotic disorders are:

1. Hallucinations — the experience of images or sounds that are not real, such as hearing voices

2. Delusions — false beliefs that the ill person accepts as true, despite evidence to the contrary

What if our super intelligent computer managing the East Coast power grid believes (i.e., hallucinates) it has been given a command to destroy the grid and does so? This would cause immense human suffering and outrage. However, once the damage is done, what recourse do we have?

It is easy to see where I am going with this post. Today, there is no legislation controlling the level of intelligence we build into computers, nor is any such legislation even under discussion. I wrote my latest book, The Artificial Intelligence Revolution (2014), as a warning regarding the potential threats strong artificially intelligent machines (SAMs) may pose to humankind. My point is a simple one. While we humans are still at the top of the food chain, we need to take appropriate action to assure our own continued safety and survival. We need regulations similar to those imposed on above-ground nuclear weapon testing. It is in our best interest and potentially critical to our survival.