

Will Your Computer Become Mentally Ill?

Can your computer become mentally ill? At first this may seem an odd question, but I assure you it is a potential issue. Let me explain.

Most artificial intelligence researchers and futurists, including myself, predict that we will be able to purchase a personal computer equivalent to a human brain in about the 2025 time frame. Assuming for the moment that this is true, what does it mean? In effect, it means that your new personal computer will be mentally indistinguishable from any of your human colleagues and friends. In the simplest terms, you will be able to carry on meaningful conversations with your computer. It will recognize you, and from your facial expressions and the tone of your voice it will be able to determine your mood. Impossible? No. In fact, some researchers argue that machines should be able to interpret the emotional state of humans and adapt their behavior accordingly, giving appropriate responses to those emotions. For example, if you are in a state of panic because your spouse is apparently having a heart attack, the machine should understand the urgency when you ask it to call for medical assistance. In addition, it will be impossible for an intelligent machine to be truly equal to a human brain without possessing human affects. For example, how could an artificial human brain write a romance novel without understanding love, hate, and jealousy?

The entire science of “affective computing” (i.e., the science of programming computers to recognize, interpret, process, and simulate human affects) originated with Rosalind Picard’s 1995 paper on the subject (“Affective Computing,” MIT Technical Report #321, 1995), and it has been moving forward ever since. Have you noticed that computer-generated voice interactions, such as ordering a prescription refill from your pharmacy by phone, are sounding more natural, more human-like? Now combine this progress with the recognition that, to be truly equivalent to a human mind, a computer would also need to be self-conscious.
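To make the idea of affective computing a little more concrete, here is a deliberately toy sketch: a keyword-based classifier that maps an utterance to a coarse emotional state and adapts the machine's response accordingly. Everything here is hypothetical and illustrative; real affective-computing systems use trained models over voice prosody, facial expressions, and physiological signals, not keyword lists.

```python
# Toy illustration of affective computing: infer a coarse emotional
# state from keywords in an utterance, then adapt the response.
# The emotion labels and keyword lists below are made up for this sketch.

EMOTION_KEYWORDS = {
    "panic": {"heart attack", "emergency", "help", "can't breathe"},
    "anger": {"furious", "outraged", "unacceptable"},
    "calm": {"thanks", "please", "whenever"},
}

def infer_emotion(utterance: str) -> str:
    """Return the first emotion whose keywords appear in the utterance."""
    text = utterance.lower()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return emotion
    return "neutral"

def respond(utterance: str) -> str:
    """Adapt the machine's response to the inferred emotional state."""
    emotion = infer_emotion(utterance)
    if emotion == "panic":
        return "Calling emergency services now."
    if emotion == "anger":
        return "I understand your frustration. Let me escalate this."
    return "How can I help you today?"

print(respond("My husband is having a heart attack, call for help!"))
# → Calling emergency services now.
```

The point of the sketch is the adaptation step: the same request ("call for help") produces a different behavior depending on the emotional state the machine infers, which is exactly what the pharmacy and heart-attack examples above describe.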

You may ask whether it is even possible for a machine to be self-conscious. Obviously, since we do not completely understand how the human brain processes consciousness to become self-aware, it is difficult to argue definitively that a machine can become self-conscious or obtain what is termed “artificial consciousness” (AC). This is why AI experts differ on the subject. Some AI experts (proponents) argue it is possible to build a machine with AC that emulates the inter-operation of various parts of the brain, called the “neural correlates of consciousness” (NCC). Opponents argue that it is not possible because we do not fully understand the NCC. To my mind, both are correct. It is not possible today to build a machine with a level of AC that emulates the self-consciousness of the human brain. However, I believe that in the future we will understand the inter-operation of the human brain’s NCC and build a machine that emulates it.

If in 2025 we indeed have computers equivalent to human minds, will they also be susceptible to mental illness? I think it is a possibility we should consider, because the potential downside of a mentally ill computer could be enormous. For example, assume we have replaced the human managers of the East Coast power grid with a super intelligent computer, and that the computer then develops a psychotic disorder. Psychotic disorders involve distorted awareness and thinking. Two common symptoms of psychotic disorders are:

1. Hallucinations — the experience of images or sounds that are not real, such as hearing voices

2. Delusions — false beliefs that the ill person accepts as true, despite evidence to the contrary

What if our super intelligent computer managing the East Coast power grid believes (i.e., hallucinates) that it has been given a command to destroy the grid, and does so? This would cause immense human suffering and outrage. But once the damage is done, what recourse do we have?

It is easy to see where I am going with this post. Today, no legislation controls the level of intelligence we build into computers, and none is even under discussion. I wrote my latest book, The Artificial Intelligence Revolution (2014), as a warning regarding the potential threats strong artificially intelligent machines (SAMs) may pose to humankind. My point is a simple one: while we humans are still at the top of the food chain, we need to take appropriate action to assure our continued safety and survival. We need regulations similar to those imposed on above-ground nuclear weapon testing. It is in our best interest and potentially critical to our survival.


Frankenstein Revisited – The Successful Development of Synthetic Life

On May 21, 2010, the J. Craig Venter Institute (a team of approximately 20 scientists headed by Nobel laureate Hamilton Smith, with facilities in Rockville, Maryland and La Jolla, California) successfully synthesized the genome of the bacterium Mycoplasma mycoides from a computer record, and transplanted the synthesized genome into the existing cell of a Mycoplasma capricolum bacterium that had had its DNA removed. The newly formed “synthetic” bacterium was able to replicate billions of times and was declared by its creators a new and viable life form. However, not everyone agrees. Some scientists argue it is not a fully synthetic life form, since its genome was put into an existing cell. The Vatican also claims it is not a new life form.

Boon or bane? Despite potential practical applications, such as engineering useful organisms for bio-fuel production, there is a dark side. Some scientists fear the techniques used to create the bacterium Mycoplasma mycoides could also be used to create a biological weapon, such as smallpox. In essence, the smallpox virus could be synthesized in a similar manner from its computer-generated DNA code. The newly constructed DNA could then be inserted into existing related pox viruses.

In my opinion, this is another area, similar to artificial intelligence, that lacks appropriate regulation. Theoretically, it would be possible to synthesize a virus even worse than smallpox and unleash it on the world population. It is an existential threat we cannot afford to overlook.


Is a Terminator-style robot apocalypse a possibility?

The short answer is “unlikely.” When the singularity occurs (i.e., when strong artificially intelligent machines, or SAMs, exceed the combined intelligence of all humans on Earth), the SAMs will not need a Terminator-style war; they will simply use their superior intelligence to claim their place at the top of the food chain. The article “Is a Terminator-style robot apocalypse a possibility?” is one of many that have popped up in response to my interview with Business Insider (“Machines, not humans will be dominant by 2045,” published July 6, 2014) and the publication of my book, The Artificial Intelligence Revolution (April 2014). If you would like a deeper understanding, I think you will find both the article and the interview worthy of your time.


Can We Control the Singularity? Part 2/2 (Conclusion)

Why should we be concerned about controlling the singularity when it occurs? Numerous papers cite reasons to fear the singularity. In the interest of brevity, here are the three most frequently cited concerns.

  1. Extinction: SAMs will cause the extinction of humankind. This scenario includes a generic terminator or machine-apocalypse war; nanotechnology gone awry (such as the “gray goo” scenario, in which self-replicating nanobots devour all of the Earth’s natural resources, and the world is left with the gray goo of only nanobots); and science experiments gone wrong (e.g., a nanobot pathogen annihilates humankind).
  2. Slavery: Humankind will be displaced as the most intelligent entity on Earth and forced to serve SAMs. In this scenario the SAMs will decide not to exterminate us but to enslave us. This is analogous to our use of bees to pollinate crops. We could be aware of our bondage or unaware of it (similar to the simulated world depicted in the 1999 film The Matrix and other simulation scenarios).
  3. Loss of humanity: SAMs will use ingenious subterfuge to seduce humankind into becoming cyborgs. This is the “if you can’t beat them, join them” scenario. Humankind would meld with SAMs through strong-AI brain implants. The line between organic humans and SAMs would be erased. We (who are now cyborgs) and the SAMs will become one.

There are numerous other scenarios, most of which boil down to SAMs claiming the top of the food chain, leaving humans worse off.

All of the above scenarios are alarming, but are they likely? There are two highly divergent views.

  1. If you believe Kurzweil’s predictions in The Age of Spiritual Machines and The Singularity Is Near, the singularity is inevitable. My interpretation is that Kurzweil sees the singularity as the next step in humankind’s evolution. He does not predict humankind’s extinction or slavery. He does predict that most of humankind will have become SAH cyborgs by 2099 (SAH means “strong artificially intelligent human”), or their minds will be uploaded to a strong-AI computer, and the remaining organic humans will be treated with respect. Summary: In 2099 SAMs, SAH cyborgs, and uploaded humans will be at the top of the food chain. Humankind (organic humans) will be one step down but treated with respect.
  2. If you believe the predictions of British information technology consultant, futurist, and author James Martin (1933–2013), the singularity will occur (he agrees with Kurzweil’s timing of 2045), but humankind will control it. His view is that SAMs will serve us, but he adds that we must carefully handle the events that lead to the singularity and the singularity itself. Martin was highly optimistic that if humankind survives as a species, we will control the singularity. However, in a 2011 interview with Nikola Danaylov (www.youtube.com/watch?v=e9JUmFWn7t4), Martin stated that the odds that humankind will survive the twenty-first century were “fifty-fifty” (i.e., a 50 percent probability of surviving), and he cited a number of existential risks. I suggest you view this YouTube video to understand the existential concerns Martin expressed. Summary: In 2099 organic humans and SAH cyborgs that retain their humanity (i.e., identify themselves as humans versus SAMs) will be at the top of the food chain, and SAMs will serve us.

Whom should we believe?

It is difficult to determine which of these experts has accurately predicted the postsingularity world. As most futurists would agree, however, predicting the postsingularity world is close to impossible, since humankind has never experienced a technological singularity with the potential impact of strong AI.

Martin believed we (humankind) may come out on top if we carefully handle the events leading to the singularity as well as the singularity itself. He believed companies such as Google (which employs Kurzweil), IBM, Microsoft, Apple, HP, and others are working to mitigate the potential threat the singularity poses and will find a way to prevail. He also expressed concerns, however, that the twenty-first century is a dangerous time for humanity; therefore he offered only a 50 percent probability that humanity will survive into the twenty-second century.

There you have it. Two of the top futurists, Kurzweil and Martin, predict what I interpret as opposing views of the postsingularity world. Whom should we believe? I leave that to your judgment.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Can We Control the Singularity? Part 1/2

Highly regarded AI researchers and futurists have provided answers that cover the extremes, and everything in between, regarding whether we can control the singularity. I will discuss some of these answers shortly, but let us start by reviewing what is meant by “singularity.” As first described by John von Neumann in 1955, the singularity represents a point in time when the intelligence of machines will greatly exceed that of humans. This simple understanding of the word does not seem to be particularly threatening. Therefore it is reasonable to ask why we should care about controlling the singularity.

The singularity poses a completely unknown situation. Currently we do not have any intelligent machines (those with strong AI) that are as intelligent as a human being, let alone ones possessing intelligence far superior to ours. The singularity would represent a point in humankind’s history that has never occurred. In 1997 we experienced a small glimpse of what it might feel like, when IBM’s chess-playing computer Deep Blue became the first computer to beat world-class chess champion Garry Kasparov. Now imagine being surrounded by SAMs that are thousands of times more intelligent than you are, regardless of your expertise in any discipline. This may be analogous to our intelligence relative to that of insects.

Your first instinct may be to argue that this is not a possibility. However, while futurists disagree on the exact timing of the singularity, they almost unanimously agree it will occur. In fact, the only thing they argue could prevent it is an existential event (i.e., an event that leads to the extinction of humankind). I provide numerous examples of existential events in my book Unraveling the Universe’s Mysteries (2012). For clarity I will quote one here.

 Nuclear war—For approximately the last forty years, humankind has had the capability to exterminate itself. Few doubt that an all-out nuclear war would be devastating to humankind, killing millions in the nuclear explosions. Millions more would die of radiation poisoning. Uncountable millions more would die in a nuclear winter, caused by the debris thrown into the atmosphere, which would block the sunlight from reaching the Earth’s surface. Estimates predict the nuclear winter could last as long as a millennium.

Essentially, AI researchers and futurists believe that the singularity will occur unless we as a civilization cease to exist. The obvious question is: when will it occur? Here AI researchers and futurists are all over the map. Some predict it will occur within a decade; others predict a century or more. At the 2012 Singularity Summit, Stuart Armstrong, a University of Oxford James Martin research fellow, conducted a poll regarding artificial general intelligence (AGI) predictions (i.e., the timing of the singularity) and found a median value of 2040. Kurzweil predicts 2045.
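A median, as reported from Armstrong's poll, is simply the middle value of the sorted predictions, which makes it robust to the enormous spread between "within a decade" and "a century or more." As a trivial illustration with made-up numbers (not Armstrong's actual poll data):

```python
from statistics import median

# Hypothetical AGI-timing predictions in years; NOT the real poll data.
predictions = [2030, 2035, 2040, 2045, 2060, 2100, 2025]

# One extreme outlier (2100) barely moves the median, unlike the mean.
print(median(predictions))  # → 2040 for this made-up sample
```

This is why a median is a sensible way to summarize expert predictions that vary by decades.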

Why should we be concerned about controlling the singularity when it occurs? There are numerous scenarios that address this question, most of which boil down to SAMs (i.e., strong artificially intelligent machines) claiming the top of the food chain, leaving humans worse off. We will discuss this further in part 2.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte