

Will Strong Artificially Intelligent Machines Become Self-Conscious? Part 2/2 (Conclusion)

Part 1 of this post ended with an important question: “How can we determine whether an intelligent machine has become conscious (self-aware)?” We will address this question in this post, along with some ethical dilemmas.

We do not yet have a way to determine whether even another human is self-aware. I know only that I am self-aware. Since we share the same physiology, including similar human brains, I assume you are probably self-aware as well. However, even if we discuss various topics and I conclude that your intelligence is equal to mine, I still cannot prove you are self-aware. Only you know whether you are self-aware.

The problem becomes even more difficult when dealing with an intelligent machine. The gold standard for judging whether an intelligent machine equals the human mind is the Turing test, which I discuss in chapter 5 of my book, The Artificial Intelligence Revolution. (If you are not familiar with the Turing test, a simple Google search will provide numerous sources to learn about it.) As of today, no intelligent machine can pass the Turing test unless its interactions are restricted to a specific topic, such as chess. However, even if an intelligent machine does pass the Turing test and exhibits strong AI, how can we be sure it is self-aware? Intelligence may be a necessary condition for self-awareness, but it may not be sufficient. The machine may emulate consciousness so well that we conclude it must be self-aware, but that is not proof.

Even though other tests, such as the ConsScale test, have been proposed to determine machine consciousness, we still come up short. The ConsScale test evaluates the presence of features inspired by biological systems, such as social behavior, and measures the cognitive development of an intelligent machine, on the assumption that intelligence and consciousness are strongly related. The community of AI researchers, however, does not universally accept the ConsScale test as proof of consciousness. In the final analysis, I believe most AI researchers agree on only two points:

  1. There is no widely accepted empirical definition of consciousness (self-awareness).
  2. A test to determine the presence of consciousness (self-awareness) may be impossible, even if the subject being tested is a human being.

The above two points, however, do not rule out the possibility of intelligent machines becoming conscious and self-aware. They merely make the point that it will be extremely difficult to prove consciousness and self-awareness.

Ray Kurzweil predicts that by 2029 reverse engineering of the human brain will be completed, and nonbiological intelligence will combine the subtlety and pattern-recognition strength of human intelligence with the speed, memory, and knowledge sharing of machine intelligence (The Age of Spiritual Machines, 1999). I interpret this to mean that all aspects of the human brain will be replicated in an intelligent machine, including artificial consciousness. At this point intelligent machines either will become self-aware or emulate self-awareness to the point that they are indistinguishable from their human counterparts.

Self-aware intelligent machines equivalent to human minds would present humankind with two serious ethical dilemmas:

  1. Should self-aware machines be considered a new life-form?
  2. Should self-aware machines have “machine rights” similar to human rights?

Since a self-aware intelligent machine equivalent to a human mind is still theoretical, the ethics addressing the above two questions have not been discussed or developed to any great extent. Kurzweil, however, predicts that self-aware intelligent machines on par with or exceeding the human mind eventually will obtain legal rights by the end of the twenty-first century. Perhaps he is correct, but I think we need to be extremely careful regarding what legal rights self-aware intelligent machines are granted. If they are given rights on par with humans, we may have a situation in which the machines become the dominant species on this planet and pose a potential threat to humankind. More about this in upcoming posts.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


“The Artificial Intelligence Revolution” Interview Featured On Blog Talk Radio

My interview on Johnny Tan’s program (From My Mama’s Kitchen®) is featured as one of “Today’s Best” on Blog Talk Radio’s home page. This is a great honor. Below is the player from our interview; while the interview plays, it displays a slide show of my picture and the book cover.

[Embedded BlogTalkRadio player: FMMK Talk Radio interview]

Louis Del Monte FMMK Talk Radio Interview on The Artificial Intelligence Revolution

You can listen to and/or download my interview with Johnny Tan of FMMK Talk Radio discussing my new book, The Artificial Intelligence Revolution. We explore the potential benefits and threats strong artificially intelligent machines pose to humankind.

Click here to listen or download the interview “The Artificial Intelligence Revolution”


Will Strong Artificially Intelligent Machines Become Self-Conscious? Part 1/2

A generally accepted definition is that a person is conscious if that person is aware of his or her surroundings. If you are self-aware, you are self-conscious: you are aware of yourself as an individual, of your own being, actions, and thoughts. To understand this concept, let us start by exploring how the human brain processes consciousness. To the best of our current understanding, no one part of the brain is responsible for consciousness. In fact, neuroscience (the scientific study of the nervous system) hypothesizes that consciousness results from the interoperation of various parts of the brain called the “neural correlates of consciousness” (NCC). The fact that this remains a hypothesis underscores that we do not yet completely understand how the human brain produces consciousness or becomes self-aware.

Is it possible for a machine to be self-conscious? Since we do not completely understand how the human brain processes consciousness to become self-aware, it is obviously difficult to argue definitively that a machine can become self-conscious or obtain what is termed “artificial consciousness” (AC). This is why AI experts differ on this subject. Some AI experts (proponents) argue it is possible to build a machine with AC that emulates the interoperation of the NCC (i.e., that works like the human brain). Opponents argue that it is not possible because we do not fully understand the NCC. To my mind, both are correct. It is not possible today to build a machine with a level of AC that emulates the self-consciousness of the human brain. However, I believe that in the future we will understand the human brain’s NCC interoperation and build a machine that emulates it. Nevertheless, this topic remains hotly debated.

Opponents argue that many physical differences exist between natural, organic systems and artificially constructed (e.g., computer) systems that preclude AC. The most vocal critic who holds this view is American philosopher Ned Block (1942– ), who argues that a system with the same functional states as a human is not necessarily conscious.

The most vocal proponent who argues that AC is plausible is Australian philosopher David Chalmers (1966– ). In his unpublished 1993 manuscript “A Computational Foundation for the Study of Cognition,” Chalmers argues that it is possible for computers to perform the right kinds of computations that would result in a conscious mind. He reasons that computers can perform computations that capture the abstract causal organization of other systems; that mental properties are nothing more than abstract causal organization; and that, therefore, computers running the right kind of computations will become conscious.

This is a good place for us to ask an important question: “How can we determine whether an intelligent machine has become conscious (self-aware)?” We will address this question in the next post (Part 2).

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


How Do Computers Learn (Self-Learning Machines)?

How is it possible to wire together microprocessors, hard drives, memory chips, and numerous other electronic hardware components and create a machine that will teach itself to learn?

Let us start by defining machine learning. The most widely accepted definition comes from Tom M. Mitchell, an American computer scientist and E. Fredkin University Professor at Carnegie Mellon University. Here is his formal definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” In simple terms, machine learning requires a machine to learn much the way humans do, namely from experience, and to continue improving its performance as it gains more experience.
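To make Mitchell’s definition concrete, here is a minimal sketch in Python (scikit-learn is my choice of library for illustration; the book does not prescribe one). The task T is classifying handwritten digits, the performance measure P is accuracy on held-out images, and the experience E is the number of training examples; P should improve as E grows.

```python
# A minimal sketch of Mitchell's E/T/P definition (scikit-learn is an
# assumed library choice, used here purely for illustration).
# T = classifying handwritten digits, P = held-out accuracy, E = training examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 800):  # increasing experience E
    model = LogisticRegression(max_iter=2000).fit(X_train[:n], y_train[:n])
    print(f"E = {n:4d} examples -> P = {model.score(X_test, y_test):.2f}")
```

The exact numbers will vary, but the accuracy printed on the last line should be noticeably higher than on the first, which is precisely the “improves with experience E” clause of the definition.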

Machine learning is a branch of AI; it utilizes algorithms that improve automatically through experience, and it has been a focus of AI research since the field’s inception. There are numerous computer software programs, known as machine-learning algorithms, that use various computational techniques to predict outcomes of new, unseen experiences. The study of these algorithms’ performance is a branch of theoretical computer science known as “computational learning theory.” What this means in simple terms is that an intelligent machine has in its memory data relating to a finite set of experiences. The machine-learning algorithms (i.e., software) assess this data for its similarity to a new experience and use a specific algorithm (or combination of algorithms) to guide the machine in predicting an outcome of this new experience. Since the experience data in the machine’s memory is limited, the algorithms are unable to predict outcomes with certainty. Instead they associate a probability with each possible outcome and act in accordance with the highest probability. Optical character recognition is an example of machine learning: the computer recognizes printed characters based on previous examples. As anyone who has ever used an optical character-recognition program knows, however, the programs are far from 100 percent accurate. In my experience the best case is a little more than 95 percent accurate, and only when the text is clear and uses a common font.
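The “act in accordance with the highest probability” step can also be sketched in a few lines (again assuming scikit-learn; the naive Bayes classifier is an arbitrary illustrative choice). The model assigns a probability to each of the ten candidate digits for one unseen image, and the machine commits to the most probable one, much as an OCR program commits to its best guess for each character.

```python
# Sketch: an OCR-style classifier assigns a probability to every candidate
# digit and acts on the highest one. (Illustrative only; scikit-learn and
# GaussianNB are assumed choices, not tools named in the book.)
import numpy as np
from sklearn.datasets import load_digits
from sklearn.naive_bayes import GaussianNB

X, y = load_digits(return_X_y=True)
model = GaussianNB().fit(X[:-10], y[:-10])  # experience: all but the last 10 images

probs = model.predict_proba(X[-1:])[0]      # one unseen image, ten probabilities
best = int(np.argmax(probs))
print(f"predicted digit: {best} (p = {probs[best]:.2f}); true digit: {y[-1]}")
```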

There are eleven major machine-learning algorithms and numerous variations of these algorithms. To study and understand each would be a formidable task. Fortunately, though, machine-learning algorithms fall into three major classifications. By understanding these classifications, along with representative examples of algorithms, we can gain significant insight into the science of machine learning. Therefore let us review the three major classifications and some representative examples.

  1. Supervised learning: This class of algorithms infers a function (a way of mapping or relating an input to an output) from training data, which consists of training examples. Each example consists of an input object and a desired output value. Ideally the inferred function (generalized from the training data) allows the algorithm to analyze new data (unseen instances/inputs) and map it to (i.e., predict) a high-probability output.
  2. Unsupervised learning: This class of algorithms seeks to find hidden structures (patterns in data) in a stream of input (unlabeled data). Unlike in supervised learning, the examples presented to the learner are unlabeled, which makes it impossible to assign an error or reward to a potential solution.
  3. Reinforcement learning: Reinforcement learning was inspired by behaviorist psychology. It focuses on which actions an agent (an intelligent machine) should take to maximize a reward (for example, a numerical value associated with utility). In effect the agent receives rewards for good responses and punishment for bad ones. The algorithms for reinforcement learning require the agent to take discrete time steps and calculate the reward as a function of having taken each step. The agent then takes another time step and again calculates the reward, which provides feedback to guide its next action. The agent’s goal is to collect as much reward as possible. (A toy code sketch of this loop follows the list.)
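Here is the toy sketch promised above, in plain Python. It is illustrative only: the book describes the reinforcement-learning paradigm, not this particular algorithm (tabular Q-learning) or environment. An agent on a five-cell corridor takes discrete time steps, receives a reward of 1 only upon reaching the rightmost cell, and learns from that feedback to walk right.

```python
# Toy Q-learning sketch of the reinforcement-learning loop described above.
# (Q-learning and this corridor environment are illustrative assumptions.)
import random

n_states = 5                                # cells 0..4; the reward sits at cell 4
actions = (-1, +1)                          # move left or move right
Q = [[0.0, 0.0] for _ in range(n_states)]   # learned value of each action in each state
alpha, gamma, epsilon = 0.5, 0.9, 0.1       # learning rate, discount, exploration rate

for _ in range(500):                        # training episodes
    s = 0
    while s != n_states - 1:                # discrete time steps until the goal
        # explore occasionally; otherwise take the best-known action
        a = random.randrange(2) if random.random() < epsilon else Q[s].index(max(Q[s]))
        s2 = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0            # reward feedback
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print("best action per state:", ["right" if q[1] >= q[0] else "left" for q in Q])
```

After training, the printed policy prefers “right” in every cell: the agent has collected enough reward feedback to learn that moving toward the goal maximizes its total reward.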

Machine learning is similar to human learning in many respects. The most difficult issue in machine learning is generalization, often referred to as abstraction: the ability to determine which features and structures of an object (i.e., the data) are relevant to solving the problem. Humans are excellent at abstracting the essence of an object. For example, regardless of the breed or type of dog, whether we see a small, large, multicolor, long-hair, short-hair, large-nose, or short-nose animal, we immediately recognize that the animal is a dog; most four-year-old children do so instantly. Most intelligent agents, however, have a difficult time with generalization and require sophisticated computer programs to enable them to generalize.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte