Category Archives: Artificial Intelligence


How Do Computers Learn (Self-Learning Machines)?

How is it possible to wire together microprocessors, hard drives, memory chips, and numerous other electronic hardware components and create a machine that will teach itself to learn?

Let us start by defining machine learning. The most widely accepted definition comes from Tom M. Mitchell, an American computer scientist and E. Fredkin University Professor at Carnegie Mellon University. Here is his formal definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” In simple terms, machine learning requires a machine to learn much as humans do, namely from experience, and to continue improving its performance as it gains more experience.

Machine learning is a branch of AI; it utilizes algorithms that improve automatically through experience. Machine learning also has been a focus of AI research since the field’s inception. There are numerous computer software programs, known as machine-learning algorithms, that use various computational techniques to predict outcomes of new, unseen experiences. Analyzing the performance of these algorithms is a branch of theoretical computer science known as “computational learning theory.” What this means in simple terms is that an intelligent machine has in its memory data that relates to a finite set of experiences. The machine-learning algorithms (i.e., software) assess this data for its similarity to a new experience and use a specific algorithm (or combination of algorithms) to guide the machine in predicting an outcome of this new experience. Since the experience data in the machine’s memory is limited, the algorithms are unable to predict outcomes with certainty. Instead they associate a probability with a specific outcome and act in accordance with the highest probability. Optical character recognition is an example of machine learning. In this case the computer recognizes printed characters based on previous examples. As anyone who has ever used an optical character-recognition program knows, however, the programs are far from 100 percent accurate. In my experience the best case is a little more than 95 percent accurate when the text is clear and uses a common font.
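The idea described above, comparing a new experience with stored experience data and acting on the most probable outcome, can be sketched in a few lines using a k-nearest-neighbors vote. This is my own minimal illustration, not an algorithm from the book; the data and function names are hypothetical:

```python
from collections import Counter

def knn_predict(experiences, labels, new_point, k=3):
    """Rank stored experiences by similarity to the new input and
    return the most probable label with its estimated probability."""
    # Sort indices of stored experiences by squared Euclidean distance
    nearest = sorted(
        range(len(experiences)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(experiences[i], new_point)),
    )
    # Let the k most similar experiences vote on the outcome
    votes = Counter(labels[i] for i in nearest[:k])
    label, count = votes.most_common(1)[0]
    return label, count / k  # outcome plus its probability

# Toy "experience" data: two measured features per example
X = [(1.0, 1.1), (0.9, 1.0), (5.0, 5.2), (5.1, 4.9), (1.2, 0.8)]
y = ["cat", "cat", "dog", "dog", "cat"]

print(knn_predict(X, y, (5.0, 5.0)))  # → ('dog', 0.666...)
```

Note that the prediction comes with a probability rather than a certainty, exactly because the stored experience is finite: two of the three nearest examples agree, so the machine acts on the higher-probability outcome.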

There are eleven major machine-learning algorithms and numerous variations of these algorithms. To study and understand each would be a formidable task. Fortunately, though, machine-learning algorithms fall into three major classifications. By understanding these classifications, along with representative examples of algorithms, we can gain significant insight into the science of machine learning. Therefore let us review the three major classifications and some representative examples.

  1. Supervised learning: This class of algorithms infers a function (a way of mapping or relating an input to an output) from training data, which consists of training examples. Each example consists of an input object and a desired output value. Ideally the inferred function (generalized from the training data) allows the algorithm to analyze new data (unseen instances/inputs) and map it to (i.e., predict) a high-probability output.
  2. Unsupervised learning: This class of algorithms seeks to find hidden structures (patterns in data) in a stream of input (unlabeled data). Unlike in supervised learning, the examples presented to the learner are unlabeled, which makes it impossible to assign an error or reward to a potential solution.
  3. Reinforcement learning: Reinforcement learning was inspired by behaviorist psychology. It focuses on which actions an agent (an intelligent machine) should take to maximize a reward (for example a numerical value associated with utility). In effect the agent receives rewards for good responses and punishment for bad ones. The algorithms for reinforcement learning require the agent to take discrete time steps and calculate the reward as a function of having taken that step. At this point the agent takes another time step and again calculates the reward, which provides feedback to guide the agent’s next action. The agent’s goal is to collect as much reward as possible.
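The reinforcement-learning loop in item 3, discrete time steps, a reward calculated after each step, and feedback guiding the next action, can be sketched as a tabular Q-learning agent on a toy problem. This is a hypothetical illustration of the classification, not an algorithm from the book:

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Agent on a line of states 0..n_states-1. Actions: 0 = move left,
    1 = move right. Reaching the rightmost state yields reward 1, else 0.
    Each time step the agent acts, observes the reward, and updates its
    estimate of that action's long-term value (the feedback loop)."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action] value table
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Mostly exploit the best-known action; occasionally explore
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max(range(2), key=lambda x: Q[s][x])
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Feedback: reward plus discounted value of the next state
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# After training, "move right" should carry the higher value in every state
print([max(range(2), key=lambda x: Q[s][x]) for s in range(4)])
```

The agent is never told that moving right is correct; it discovers this policy purely by collecting as much reward as possible, which is the defining trait of this third classification.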

Machine learning is similar to human learning in many respects. The most difficult issue in machine learning is generalization, often referred to as abstraction: the ability to determine which features and structures of an object (i.e., data) are relevant to solving the problem. Humans are excellent at abstracting the essence of an object. For example, regardless of the breed or type of dog, whether we see a small, large, multicolor, long-hair, short-hair, large-nose, or short-nose animal, we immediately recognize that the animal is a dog. Most four-year-old children immediately recognize dogs. However, most intelligent agents have a difficult time with generalization and require sophisticated computer programs to enable them to generalize.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


When Will A Computer Equal a Human Brain?

If we want to view the human brain in terms of a computer, one approach would be to take the number of calculations per second that an average human brain is able to process and compare that with today’s best computers. This is not an exact science. No one really knows how many calculations per second an average human brain is able to process, but some estimates (www.rawstory.com/rs/2012/06/18/earths-supercomputing-power-surpasses-human-brain-three-times-over) suggest it is on the order of 36.8 petaflops (a petaflop is equal to one quadrillion calculations per second). Let us compare the human brain’s processing power with the best current computers on record, listed below by year and processing-power achievement.

  • June 18, 2012: IBM’s Sequoia supercomputer system, based at the US Lawrence Livermore National Laboratory (LLNL), reached sixteen petaflops, setting the world record and claiming first place in the latest TOP500 list (a ranking of the five hundred fastest computers, as measured by the LINPACK benchmark, which gauges their ability to solve a set of linear equations).
  • November 12, 2012: The TOP500 list certified Titan, developed by Cray Inc. at the Oak Ridge National Laboratory, as the world’s fastest supercomputer per the LINPACK benchmark, at 17.59 petaflops.
  • June 10, 2013: China’s Tianhe-2 was ranked the world’s fastest supercomputer, with a record of 33.86 petaflops.

Using Moore’s law (i.e., computer processing power doubles every eighteen months), we can extrapolate that, in terms of raw processing power (petaflops), computer processing power will meet or exceed that of the human mind by about 2015 to 2017. This does not mean that by 2017 we will have a computer that is equal to the human mind. Software plays a key role alongside raw processing power in achieving AI.
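The extrapolation above can be checked with a few lines of arithmetic, starting from Sequoia’s sixteen petaflops in mid-2012 and the 36.8-petaflop brain estimate quoted earlier. This is my own sketch; the exact crossover year shifts depending on which machine and which brain estimate you start from:

```python
import math

brain_pflops = 36.8    # estimated human-brain processing power (petaflops)
sequoia_pflops = 16.0  # IBM Sequoia, June 2012
doubling_period = 1.5  # Moore's law as stated: doubling every eighteen months

# Number of doublings needed to close the gap, times the doubling period
years = doubling_period * math.log2(brain_pflops / sequoia_pflops)
print(round(2012.5 + years, 1))  # → 2014.3
```

The simple extrapolation lands just ahead of the 2015-to-2017 window, which is unsurprising given that Tianhe-2 had already nearly closed the gap by mid-2013; the window simply reads the trend conservatively.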

To understand the critical role that software plays, we must understand what we are asking AI to accomplish in emulating human intelligence. Here is a thumbnail sketch of the capabilities that researchers consider necessary.

  • Reasoning: step-by-step reasoning that humans use to solve problems or make logical decisions
  • Knowledge: extensive knowledge, similar to what an educated human would possess
  • Planning: the ability to set goals and achieve them
  • Learning: the ability to acquire knowledge through experience and use that knowledge to improve
  • Language: the ability to understand the languages humans speak and write
  • Moving: the ability to move and navigate, including knowing where it is relative to other objects and obstacles
  • Manipulation: the ability to secure and handle an object
  • Vision: the ability to analyze visual input, including facial and object recognition
  • Social intelligence: the ability to recognize, interpret, and process human psychology and emotions and respond appropriately
  • Creativity: the ability to generate outputs that can be considered creative or the ability to identify and assess creativity

This list makes clear that raw computer processing and sensing are only two elements in emulating the human mind. Obviously software is also a critical element. Each of the capabilities delineated above requires a computer program. To emulate a human mind, the computer programs would need to act both independently and interactively, depending on the specific circumstance.

In terms of raw computer processing, with the development of China’s Tianhe-2 computer, we are on the threshold of having a computer with the raw processing power of a human mind. The development of a computer that will emulate a human mind, however, may still be one, two, or even more decades away, due to software and sensing requirements.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Can an Artificially Intelligent Machine Have Human-like Emotions?

Affective computing is a relatively new science. It is the science of programming computers to recognize, interpret, process, and simulate human affects. The word “affects” refers to the experience or display of feelings or emotions.

While AI has achieved superhuman status in playing chess and quiz-show games, it does not have the emotional equivalence of a four-year-old child. For example a four-year-old may love to play with toys. The child laughs with delight as the toy performs some function, such as a toy cat meowing when it is squeezed. If you take the toy away from the child, the child may become sad and cry. Computers are unable to achieve any emotional response similar to that of a four-year-old child. Computers do not exhibit joy or sadness.

Some researchers believe this is actually a good thing. The intelligent machine processes and acts on information without coloring it with emotions. When you go to an ATM, you will not have to argue with the ATM regarding whether you can afford to make a withdrawal, and a robotic assistant will not lose its temper if you do not thank it after it performs a service.

Highly meaningful human interactions with intelligent machines, however, will require that machines simulate human affects, such as empathy. In fact some researchers argue that machines should be able to interpret the emotional state of humans and adapt their behavior accordingly, giving appropriate responses for those emotions. For example if you are in a state of panic because your spouse is apparently having a heart attack, when you ask the machine to call for medical assistance, it should understand the urgency. In addition it will be impossible for an intelligent machine to be truly equal to a human brain without the machine possessing human affects. For example how could an artificial human brain write a romance novel without understanding love, hate, and jealousy?

Progress concerning the development of computers with human affects has been slow. In fact this particular computer science originated with Rosalind Picard’s 1995 paper on affective computing (“Affective Computing,” MIT Technical Report #321, abstract, 1995). The single greatest problem involved in developing and programming computers to emulate the emotions of the human brain is that we do not fully understand how emotions are processed in the human brain. We are unable to pinpoint a specific area of the brain and scientifically argue that it is responsible for specific human emotions, which has raised questions. Are human emotions byproducts of human intelligence? Are they the result of distributed functions within the human brain? Are they learned, or are we born with them? There is no universal agreement regarding the answers to these questions. Nonetheless work on studying human affects and developing affective computing is continuing.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Is Strong Artificial Intelligence a New Life-Form? – Part 4/4 (Conclusion)

In our previous posts, we discussed the growing awareness that SAMs (i.e., strong artificially intelligent machines) may become hostile toward humans, and we noted that AI remains an unregulated branch of engineering: the computer you buy eighteen months from now will be twice as capable as the one you can buy today.

Where does this leave us regarding the following questions?

  • Is strong AI a new life-form?
  • Should we afford these machines “robot” rights?

In his 1990 book The Age of Intelligent Machines, Kurzweil predicted that in 2099 organic humans would be protected from extermination and respected by strong AI, regardless of their shortcomings and frailties, because they gave rise to the machines. To my mind the possibility of this scenario eventually playing out is questionable. Although I believe a case can be made that strong AI is a new life-form, we need to be extremely careful with regard to granting SAMs rights, especially rights similar to those possessed by humans. Anthony Berglas expresses it best in his 2008 book Artificial Intelligence Will Kill Our Grandchildren, in which he notes:

  • There is no evolutionary motivation for AI to be friendly to humans.
  • AI would have its own evolutionary pressures (i.e., competing with other AIs for computer hardware and energy).
  • Humankind would find it difficult to survive a competition with more intelligent machines.

Based on the above, carefully consider the following question. Should SAMs be granted machine rights? Perhaps in a limited sense, but we must maintain the right to shut down the machine as well as limit its intelligence. If our evolutionary path is to become cyborgs, this is a step we should take only after understanding the full implications. We need to decide when (under which circumstances), how, and how quickly we take this step. We must control the singularity, or it will control us. Time is short because the singularity is approaching with the stealth and agility of a leopard stalking a lamb, and for the singularity, the lamb is humankind.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte