Affective computing is a relatively new field: the science of programming computers to recognize, interpret, process, and simulate human affects. The word “affects” refers to the experience or display of feelings or emotions.

While AI has achieved superhuman status in playing chess and quiz-show games, it does not have the emotional capacity of a four-year-old child. A four-year-old, for example, may love to play with toys. The child laughs with delight as a toy performs some function, such as a toy cat meowing when it is squeezed. If you take the toy away, the child may become sad and cry. Computers are unable to achieve any emotional response similar to that of a four-year-old child. Computers do not exhibit joy or sadness. Some researchers believe this is actually a good thing: the intelligent machine processes and acts on information without coloring it with emotions. When you go to an ATM, you will not have to argue with it about whether you can afford to make a withdrawal, and a robotic assistant will not lose its temper if you do not thank it after it performs a service.

Highly meaningful human interaction with intelligent machines, however, will require that machines simulate human affects, such as empathy. In fact, some researchers argue that machines should be able to interpret the emotional state of humans and adapt their behavior accordingly, giving appropriate responses to those emotions. For example, if you are in a state of panic because your spouse is apparently having a heart attack, the machine you ask to call for medical assistance should understand the urgency. In addition, an intelligent machine cannot be truly equal to a human brain without possessing human affects. How, for example, could an artificial human brain write a romance novel without understanding love, hate, and jealousy?

Progress in developing computers with human affects has been slow. In fact, this branch of computer science originated with Rosalind Picard’s 1995 paper on affective computing (“Affective Computing,” MIT Technical Report #321, abstract, 1995). The single greatest obstacle to developing and programming computers that emulate the emotions of the human brain is that we do not fully understand how emotions are processed in the human brain. We are unable to point to a specific area of the brain and scientifically argue that it is responsible for specific human emotions, which raises several questions. Are human emotions byproducts of human intelligence? Are they the result of distributed functions within the human brain? Are they learned, or are we born with them? There is no universal agreement on the answers to these questions. Nonetheless, work on studying human affects and developing affective computing continues.

There are two major areas of focus in affective computing:

1. Detecting and recognizing emotional information: How do intelligent machines detect and recognize emotional information? It starts with sensors, which capture data about a subject’s physical state or behavior. The information gathered is then processed using several affective computing technologies, including speech recognition, natural-language processing, and facial-expression detection. Using sophisticated algorithms, the intelligent machine predicts the subject’s affective state, for example, that the subject is angry or sad.

2. Developing or simulating emotion in machines: While researchers continue to work toward intelligent machines with innate emotional capability, the technology has not yet reached the level where this goal is achievable. Current technology, however, is capable of simulating emotions. For example, when you provide information to a computer that is routing your telephone call, it may simulate gratitude and say, “Thank you.” This has proved useful in facilitating satisfying interactions between humans and machines. The simulation of human emotions, especially in computer-synthesized speech, is improving continually. For example, you may have noticed when ordering a prescription by phone that the synthesized computer voice sounds more human with each passing year. (A simplified sketch of both areas of focus follows this list.)
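To make these two areas of focus concrete, here is a minimal, hypothetical sketch in Python. It is not how real affective computing systems work; production systems rely on trained models over speech, language, facial, and other sensor data, whereas this toy example merely matches keywords. The cue lists, function names, and canned replies are invented for illustration only: the program guesses an affective state from the words in a request (focus 1) and returns a simulated, not felt, empathetic reply (focus 2).

```python
import string

# Hypothetical cue lists standing in for a trained affect classifier.
AFFECT_CUES = {
    "panic": {"emergency", "heart", "attack", "help", "hurry"},
    "anger": {"furious", "unacceptable", "angry", "outraged"},
    "sadness": {"sad", "lonely", "miserable", "crying"},
}

# Canned replies that simulate an appropriate affect; nothing is felt.
SIMULATED_RESPONSES = {
    "panic": "Calling for medical assistance now. Help is on the way.",
    "anger": "I understand this is frustrating. Let me fix it right away.",
    "sadness": "I'm sorry you're going through this. How can I help?",
    "neutral": "Thank you. How can I help you today?",
}


def detect_affect(utterance: str) -> str:
    """Focus 1: guess the speaker's affective state from observed behavior
    (here, only the words used). Real systems would also use tone of voice,
    facial expression, and other sensor data."""
    cleaned = utterance.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())
    best_state, best_hits = "neutral", 0
    for state, cues in AFFECT_CUES.items():
        hits = len(words & cues)
        if hits > best_hits:
            best_state, best_hits = state, hits
    return best_state


def simulated_response(state: str) -> str:
    """Focus 2: produce a response that simulates, but does not feel, emotion."""
    return SIMULATED_RESPONSES.get(state, SIMULATED_RESPONSES["neutral"])


if __name__ == "__main__":
    request = "My spouse is having a heart attack, please hurry and call for help!"
    state = detect_affect(request)      # predicted state, e.g., "panic"
    print(state)
    print(simulated_response(state))    # urgency-appropriate, simulated reply
```

However convincing the reply, the machine in this sketch, like the systems described next, is only acting on detected behavior; it experiences nothing.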

All current technologies for detecting, recognizing, and simulating human emotions are based on human behavior, not on how the human mind works. The main reason for this approach is that we do not completely understand how the human mind processes emotions. This carries an important implication: current technology can detect, recognize, and simulate human emotions and act on them, but the machine does not feel any emotion. No matter how convincing the conversation or interaction, it is an act; the machine feels nothing. However, intelligent machines using simulated human affects have found numerous applications in e-learning, psychological health services, robotics, and digital pets.

It is only natural to ask, “Will an intelligent machine ever feel human affects?” This raises a broader question: “Will an intelligent machine ever be able to completely replicate a human mind?” We will address that question in part 2.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte