This is an excerpt from my book, The Artificial Intelligence Revolution. Enjoy!
Affective computing is a relatively new science: the science of programming computers to recognize, interpret, process, and simulate human affects. The word “affects” refers to the experience or display of feelings or emotions.
While AI has achieved superhuman status in playing chess and quiz-show games, it does not have the emotional capacity of a four-year-old child. A four-year-old may love to play with toys: the child laughs with delight as a toy performs some function, such as a toy cat meowing when it is squeezed, and if you take the toy away, the child may become sad and cry. Computers are unable to achieve an emotional response similar to that of a four-year-old child. They exhibit neither joy nor sadness.

Some researchers believe this is actually a good thing. An intelligent machine processes and acts on information without coloring it with emotions. You will not have to argue with an ATM about whether you can afford to make a withdrawal, and a robotic assistant will not lose its temper if you fail to thank it after it performs a service. Significant human interaction with intelligent machines, however, will require that machines simulate human affects, such as empathy. Indeed, some researchers argue that machines should be able to interpret the emotional state of humans and adapt their behavior accordingly, giving appropriate responses to those emotions. For example, if you are in a state of panic because your spouse appears to be having a heart attack, a machine you ask to call for medical assistance should understand the urgency. Moreover, an intelligent machine cannot be truly equal to a human brain without possessing human affects. How, for example, could an artificial human brain write a romance novel without understanding love, hate, and jealousy?
Progress in developing computers with human affects has been slow. The field originated with Rosalind Picard’s 1995 paper on affective computing (“Affective Computing,” MIT Technical Report #321, abstract, 1995). The single greatest obstacle to programming computers to emulate the emotions of the human brain is that we do not fully understand how emotions are processed in the human brain. We cannot pinpoint a specific area of the brain and scientifically argue that it is responsible for specific human emotions, and this gap raises questions. Are human emotions byproducts of human intelligence? Are they the result of distributed functions within the human brain? Are they learned, or are we born with them? There is no universal agreement on the answers. Nonetheless, work on studying human affects and developing affective computing continues.
There are two major focuses in affective computing:
- Detecting and recognizing emotional information: How do intelligent machines detect and recognize emotional information? The process starts with sensors, which capture data about a subject’s physical state or behavior. The gathered information is processed using several affective computing technologies, including speech recognition, natural-language processing, and facial-expression detection. Using sophisticated algorithms, the intelligent machine then predicts the subject’s affective state, for example that the subject is angry or sad.
- Developing or simulating emotion in machines: While researchers continue to work toward intelligent machines with innate emotional capability, the technology has not yet reached the level where this goal is realizable. Current technology, however, can simulate emotions. For example, when you provide information to a computer that is routing your telephone call, it may simulate gratitude and say, “Thank you.” This has proved useful in making interactions between humans and machines more satisfying. The simulation of human emotions, especially in computer-synthesized speech, is improving continually. You may have noticed, for example, when ordering a prescription by phone, that the synthesized computer voice sounds more human with each passing year.
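The detection-and-recognition pipeline described in the first bullet can be illustrated with a deliberately simplified sketch. Real affective-computing systems use trained models over speech, facial expressions, and physiological signals; the toy classifier below merely scores text against small, hand-picked word lists to show the shape of the sensor-to-prediction flow. Every function name and word list here is a hypothetical illustration, not any real system’s API.

```python
# Toy sketch of the detect-and-recognize pipeline: capture input,
# extract features, then predict an affective state. The lexicon and
# helpers are hypothetical stand-ins for real trained models.

AFFECT_LEXICON = {
    "joy": {"delight", "laugh", "laughs", "love", "happy", "thank"},
    "sadness": {"cry", "cries", "sad", "miss"},
    "anger": {"angry", "hate", "furious", "argue"},
}

def extract_features(utterance: str) -> list[str]:
    """Stand-in for the sensing stage: lowercase and tokenize the input."""
    return [word.strip(".,!?") for word in utterance.lower().split()]

def predict_affect(utterance: str) -> str:
    """Score each affect by keyword overlap and return the best match."""
    tokens = extract_features(utterance)
    scores = {affect: sum(token in words for token in tokens)
              for affect, words in AFFECT_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(predict_affect("The child laughs with delight at the toy"))   # joy
print(predict_affect("If you take the toy away, the child may cry"))  # sadness
print(predict_affect("Please route my call"))                        # neutral
```

A production system would replace the keyword lookup with statistical models trained on labeled speech, text, and facial-expression data, but the staged structure, from raw signal to extracted features to a predicted affective state, is the same.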
It is natural to ask which technologies are employed to get intelligent machines to detect, recognize, and simulate human emotions. I will discuss them shortly, but let me alert you to one salient feature: all current technologies are based on human behavior, not on how the human mind works. The main reason for this approach is that we do not completely understand how the human mind processes emotions. This carries an important implication. Current technology can detect, recognize, and simulate emotions, and act accordingly, based on human behavior, but the machine does not feel any emotion. No matter how convincing the conversation or interaction, it is an act. The machine feels nothing.