Can robots express feelings and emotions?

Do robots need emotions?

In the long run, we won't have compassionate robots

"Robots with emotions" have been haunting the media for about two years. I am often asked when such machines will come onto the market and whether it would not be wonderful or terrifying. Here is an attempt at an answer.

Moore's Law has given us ever smaller and faster computer chips, so that more and more information technology is being embedded in many products. Automation is advancing so briskly that a wave of euphoria is spreading in some quarters.

One current hype is to announce tomorrow's robotic servants today, i.e. robots that will fulfill all of our wishes in the household, but also in the hospital. In Japan in particular, the aging of the population is regularly lamented: in the absence of a younger workforce, the care of the elderly is to be taken over by robots.1 So that older Japanese do not reject this horror out of hand, it is promised that the care robots will read and express emotions. The "emotional robot" that can empathize and perhaps even mourn would then be only a matter of time.

In mid-2015, such an "emotional robot" sold out online in Japan within just one minute. The campaign was a kind of beta test of the acceptance of the new robot from Aldebaran, the company that sells the popular Nao robots worldwide. So that buyers would not doubt the robot's emotionality, it was revealed that its emotions are generated by an "endocrine, multi-layered neural network" (!).

The Nao itself has long been praised - for six years now - as the first robot that can "detect and express" emotions.2 When the Nao is "happy", it raises its arms to hug its owner, who is then happy as well. The company Emotion Robotics, for example, sells the Nao together with software for such developments in the field of social robotics. I have stopped counting how many times the Nao has been portrayed in the media as the first emotional robot.

Affective Computing

When it comes to computers interacting with people, there are two major research fronts: on the one hand, recognizing a person's state of mind through video cameras or other sensors; on the other hand, expressing feelings through avatars or synthesized voices. The term "Affective Computing", a field that can look back on a thirty-year history, is often used for this research.

Many will know the robot Kismet, which looks like something out of a science fiction film. Kismet became famous because he (or she?) could express boredom, anger or interest through eyebrows, ears and eyes. Since then, Kismet-like robots have been recreated in all variations.

But do I really want my computer to recognize my feelings? It is often said how wonderful it would be if the computer noticed that a student is bored and then suggested something else to do. Or if the computer noticed that you are sad and then came up with something suitable. Accordingly, Apple recently bought Emotient, a company that specializes in recognizing emotions. Microsoft announced its "emotion-sensing platform" back in 2015, and Google - the company that knows everything about us - has already had corresponding apps developed for Google Glass. Emotions are to be captured not only with video cameras, but also with other sensors such as microphones or via pulse measurement (thanks to a networked wristwatch).
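As a rough illustration of what such multi-sensor emotion recognition boils down to, here is a minimal sketch that fuses a facial-expression score, a voice feature and a pulse reading into a coarse affective label. The data fields, thresholds and decision rule are illustrative assumptions, not the actual algorithm of any of the companies mentioned.

```python
# Hypothetical sensor fusion for coarse emotion labels (illustrative only).
from dataclasses import dataclass

@dataclass
class SensorReading:
    face_valence: float   # -1.0 (frown) .. +1.0 (smile), e.g. from a camera
    voice_arousal: float  #  0.0 (calm)  ..  1.0 (agitated), e.g. from a microphone
    pulse_bpm: int        # heart rate, e.g. from a networked wristwatch

def classify_affect(r: SensorReading) -> str:
    """Map raw readings onto a coarse label: joy, frustration, boredom or neutral."""
    aroused = r.voice_arousal > 0.6 or r.pulse_bpm > 100
    if aroused and r.face_valence > 0.3:
        return "joy"
    if aroused and r.face_valence < -0.3:
        return "frustration"
    if not aroused and abs(r.face_valence) < 0.2:
        return "boredom"
    return "neutral"

if __name__ == "__main__":
    reading = SensorReading(face_valence=-0.5, voice_arousal=0.8, pulse_bpm=110)
    print(classify_affect(reading))  # -> frustration
```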

Imagine sitting at the computer in the near future: the Internet providers would recognize not only what information I am downloading, but also how my pulse changes as a result and whether I am feeling joy or frustration. Affectiva, co-founded by Rosalind Picard, a pioneer of affective computing, is developing exactly such tools.

The second research front is the expression of emotions. Should a computer show us that it empathizes? Some researchers proclaim that robots will experience real emotions in about 50 years - although betting on something 50 years in the future is always safe, since hardly anyone will remember the prediction.

The "Foundation for Effective Altruism" goes much further as it proposes the following measures for future research4:

Research projects that develop or test self-optimizing neuromorphic, i.e., brain-analogous AI architectures that are more likely to have a capacity for suffering, should be placed under the supervision of ethics committees (analogous to the animal experimentation committees).

The colleagues have gotten a bit carried away there.

Robots and emotions

Aaron Sloman and Monica Croucher do not ask whether robots will have emotions; rather, they explain "Why robots will have emotions".5 But if you read Sloman's theses (which he has presented in many other publications), you quickly see that his definition of emotions is more of an operational one. In his "Grammar of Emotions" there are the following emotional states:

  • We have motivations that guide our actions. If we cannot follow a motivation, a disturbance arises that interrupts our normal thinking.
  • The disturbance can be mere frustration, or it can prompt new action.
  • Some emotional states generate an automatic response that partially bypasses the conscious mind, e.g. an escape reaction in the event of danger.
  • As with computers, emotions can trigger "interrupts" and completely change behavior.
  • Emotions do not need to be recognized as such; it is enough if they do the right thing in the given situation.

It is not difficult to see that all these states and state transitions can be reproduced in a computer or robot. Sloman's suggestion is to build robots that spontaneously choose motivations (e.g. cleaning the apartment), but whose "train of thought" is interrupted (and saved) when unforeseen difficulties arise (the robot stands in front of the stairs to the basement). Such robots should also feel curiosity, so that once the apartment has been cleaned, they keep navigating through the house in order to collect more information about the surroundings.
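The following is a minimal sketch in the spirit of this grammar: motivations drive behavior, a disturbance raises an "interrupt" that saves the current goal, and curiosity takes over once nothing else is pending. The class and method names are my own illustrative assumptions, not Sloman's code.

```python
# Interrupt-driven motivations, loosely following the description above (illustrative only).
from collections import deque

class MotivatedRobot:
    def __init__(self):
        self.goals = deque(["clean the apartment"])  # spontaneously chosen motivations
        self.suspended = []                          # goals saved when an interrupt fires

    def interrupt(self, disturbance: str) -> None:
        """A disturbance preempts the current goal and saves the 'train of thought'."""
        if self.goals:
            self.suspended.append(self.goals.popleft())
        self.goals.appendleft(f"deal with: {disturbance}")

    def step(self) -> str:
        if self.goals:
            return f"working on: {self.goals.popleft()}"
        if self.suspended:
            self.goals.appendleft(self.suspended.pop())  # resume the saved goal
            return self.step()
        # Nothing left to do: curiosity keeps the robot gathering information.
        return "curiosity: exploring the house to learn more about the surroundings"

robot = MotivatedRobot()
robot.interrupt("stairs to the basement")
print(robot.step())  # working on: deal with: stairs to the basement
print(robot.step())  # working on: clean the apartment (resumed)
print(robot.step())  # curiosity: exploring the house ...
```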

All of this is feasible, but to call such robot states emotions goes beyond what we normally understand by emotions. Emotions are usually associated with physical changes, which can be extreme. The surge of adrenaline after escaping a brawl is something it takes several hours to recover from.

One can therefore ask psychologists and neurobiologists what they mean by emotions. Joseph LeDoux, one of the leading authorities on fear research, makes a sharp distinction between feelings and emotions.6 For him, feelings are consciously experienced emotions, i.e. one must first be conscious of them. When it comes to emotions, however, he does not want to draw a sharp line between humans and other animals. That is why he suggests talking about "survival circuits" instead.7

None other than Charles Darwin dealt with the expression of emotions in humans and animals about 150 years ago.8 Emotions are thus to be interpreted in evolutionary terms. They serve the survival of the species and are, so to speak, a deadly serious matter - whether we are still here tomorrow depends on them. Many emotions are therefore "implemented" as bodily reflexes in animals.

The monkey that instinctively jumps up as soon as it sees a snake on the ground only realizes afterwards that it has seen a snake. The adrenaline rush during fight or flight makes all physical reserves available immediately, at the expense of later well-being. Fear is so important because it avoids hopeless fights between predator and prey. As the New Scientist put it, cowards are more likely to survive.9

There are, then, very different theories of emotion, and the reader need not be burdened with all of them - suffice it to say that there are two main thrusts. The physiologically oriented approaches emphasize the metabolic changes involved in feeling emotions; the behavior-based approaches focus instead on the effect on behavior. In both cases, emotions have an evolutionary basis.

If one follows LeDoux, then it is clear that neither computers nor robots can feel emotions of any kind. The cognition necessary for this - the conscious awareness of what has been experienced - is missing. However, LeDoux's "survival circuits" do sound a bit like Sloman's emotion grammars.

LeDoux does not want to rule out that reptiles, for example, feel something like fear. But since we cannot look into their heads, it is more productive to ask whether similar metabolic circuits in the brain are being activated. If so, one might think that the mammal's or reptile's experience is similar or even the same. A cowardly robot that can flee gracefully might then have such a survival circuit.
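How such a "cowardly" survival circuit might look in software can be sketched as follows: a hard-wired reflex path preempts slow deliberation whenever a threat cue is detected, roughly analogous to the fast pathway described here. The threat list, function names and timing are illustrative assumptions.

```python
# A fast reflex path that bypasses slow deliberation (illustrative only).
import time
from typing import Optional

def slow_deliberation(percept: str) -> str:
    """Ordinary planning: thorough, but far too slow for acute danger."""
    time.sleep(0.1)  # stands in for expensive reasoning
    return f"plan a careful response to '{percept}'"

def survival_circuit(percept: str) -> Optional[str]:
    """Hard-wired shortcut: reacts immediately to a small set of threat cues."""
    threats = {"cat", "snake", "edge of the stairs"}
    if percept in threats:
        return f"reflex: flee from the {percept}"
    return None  # no threat detected, hand control back to deliberation

def act(percept: str) -> str:
    # The reflex path is consulted first and preempts deliberation entirely.
    return survival_circuit(percept) or slow_deliberation(percept)

print(act("snake"))        # reflex: flee from the snake
print(act("dirty floor"))  # plan a careful response to 'dirty floor'
```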

Emotions in people

When we talk about emotions in people, we always talk about feelings, because consciousness is simply present. That is why it is safe to say that computers and robots cannot feel emotions. As long as the Turing test has not been passed - and it remains to be seen whether that will ever happen - computers will remain unfeeling.

But even the mere presence of survival circuits can hardly be equated with the emotions found in animals. A mouse's fear at the sight of a cat is real and can be fatal. In mammals, such fear signals converge in the amygdala, a part of the brain.10 Because the signal is so important, the neural pathway bypasses the cortex.

There is also a famous experiment, recounted by LeDoux, with a patient who could not retain new memories, so that the doctor had to introduce himself anew every day. One day he shook her hand with a small pin hidden in it. The patient felt pain and the doctor apologized. The next day this patient, who no longer recognized the doctor, did not want to shake hands with him - without knowing why. The memory of the pain from the day before had been stored somewhere in the brain.

Strong emotions can thus take the "shortcut". The amygdala seems to play a central role here: in experiments in which rats' amygdalae were damaged, the rats lost their fear of cats and practically ran into the cats' mouths.

It is precisely these bodily components and neural pathways that give emotions their specific "color". It is therefore questionable whether a machine that knows no such bodily states can have emotions - survival circuits or not. One can of course "fake" emotions, and that is really what everyone who works on such emotional and social robots is talking about.

So are we allowed to fake emotions with computers? The computer has been compared to a theater stage before. The most convincing theater, i.e. the most convincing user interface, would then be the best. A theatrical computer that, like a good actor, can be sad or happy on command would therefore be desirable. But for what?

Do I really want software avatars from Internet service providers - or from Apple or Microsoft - that appear as friendly as used-car salesmen in order to pitch offers to customers? Wouldn't we all put on dark glasses and hats and show our best poker face so as not to be ripped off? Will my computer have Google Glass installed to watch me? Switching off the video camera does not help completely, since the microphone signal can also be analyzed during Skype calls.

Just as the fruit fly Drosophila turns up everywhere in the work of geneticists, autistic people and those in need of care play the role of the supposed customers of emotional robots in any number of projects. If autistic people do not get along with other people, perhaps the computer will do better. I have lost count of how many robotics projects for autism have been started in the last few decades.

But the use of robots that simulate emotions would be particularly questionable in nursing. People should take care of other people. Nothing would be sadder than a friendly, singing nursing robot having to explain to a patient that it is doing this job because there is no one else out there who wants to have anything to do with them. The use of "emotional" robots in care would be as unethical as hiring mourning robots for funerals so that one can stay home and watch football.

So I am sure that we will not have compassionate robots in the long run. Technology is far from being able to deliver anything of the sort. That journalists keep asking about it is due to today's "digitization" hype. But I also think that it would be unethical to use emotion recognition software to monitor customers, and that the supposed advantages of such systems do not outweigh the disadvantages. The use of acting robots that merely fool us with emotions would likewise be unethical, especially in hospitals and in nursing. Just because something is technologically feasible does not mean we should do it.

https://heise.de/-3378295