One of the most difficult things in programming robots is getting them to understand what emotions are and how they should react to them. While getting robots to experience emotions for themselves is probably still a very long way off, getting them to react in reasonably appropriate ways may not be. This is according to a group of researchers working at Malaysia's Manipal International University. As they describe in their paper published in Inderscience, they have discovered a way to clue a computer system into a person's emotional state by reading the person's lips. Science Spot highlights the group's work, noting that the camera-and-computer system could open up a whole new way for robots and human beings to communicate.
The idea, the team explains, is to categorize the different ways a person's lips move when feeling an emotion, particularly the less obvious cues such as a hangdog pout. Human beings, they say, have an uncanny ability to express themselves through the shaping of their lips, conveying literally hundreds of emotions. Fortunately, they add, most can be boiled down to six commonly accepted human emotions: happy, sad, fearful, angry, disgusted and surprised, plus a neutral state.
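How such a category scheme might be wired up is easy to sketch. The snippet below is a minimal illustration rather than the team's actual method: it defines the seven labels and assigns a lip-shape feature vector to whichever stored prototype it sits closest to. The feature encoding and the nearest-prototype matching are assumptions made for the example.

```python
import numpy as np

# The seven categories described above: six basic emotions plus neutral.
LABELS = ["happy", "sad", "fearful", "angry", "disgusted", "surprised", "neutral"]

def classify_lip_shape(features, centroids):
    """Return the label whose prototype is closest to the lip-shape features.

    `features` is a vector summarizing the lips in one frame, e.g. mouth
    width, openness and corner angles (an illustrative assumption; the
    paper's actual features are not described in this article).
    """
    return min(centroids, key=lambda label: np.linalg.norm(features - centroids[label]))

# Example: prototypes learned elsewhere, one per label (random stand-ins here).
rng = np.random.default_rng(0)
centroids = {label: rng.normal(size=4) for label in LABELS}
print(classify_lip_shape(rng.normal(size=4), centroids))
```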
To get a computer or robot to interpret what it sees as a class of emotion, Science Spot says, the team had to nail down which lip movements represented which emotion and then build a pattern around each one that a computer could recognize. The researchers did just that for each of the six emotions and the neutral state, then added some fudging for expressions that appeared to lie between two categories. That alone wasn't enough, of course, so the team also built in a learning mode that allowed a person looking at the camera to tell the computer which emotion they were experiencing, giving the computer firsthand knowledge. To make the training thorough, the team sat people of many nationalities in front of the camera while they watched a movie designed to elicit the whole range of emotions, with markers identifying which emotion could be expected at which point. The computer then displayed which emotion it thought the person was experiencing; the person responded with a yes or no and, if it was a no, supplied the correct emotion.
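That yes/no feedback protocol maps naturally onto a simple online update. Building on the sketch above, the loop below predicts an emotion for each frame, asks the viewer to confirm or correct it, and nudges the confirmed category's prototype toward the new example. The function names and the running-mean update rule are assumptions for illustration, not the paper's code.

```python
def training_session(frames, centroids, ask_user):
    """Run the confirm-or-correct loop described above (a sketch only).

    `ask_user(guess)` shows the predicted emotion and returns either the
    same label (a "yes") or the corrected one (a "no" plus correction).
    """
    counts = {label: 1 for label in centroids}
    for features in frames:
        guess = classify_lip_shape(features, centroids)  # from the sketch above
        truth = ask_user(guess)
        # Nudge the confirmed emotion's prototype toward this example via a
        # running mean (the paper's actual update rule is not stated here).
        counts[truth] += 1
        centroids[truth] += (features - centroids[truth]) / counts[truth]
    return centroids
```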
After training the computer to read, identify and respond to both still and moving pictures and to real-life human beings, the team reports that it was able to reduce the program's error rate to just two percent, which, they note, is actually much better than people are at reading one another's emotions.
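For concreteness, a figure like that corresponds to a plain misclassification rate over labeled test frames, computed something like this (a trivial sketch; how the team actually scored still images, video and live subjects is not detailed in the article):

```python
def error_rate(predictions, ground_truth):
    """Fraction of frames labeled incorrectly; the team reports about 2%."""
    wrong = sum(p != t for p, t in zip(predictions, ground_truth))
    return wrong / len(ground_truth)

print(error_rate(["happy", "sad", "angry"], ["happy", "sad", "surprised"]))  # ~0.33
```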