Duration: 5 mins 39 secs
About this item
|Description:||Facial expressions provide an important spontaneous channel for the communication of both emotional and social displays. This video shows how facial expression information can be used to make useful inferences about a user’s mental state in a natural computing environment.|
|Collection:||Rainbow Graphics and Interaction Research Group|
|Publisher:||University of Cambridge|
|Copyright:||Professor Peter Robinson|
|Keywords:||affective; computing; emotion; facial; expression|
|Abstract:||Can you read minds? The answer is most likely ‘yes’. You may not consider it mind reading but our ability to understand what people are thinking and feeling from their facial expressions and gestures is just that. People express their mental states all the time through facial expressions, vocal nuances and gestures. We have built this ability into computers to make them emotionally aware.
The ability to attribute mental states to others from their behaviour, and then to use that information to guide our own actions or predict those of others, is known as the ‘theory of mind’. Although research on this theory has been around since the 1970s, it has recently gained attention due to the growing number of people with autism spectrum conditions, who are thought to be ‘mind-blind’. That is, they have difficulty interpreting others' emotions and feelings from facial expressions and other non-verbal cues.
Our computer system is based on the latest research in the theory of mind by Professor Simon Baron-Cohen, Director of the Autism Research Centre at Cambridge. His research provides a taxonomy of facial expressions and the emotions they represent. In 2004, his group published the Mind Reading DVD, an interactive computer-based guide to reading emotions from the face and voice. The DVD contains videos of people showing 412 different mental states. We have developed computer programs that can read facial expressions using machine vision, and then infer emotions using probabilistic machine learning trained by examples from the DVD.
Machine vision means getting machines to ‘see’: giving them the ability to extract, analyse and make sense of information from images or video, in this case footage of facial expressions. Probabilistic machine learning enables a machine to learn, from training examples, an association between features of an image, such as facial expressions, and other classes of information, in this case emotions. The most likely interpretation of the facial expressions is then computed using probability theory.|
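The probabilistic inference described above can be illustrated with a minimal sketch. This is not the Cambridge system itself: the feature names and the tiny training set below are hypothetical stand-ins for features extracted by machine vision and for examples from the Mind Reading DVD, and a simple naive Bayes classifier stands in for the group's actual probabilistic machine learning. It shows the core idea: count how often each feature co-occurs with each mental state during training, then pick the state with the highest probability for a new observation.

```python
import math
from collections import Counter, defaultdict

# Toy training data: each example pairs observed facial features
# (hypothetical labels, not the system's real feature set) with a mental state.
TRAINING = [
    (("head_nod", "lip_corner_pull"), "agreeing"),
    (("head_nod", "eyebrow_raise"), "interested"),
    (("head_shake", "lip_press"), "disagreeing"),
    (("eyebrow_raise", "jaw_drop"), "surprised"),
    (("head_nod", "lip_corner_pull"), "agreeing"),
    (("lip_press", "head_shake"), "disagreeing"),
]

def train(examples):
    """Estimate P(state) and P(feature | state) by counting."""
    state_counts = Counter()
    feature_counts = defaultdict(Counter)
    for features, state in examples:
        state_counts[state] += 1
        for f in features:
            feature_counts[state][f] += 1
    return state_counts, feature_counts

def infer(features, state_counts, feature_counts, smoothing=1.0):
    """Return the most likely mental state for the observed features.
    Naive Bayes: assumes features are independent given the state."""
    total = sum(state_counts.values())
    vocab = {f for counts in feature_counts.values() for f in counts}
    best_state, best_score = None, float("-inf")
    for state, count in state_counts.items():
        # Log-prior plus smoothed log-likelihood of each observed feature.
        score = math.log(count / total)
        denom = sum(feature_counts[state].values()) + smoothing * len(vocab)
        for f in features:
            score += math.log((feature_counts[state][f] + smoothing) / denom)
        if score > best_score:
            best_state, best_score = state, score
    return best_state

state_counts, feature_counts = train(TRAINING)
print(infer(("head_nod", "lip_corner_pull"), state_counts, feature_counts))
# → agreeing
```

In the real system the observations would come from video frame by frame, so the inference is over a sequence of expressions rather than a single static set of features, but the probabilistic principle is the same.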