65,739 research outputs found

    The Simulation of Smiles (SIMS) model: Embodied simulation and the meaning of facial expression

    Get PDF
    Recent application of theories of embodied or grounded cognition to the recognition and interpretation of facial expressions of emotion has led to an explosion of research in psychology and the neurosciences. However, despite the accelerating number of reported findings, it remains unclear how the many component processes of emotion and their neural mechanisms actually support embodied simulation. Equally unclear is what triggers the use of embodied simulation versus perceptual or conceptual strategies in determining meaning. The present article integrates behavioral research from social psychology with recent research in the neurosciences in order to provide coherence to extant and future research on this topic. The roles of several of the brain's reward systems, and of the amygdala, somatosensory cortices, and motor centers, are examined. These are then linked to behavioral and brain research on facial mimicry and eye gaze. Articulation of the mediators and moderators of facial mimicry and gaze is particularly useful in guiding interpretation of relevant findings from the neurosciences. Finally, a model of the processing of the smile, the most complex of the facial expressions, is presented as a means to illustrate how to advance the application of theories of embodied cognition in the study of facial expressions of emotion. Peer Reviewed.

    An EEG-Based Multi-Modal Emotion Database With Both Posed And Authentic Facial Actions For Emotion Analysis

    Get PDF
    Emotion is an experience associated with a particular pattern of physiological activity along with different physiological, behavioral, and cognitive changes. One behavioral change is facial expression, which has been studied extensively over the past few decades. Facial behavior varies with a person's emotion according to differences in culture, personality, age, context, and environment. In recent years, physiological activities have been used to study emotional responses. A typical signal is the electroencephalogram (EEG), which measures brain activity. Most existing EEG-based emotion analysis has overlooked the role of facial expression changes. There exists little research on the relationship between facial behavior and brain signals, due to the lack of datasets measuring both EEG and facial action signals simultaneously. To address this problem, we propose to develop a new database by collecting facial expressions, action units, and EEGs simultaneously. We recorded the EEGs and face videos of both posed facial actions and spontaneous expressions from 29 participants of different ages, genders, and ethnic backgrounds. Differing from existing approaches, we designed a protocol to capture the EEG signals by explicitly evoking participants' individual action units. We also investigated the relation between the EEG signals and facial action units. As a baseline, the database has been evaluated through experiments on both posed and spontaneous emotion recognition with images alone, EEG alone, and EEG fused with images, respectively. The database will be released to the research community to advance the state of the art in automatic emotion recognition.
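
    As a rough illustration of the image-plus-EEG fusion baseline mentioned above, the sketch below shows a feature-level (early) fusion pipeline in Python. The feature arrays, their dimensions, and the classifier choice are hypothetical stand-ins for illustration, not the database's actual protocol.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Hypothetical per-trial features: EEG band powers and image-based
        # action-unit (AU) descriptors, plus labels for six basic emotions.
        n_trials = 200
        eeg_feats = rng.normal(size=(n_trials, 64))
        img_feats = rng.normal(size=(n_trials, 17))
        labels = rng.integers(0, 6, size=n_trials)

        # Feature-level ("early") fusion: concatenate modalities per trial.
        fused = np.concatenate([eeg_feats, img_feats], axis=1)

        X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print("fused-modality accuracy:", accuracy_score(y_te, clf.predict(X_te)))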

    The effect of facial expression and identity information on the processing of own and other race faces

    Get PDF
    The central aim of the current thesis was to examine how facial expression and racial identity information affect face processing involving different races. This was addressed by studying several types of face processing tasks, including face recognition, emotion perception/recognition, face perception, and attention to faces. In particular, the effect of facial expression on the differential processing of own and other race faces (the so-called own-race bias) was examined from two perspectives: perceptual expertise favouring the processing of own-race faces, and in-group bias influencing face processing along a self-enhancing dimension. Results from the face recognition study indicated a possible similarity between familiar/unfamiliar and own-race/other-race face processing. Studies on facial expression perception and memory showed no indication of in-group bias in face perception and memory, although a common finding throughout was that different race faces were often associated with different types of facial expressions. The most consistent finding across all studies was that the effect of the own-race bias was more evident amongst European participants. Finally, results from the face attention study showed no signs of preferential visual attention to own-race faces. The results from the current research provided further evidence to the growing body of knowledge regarding the effects of the own-race bias. Based on this knowledge, it is suggested that future studies aiming at a better understanding of the mechanisms underlying the own-race bias would help advance this interesting and ever-evolving area of research further. University of Stirling PhD studentship.

    How Children with Autism Spectrum Disorder Recognize Facial Expressions Displayed by a Rear-Projection Humanoid Robot

    Get PDF
    Background: Children with Autism Spectrum Disorder (ASD) experience reduced ability to perceive crucial nonverbal communication cues such as eye gaze, gestures, and facial expressions. Recent studies suggest that social robots can be used as effective tools to improve communication and social skills in children with ASD. One explanation put forward by several studies is that children with ASD feel more contented and motivated in systemized and predictable environments, such as interactions with robots. Objectives: A few studies have evaluated how children with ASD perceive facial expressions in humanoid robots, but none has evaluated facial expression perception on a rear-projected (aka animation-based) facially expressive humanoid robot, which provides more life-like expressions. This study evaluates how children with high-functioning autism (HFA) differ from their typically developing (TD) peers in recognizing facial expressions demonstrated by a life-like rear-projected humanoid robot, a platform that is more adjustable and flexible in displaying facial expressions for further studies. Methods: Seven HFA and seven TD children and adolescents aged 7-16 participated in this study. The study uses Ryan, a rear-projection, life-like humanoid robot. Six basic emotional facial expressions (happy, sad, angry, disgust, surprised, and fear) at four intensities (25%, 50%, 75%, and 100%, in ascending order) were shown on Ryan's face. Participants were asked to choose the expression they perceived among seven options (the six basic emotions and none). Responses were recorded by a research assistant and analyzed to obtain the accuracy of facial expression recognition on the humanoid robot face in ASD and TD children. Results: We evaluated the expression intensity at which participants reached peak accuracy. Recognition was best for happy and angry expressions, for which the peak accuracy of 100% was reached at as little as 50% expression intensity. The same peak accuracy was reached for surprised and sad expressions at 75% and 100% intensity, respectively. Fear and disgust recognition accuracy, however, never rose above 75%, even at maximum intensity. The experiment is still in progress for TD children; results will be compared to a TD sample, and implications for intervention and clinical work will be discussed. Conclusions: Overall, these results show that children with ASD recognize negative expressions such as fear and disgust with somewhat lower accuracy than other expressions. At the same time, children showed engagement and excitement toward the robot during the test. Moreover, most expressions were sufficiently recognizable at higher intensities, suggesting that Ryan, a rear-projected life-like robot, can successfully communicate with children through facial expression, though further investigation and improvement are needed. These results serve as a basis to advance the promising field of socially assistive robotics for autism therapy.
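
    To make the intensity analysis above concrete, here is a small sketch of how the lowest intensity reaching peak accuracy per expression might be computed; the correctness values are synthetic stand-ins, not the study's data.

        import numpy as np

        expressions = ["happy", "sad", "angry", "disgust", "surprised", "fear"]
        intensities = [25, 50, 75, 100]

        rng = np.random.default_rng(1)
        # accuracy[e][i]: mean correctness over participants/trials (synthetic).
        accuracy = {
            e: {i: rng.integers(0, 2, size=14).mean() for i in intensities}
            for e in expressions
        }

        for e in expressions:
            peak = max(accuracy[e].values())
            # Lowest intensity at which the peak accuracy is reached.
            first_peak = min(i for i in intensities if accuracy[e][i] == peak)
            print(f"{e:9s} peak={peak:.2f} reached at {first_peak}% intensity")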

    Automatic Measurement of Affect in Dimensional and Continuous Spaces: Why, What, and How?

    Get PDF
    This paper aims to give a brief overview of the current state-of-the-art in automatic measurement of affect signals in dimensional and continuous spaces (a continuous scale from -1 to +1) by seeking answers to the following questions: i) why has the field shifted towards dimensional and continuous interpretations of affective displays recorded in real-world settings? ii) what are the affect dimensions used, and the affect signals measured? and iii) how has the current automatic measurement technology been developed, and how can we advance the field?
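
    As an illustrative sketch of evaluation in such a continuous space, the snippet below scores a prediction against an annotation trace on the [-1, +1] scale using Lin's concordance correlation coefficient, a metric commonly used for continuous affect prediction. Both traces here are synthetic, made up purely for illustration.

        import numpy as np

        def ccc(y_true, y_pred):
            """Lin's concordance correlation coefficient."""
            mu_t, mu_p = y_true.mean(), y_pred.mean()
            cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
            return 2 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

        rng = np.random.default_rng(2)
        t = np.linspace(0, 10, 500)
        valence_gold = np.clip(0.8 * np.sin(t), -1, 1)      # annotated valence trace
        valence_pred = np.clip(valence_gold + 0.1 * rng.normal(size=t.size), -1, 1)

        print(f"valence CCC: {ccc(valence_gold, valence_pred):.3f}")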

    First impressions: A survey on vision-based apparent personality trait analysis

    Get PDF
    Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. In the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most widely considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and the potential impact such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches to apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, aspects of subjectivity in data labeling/evaluation, as well as current datasets and challenges organized to push research in the field, are reviewed. Peer Reviewed. Postprint (author's final draft).

    3D Face Tracking and Texture Fusion in the Wild

    Full text link
    We present a fully automatic approach to real-time 3D face reconstruction from monocular in-the-wild videos. Using cascaded-regressor-based face tracking and 3D Morphable Face Model shape fitting, we obtain a semi-dense 3D face shape. We further use texture information from multiple frames to build a holistic 3D face representation from the video. Our system captures facial expressions and does not require any person-specific training. We demonstrate the robustness of our approach on the challenging 300 Videos in the Wild (300-VW) dataset. Our real-time fitting framework is available as an open-source library at http://4dface.org.
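
    As a toy sketch of the linear 3D Morphable Model idea behind the shape-fitting step: a face shape is the model mean plus a weighted combination of principal-component basis shapes. Real fitting (as in the library linked above) estimates the coefficients from tracked 2D landmarks; here the mean, basis, and coefficients are random stand-ins.

        import numpy as np

        n_vertices, n_components = 1000, 20
        rng = np.random.default_rng(3)

        mean_shape = rng.normal(size=(3 * n_vertices,))          # stacked x, y, z
        basis = rng.normal(size=(3 * n_vertices, n_components))  # PCA shape basis
        coeffs = rng.normal(size=(n_components,))                # per-face fit result

        # Reconstructed semi-dense shape: mean + linear combination of bases.
        shape = (mean_shape + basis @ coeffs).reshape(-1, 3)
        print("reconstructed vertex array:", shape.shape)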

    Discovering cultural differences (and similarities) in facial expressions of emotion

    Get PDF
    Understanding the cultural commonalities and specificities of facial expressions of emotion remains a central goal of psychology. However, recent progress has been stalled by dichotomous debates (e.g., nature versus nurture) that have created silos of empirical and theoretical knowledge. Now, an emerging interdisciplinary scientific culture is broadening the focus of research to provide a more unified and refined account of facial expressions within and across cultures. Specifically, data-driven approaches allow a wider, more objective exploration of face movement patterns that provides detailed information ontologies of their cultural commonalities and specificities. Similarly, a wider exploration of the social messages perceived from face movements diversifies knowledge of their functional roles (e.g., the 'fear' face used as a threat display). Together, these new approaches promise to diversify, deepen, and refine knowledge of facial expressions, and deliver the next major milestones for a functional theory of human social communication that is transferable to social robotics.