
    Seeing fearful body expressions activates the fusiform cortex and amygdala

    Darwin's evolutionary approach to organisms' emotional states attributes a prominent role to expressions of emotion in whole-body actions. Researchers in social psychology [1, 2] and human development [3] have long emphasized the fact that emotional states are expressed through body movement, but cognitive neuroscientists have almost exclusively considered isolated facial expressions (for review, see [4]). Here we used high-field fMRI to determine the underlying neural mechanisms of the perception of body expressions of emotion. Subjects were presented with short blocks of body expressions of fear alternating with short blocks of emotionally neutral, meaningful body gestures. All images had internal facial features blurred out to avoid confounds due to a face or facial expression. We show that exposure to body expressions of fear, as opposed to neutral body postures, activates the fusiform gyrus and the amygdala. The fact that these two areas have previously been associated with the processing of faces and facial expressions [5–8] suggests synergies between facial and body-action expressions of emotion. Our findings open a new area of investigation of the role of body expressions of emotion in adaptive behavior, as well as the relation between processes of emotion recognition in the face and in the body.
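
    As a rough illustration of how a block-design contrast of this kind is typically computed (a minimal sketch, not the authors' actual pipeline; the file name, block timings, and TR below are assumptions), one could use nilearn as follows:

```python
# Hypothetical sketch of a fear-vs-neutral block contrast with nilearn.
# File names, onsets, durations, and t_r are assumed for illustration.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Assumed block timing: fear and neutral body-gesture blocks alternate.
events = pd.DataFrame({
    "onset":      [0, 20, 40, 60],        # seconds (assumed)
    "duration":   [20, 20, 20, 20],       # seconds (assumed)
    "trial_type": ["fear", "neutral", "fear", "neutral"],
})

model = FirstLevelModel(t_r=2.0, hrf_model="glover", smoothing_fwhm=6)
model = model.fit("subject01_bold.nii.gz", events=events)

# Contrast fear blocks against neutral body postures; regions such as the
# fusiform gyrus and amygdala would then be inspected in the z-map.
z_map = model.compute_contrast("fear - neutral", output_type="z_score")
z_map.to_filename("fear_vs_neutral_zmap.nii.gz")
```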

    Fusing face and body gesture for machine recognition of emotions

    Research shows that humans are more likely to consider computers to be human-like when those computers understand and display appropriate nonverbal communicative behavior. Most existing systems attempting to analyze human nonverbal behavior focus only on the face; research that aims to integrate gesture as a means of expression has only recently emerged. This paper presents an approach to automatic visual recognition of expressive face and upper-body action units (FAUs and BAUs) suitable for use in a vision-based affective multimodal framework. After describing the feature extraction techniques, classification results from three subjects are presented. Firstly, individual classifiers are trained separately with face and body features for classification into FAU and BAU categories. Secondly, the same procedure is applied for classification into labeled emotion categories. Finally, we fuse face and body information for classification into combined emotion categories. In our experiments, emotion classification using the two modalities achieved better recognition accuracy than classification using the face modality alone. © 2005 IEEE
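
    A minimal sketch of the action-unit stage, treating FAU/BAU detection as a multi-label classification problem with one binary classifier per action unit (the features, labels, and classifier choice below are illustrative assumptions, not the authors' pipeline):

```python
# Multi-label action-unit detection sketch with synthetic placeholder data.
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_frames, n_features, n_action_units = 200, 40, 8
X = rng.normal(size=(n_frames, n_features))               # stand-in face/body features
Y = rng.integers(0, 2, size=(n_frames, n_action_units))   # 1 = action unit present in frame

# One binary SVM per face/body action unit (FAU/BAU).
au_detector = MultiOutputClassifier(SVC(kernel="linear"))
au_detector.fit(X, Y)
print(au_detector.predict(X[:3]))   # predicted active action units for the first frames
```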

    Fusing face and body display for Bi-modal emotion recognition: Single frame analysis and multi-frame post integration

    This paper presents an approach to automatic visual emotion recognition from two modalities: expressive face and body gesture. Face and body movements are captured simultaneously using two separate cameras. For each face and body image sequence, single "expressive" frames are selected manually for analysis and recognition of emotions. Firstly, individual classifiers are trained from individual modalities for mono-modal emotion recognition. Secondly, we fuse facial expression and affective body gesture information at the feature level and at the decision level. In the experiments performed, emotion classification using the two modalities achieved better recognition accuracy than classification using the facial modality alone. We further extend the affect analysis to a whole image sequence by a multi-frame post-integration approach over the single-frame recognition results. In our experiments, post integration based on the fusion of face and body proved more accurate than post integration based on the facial modality only. © Springer-Verlag Berlin Heidelberg 2005
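
    The multi-frame post-integration step can be illustrated with a small sketch: per-frame class scores from a fused face-and-body classifier are combined across the sequence, here by simple averaging (the score values below are invented placeholders):

```python
# Multi-frame post-integration over single-frame classifier outputs.
import numpy as np

# One row per frame, one column per emotion class (e.g., six basic emotions).
frame_probs = np.array([
    [0.10, 0.60, 0.10, 0.05, 0.10, 0.05],
    [0.15, 0.55, 0.10, 0.05, 0.10, 0.05],
    [0.20, 0.40, 0.15, 0.05, 0.15, 0.05],
])

# Average the per-frame scores across the sequence, then take the most
# likely class as the label for the whole image sequence.
sequence_scores = frame_probs.mean(axis=0)
sequence_label = int(np.argmax(sequence_scores))
print(sequence_scores, sequence_label)
```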

    How Emotional Body Expressions Direct an Infant's First Look

    Previous research in infant cognitive development has helped psychologists better understand visual looking patterns in infants exposed to various facial expressions and emotions. There has been significantly less research, however, on gaze sequences in relation to emotional body expressions. The aim of this study was to address this gap in the literature by using eye-tracking software to analyze infants' gaze patterns over different areas of interest (AOIs) on emotional body expressions. Forty 6.5-month-old infants (mean age in days = 193.9; SD = 8.00; 18 males) were shown four emotional body expressions (happy, sad, angry, fearful) in either a blurred-face condition or a face-present condition. Each expression was viewed twice by each infant, for a total of eight 8-second trials. To examine whether infants' first fixation location differed across emotion and area of interest (AOI), a mixed analysis of variance was conducted on the number of first fixations to each AOI, with emotion (anger, fear, happy, sad) and AOI (upper body, face/head, legs, arms/hands) as within-subjects factors and condition (face present, blurred) as a between-participants factor. There was a significant main effect of AOI, F(3, 342) = 36.40, p < .001, η² = .49. However, this main effect was qualified by a significant interaction between AOI and emotion, F(9, 342) = 2.07, p = .031, η² = .05. There was no evidence of a difference in performance across conditions, so subsequent analyses were collapsed across this variable. Follow-up analyses probing the interaction between AOI and emotion indicate that the number of first looks to the legs and arms/hands AOIs varies across emotion. For example, infants' first fixation was more often directed toward the arms/hands AOI when the emotion of the body expression was sad. Additionally, infants' first fixation was more often directed toward the legs AOI when the body expression was happy. In contrast, there was insufficient evidence to suggest differences across emotion or AOI in the time it took infants to make their first fixation or in the duration of the first fixation. In summary, the location of infants' first fixation on static images of emotional body expressions varied as a function of emotion. Moreover, infants' performance was not affected by the presence/absence of facial emotional information. These findings suggest that socially relevant features within bodies are differentially attended to by at least 6.5 months of age. This kind of systematic scanning may lay the groundwork for mature knowledge of emotions and appropriate behavioral responses to other people's emotions later in life.
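
    The repeated-measures part of this analysis can be sketched with statsmodels' AnovaRM; following the abstract, condition is collapsed, leaving emotion and AOI as within-subject factors. The infant counts below are synthetic stand-ins, not the study's data:

```python
# Two-way repeated-measures ANOVA sketch (emotion x AOI) with synthetic counts.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
emotions = ["anger", "fear", "happy", "sad"]
aois = ["upper_body", "face_head", "legs", "arms_hands"]

rows = []
for infant in range(40):
    for emo in emotions:
        for aoi in aois:
            # Stand-in for the number of first fixations landing on this AOI.
            rows.append({"infant": infant, "emotion": emo, "aoi": aoi,
                         "first_fixations": rng.poisson(1.0)})
data = pd.DataFrame(rows)

# Main effects of emotion and AOI plus their interaction, mirroring the
# AOI x emotion interaction reported in the abstract.
result = AnovaRM(data, depvar="first_fixations", subject="infant",
                 within=["emotion", "aoi"]).fit()
print(result)
```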

    Typical integration of emotion cues from bodies and faces in Autism Spectrum Disorder

    Contextual cues derived from body postures bias how typical observers categorize facial emotion; the same facial expression may be perceived as anger or disgust depending on whether it is aligned with an angry or a disgusted body posture. Individuals with Autism Spectrum Disorder (ASD) are thought to have difficulties integrating information from disparate visual regions to form unitary percepts, and may be less susceptible to visual illusions induced by context. The current study investigated whether individuals with ASD exhibit diminished integration of emotion cues extracted from faces and bodies. Individuals with and without ASD completed a binary expression classification task, categorizing facial emotion as ‘Disgust’ or ‘Anger’. Facial stimuli were drawn from a morph continuum blending facial disgust and anger, and were presented in isolation or accompanied by an angry or disgusted body posture. Participants were explicitly instructed to disregard the body context. Contextual modulation was inferred from a shift in the resulting psychometric functions. Contrary to prediction, observers with ASD showed typical integration of emotion cues from the face and body. Correlation analyses suggested a relationship between the ability to categorize emotion from isolated faces and susceptibility to contextual influence within the ASD sample; individuals with imprecise facial emotion classification were influenced more by body posture cues.
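
    One way to quantify a contextual shift of this kind (a sketch under assumed data, not the authors' analysis) is to fit a logistic psychometric function to the proportion of ‘Disgust’ responses along the anger-to-disgust morph continuum for each body context and compare the 50% points (PSEs):

```python
# Psychometric-function fit and PSE-shift sketch; response proportions are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Proportion of 'Disgust' responses as a function of morph level."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

morph = np.linspace(0, 1, 7)   # 0 = unambiguous anger, 1 = unambiguous disgust
p_disgust_with_angry_body   = np.array([0.02, 0.05, 0.15, 0.40, 0.70, 0.90, 0.97])
p_disgust_with_disgust_body = np.array([0.05, 0.20, 0.45, 0.75, 0.90, 0.97, 0.99])

popt_angry, _   = curve_fit(logistic, morph, p_disgust_with_angry_body,   p0=[0.5, 0.1])
popt_disgust, _ = curve_fit(logistic, morph, p_disgust_with_disgust_body, p0=[0.5, 0.1])

# A disgusted body context pulls the PSE toward the anger end of the continuum
# (more ambiguous faces read as disgust); this difference indexes the
# size of the contextual modulation.
print("PSE shift:", popt_angry[0] - popt_disgust[0])
```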

    Affect recognition from face and body: Early fusion vs. late fusion

    This paper presents an approach to automatic visual emotion recognition from two modalities: face and body. Firstly, individual classifiers are trained from the individual modalities. Secondly, we fuse facial expression and affective body gesture information first at the feature level, in which the data from both modalities are combined before classification, and then at the decision level, in which we integrate the outputs of the monomodal systems using suitable criteria. We then evaluate these two fusion approaches against monomodal emotion recognition based on the facial expression modality only. In the experiments performed, emotion classification using the two modalities achieved better recognition accuracy than classification using the facial modality alone. Moreover, fusion at the feature level yielded better recognition than fusion at the decision level. © 2005 IEEE
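
    The contrast between the two strategies can be sketched with scikit-learn: early fusion concatenates face and body features before training a single classifier, while late fusion trains one classifier per modality and combines their predicted probabilities. The synthetic features, labels, and classifier choice below are placeholders, not the paper's descriptors or method:

```python
# Early (feature-level) vs. late (decision-level) fusion on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 4, size=n)                       # stand-in emotion labels
face = rng.normal(size=(n, 25)) + y[:, None] * 0.3   # weakly informative face features
body = rng.normal(size=(n, 15)) + y[:, None] * 0.2   # weakly informative body features

Xf_tr, Xf_te, Xb_tr, Xb_te, y_tr, y_te = train_test_split(
    face, body, y, test_size=0.3, random_state=0)

# Early fusion: concatenate modalities before classification.
early = LogisticRegression(max_iter=1000).fit(np.hstack([Xf_tr, Xb_tr]), y_tr)
acc_early = accuracy_score(y_te, early.predict(np.hstack([Xf_te, Xb_te])))

# Late fusion: train per-modality classifiers and combine their class
# probabilities, here by simple averaging.
clf_face = LogisticRegression(max_iter=1000).fit(Xf_tr, y_tr)
clf_body = LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr)
proba = (clf_face.predict_proba(Xf_te) + clf_body.predict_proba(Xb_te)) / 2
acc_late = accuracy_score(y_te, proba.argmax(axis=1))

print("early fusion:", acc_early, " late fusion:", acc_late)
```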

    Superior Facial Expression, But Not Identity Recognition, in Mirror-Touch Synesthesia

    Simulation models of expression recognition contend that, to understand another's facial expressions, individuals map the perceived expression onto the same sensorimotor representations that are active during the experience of the perceived emotion. To investigate this view, the present study examines facial expression and identity recognition abilities in a rare group of participants who show facilitated sensorimotor simulation (mirror-touch synesthetes). Mirror-touch synesthetes experience touch on their own body when observing touch to another person. These experiences have been linked to heightened sensorimotor simulation in the shared-touch network (brain regions active during both the passive observation and the experience of touch). Mirror-touch synesthetes outperformed nonsynesthetic participants on measures of facial expression recognition, but not on control measures of face memory or facial identity perception. These findings imply a role for sensorimotor simulation processes in the recognition of facial affect, but not facial identity.

    Innovative Approach to Detect Mental Disorder Using Multimodal Technique

    Humans can display their emotions through facial expressions. To achieve more effective human-computer interaction, recognizing emotion from the human face could prove to be an invaluable tool. In this work, an automatic facial emotion recognition system operating on video is described. The main aim is to detect the human face in the video and classify the emotion on the basis of facial features. There have been extensive studies of human facial expressions across cultures, including preliterate ones, and these have found much commonality in the expression and recognition of emotions on the face; the expressions studied represent happiness, sadness, anger, fear, surprise, and disgust. Emotion detection from speech also has many important applications: in human-computer systems, emotion recognition allows services to be adapted to users' emotional states. However, the body of work on detecting emotion in speech is quite limited; researchers are still debating which features affect emotion identification in speech, and there is no consensus on the best algorithm for classifying emotion or on which emotions to group together.
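
    A rough sketch of the face-detection front end such a system might use, based on OpenCV's stock Haar cascade (the video path and the downstream emotion classifier are assumptions, not the system described in the paper):

```python
# Face detection from video frames as a front end for emotion classification.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("input_video.mp4")   # assumed input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face_roi = gray[y:y + h, x:x + w]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # A trained classifier would map features of face_roi to one of the six
        # basic emotions (happiness, sadness, anger, fear, surprise, disgust).
cap.release()
```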