Associations between Feeling and Judging the Emotions of Happiness and Fear: Findings from a Large-Scale Field Experiment
Background:
How do we recognize emotions from other people? One possibility is that our own emotional experiences guide us in the online recognition of emotion in others. A distinct but related possibility is that emotion experience helps us to learn how to recognize emotions in childhood.
Methodology/Principal Findings:
We explored these ideas in a large sample of people (N = 4,608) ranging from 5 to over 50 years old. Participants were asked to rate the intensity of emotional experience in their own lives, and to perform a facial emotion recognition task. Those who reported more intense experience of fear and happiness were significantly more accurate (closer to prototypical) in recognizing facial expressions of fear and happiness, respectively, and intense experience of fear was also associated with more accurate recognition of surprised and happy facial expressions. These associations held across all age groups.
Conclusions:
These results suggest that the intensity of one's own emotional experience of fear and happiness correlates with the ability to recognize these emotions in others, and they demonstrate such an association as early as age 5.
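To make the reported association concrete, the core analysis reduces to a per-participant correlation between self-reported intensity and recognition accuracy. A minimal sketch of that computation follows; all data, variable names, and the effect size are synthetic assumptions for illustration, not values from the study.

```python
# Hypothetical sketch: correlate self-reported fear intensity with accuracy
# at recognizing fearful faces. Synthetic data only; not the study's dataset.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 4608  # sample size reported in the abstract

fear_intensity = rng.normal(size=n)               # assumed self-report measure
noise = rng.normal(scale=2.0, size=n)
fear_accuracy = 0.1 * fear_intensity + noise      # assumed weak positive link

r, p = pearsonr(fear_intensity, fear_accuracy)
print(f"r = {r:.3f}, p = {p:.2e}")                # significant at this sample size
```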
Visual search for basic emotional expressions in autism: impaired processing of anger, fear and sadness, but a typical happy face advantage
Facial expression recognition was investigated in 20 males with high-functioning autism (HFA) or Asperger syndrome (AS), compared to typically developing individuals matched for chronological age (TD CA group) and verbal and non-verbal ability (TD V/NV group). This was the first study to employ a visual search, "face in the crowd" paradigm with an HFA/AS group, and it explored responses to numerous facial expressions using real-face stimuli. Results showed slower response times for processing fearful, angry and sad expressions in the HFA/AS group relative to the TD CA group, but not the TD V/NV group. Responses to happy, disgusted and surprised expressions showed no group differences. Results are discussed with reference to the amygdala theory of autism.
LOMo: Latent Ordinal Model for Facial Analysis in Videos
We study the problem of facial analysis in videos. We propose a novel weakly supervised learning method that models the video event (expression, pain, etc.) as a sequence of automatically mined, discriminative sub-events (e.g., onset and offset phases for a smile; brow lowering and cheek raising for pain). The proposed model is inspired by the recent works on Multiple Instance Learning and latent SVM/HCRF -- it extends such frameworks to approximately model the ordinal, or temporal, aspect of the videos. We obtain consistent improvements over relevant competitive baselines on four challenging and publicly available video-based facial analysis datasets for prediction of expression, clinical pain and intent in dyadic conversations. In combination with complementary features, we report state-of-the-art results on these datasets.
Comment: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
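The abstract's key idea, scoring a video by the best temporally ordered placement of K discriminative sub-event templates, can be pictured with a small dynamic program. The sketch below is an illustrative reading of that idea, not the authors' implementation; all shapes, names, and data are invented.

```python
# Toy latent ordinal score: choose ordered frames t_1 < ... < t_K that
# maximize the summed responses of K linear sub-event templates.
import numpy as np

def latent_ordinal_score(frames, templates):
    """frames: (T, d) per-frame features; templates: (K, d) sub-event weights."""
    T, K = frames.shape[0], templates.shape[0]
    resp = frames @ templates.T                    # (T, K): template k's response at frame t
    dp = np.full((T, K), -np.inf)
    dp[:, 0] = np.maximum.accumulate(resp[:, 0])   # best placement of sub-event 0 up to frame t
    for k in range(1, K):
        # sub-event k must occur strictly after sub-event k-1
        best_prev = np.concatenate(([-np.inf], dp[:-1, k - 1]))
        dp[:, k] = np.maximum.accumulate(best_prev + resp[:, k])
    return float(dp[-1, -1])

rng = np.random.default_rng(0)
frames = rng.normal(size=(50, 8))    # e.g., 50 frames of 8-dim features (synthetic)
templates = rng.normal(size=(3, 8))  # e.g., onset / apex / offset templates
print(latent_ordinal_score(frames, templates))
```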
Neural response to specific components of fearful faces in healthy and schizophrenic adults
Perception of fearful faces is associated with functional activation of cortico-limbic structures, which has been found altered in individuals with psychiatric disorders such as schizophrenia, autism and major depression. The objective of this study was to isolate the brain response to the features of standardized fearful faces by incorporating principal component analysis (PCA) into the analysis of neuroimaging data of healthy volunteers and individuals with schizophrenia. At the first stage, the visual characteristics of morphed fearful facial expressions (FEEST, Young et al., 2002) were classified with PCA, which produced seven orthogonal factors, with some of them related to emotionally salient facial features (eyes, mouth, brows) and others reflecting non-salient facial features. Subsequently, these PCA-based factors were included in the functional magnetic resonance imaging (fMRI) analysis of 63 healthy volunteers and 32 individuals with schizophrenia performing a task that involved implicit processing of FEEST stimuli. In healthy volunteers, significant neural responses were found to the visual characteristics of the eyes, mouth or brows. In individuals with schizophrenia, the PCA-based analysis enabled us to identify several significant clusters of activation that were not detected by the standard approach. These clusters were implicated in the processing of visual and emotional information and were attributable to the perception of the eyes and brows. PCA-based analysis could be useful in isolating the brain response to salient facial features in psychiatric populations.
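The stimulus-side step described above, deriving a handful of orthogonal visual factors and feeding them into the imaging analysis, can be sketched in a few lines. This is a generic illustration with synthetic data, not the study's pipeline or its FEEST descriptors.

```python
# Hypothetical sketch: PCA over visual descriptors of face stimuli yields
# orthogonal factors usable as parametric regressors in an fMRI design matrix.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(120, 300))  # assumed: 120 morphed faces x 300 descriptors

pca = PCA(n_components=7)              # seven orthogonal factors, as in the abstract
factors = pca.fit_transform(stimuli)   # (120, 7) per-stimulus factor scores

# Each column of `factors` could enter a GLM as a parametric modulator.
print(pca.explained_variance_ratio_.round(3))
```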
Speech-driven Animation with Meaningful Behaviors
Conversational agents (CAs) play an important role in human-computer interaction. Creating believable movements for CAs is challenging, since the movements have to be meaningful and natural, reflecting the coupling between gestures and speech. Studies in the past have mainly relied on rule-based or data-driven approaches. Rule-based methods focus on creating meaningful behaviors conveying the underlying message, but the gestures cannot be easily synchronized with speech. Data-driven approaches, especially speech-driven models, can capture the relationship between speech and gestures, but they create behaviors disregarding the meaning of the message. This study proposes to bridge the gap between these two approaches, overcoming their limitations. The approach builds a dynamic Bayesian network (DBN), where a discrete variable is added to condition the generated behaviors on an underlying constraint. The study implements and evaluates the approach with two constraints: discourse functions and prototypical behaviors. By constraining on the discourse functions (e.g., questions), the model learns the characteristic behaviors associated with a given discourse class, learning the rules from the data. By constraining on prototypical behaviors (e.g., head nods), the approach can be embedded in a rule-based system as a behavior realizer, creating trajectories that are synchronized in time with speech. The study proposes a DBN structure and a training approach that (1) models the cause-effect relationship between the constraint and the gestures, (2) initializes the state configuration models, increasing the range of the generated behaviors, and (3) captures the differences in the behaviors across constraints by enforcing sparse transitions between shared and exclusive states per constraint. Objective and subjective evaluations demonstrate the benefits of the proposed approach over an unconstrained model.
Comment: 13 pages, 12 figures, 5 tables
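Point (3) above, sparse transitions between states shared across constraints and states exclusive to one constraint, has a simple structural reading: a block-masked transition matrix. The toy below illustrates only that masking idea with invented sizes; it is not the paper's DBN.

```python
# Toy transition structure: shared states are reachable from anywhere;
# constraint-exclusive states only transition within their own block
# or through the shared states.
import numpy as np

n_shared, n_excl, n_constraints = 4, 3, 2   # invented sizes (e.g., 2 discourse functions)
n_states = n_shared + n_excl * n_constraints

rng = np.random.default_rng(0)
A = rng.random((n_states, n_states))

mask = np.zeros_like(A, dtype=bool)
mask[:n_shared, :] = True                   # shared states may reach any state
mask[:, :n_shared] = True                   # any state may reach shared states
for c in range(n_constraints):
    lo = n_shared + c * n_excl
    mask[lo:lo + n_excl, lo:lo + n_excl] = True  # exclusive block stays internal

A = np.where(mask, A, 0.0)
A /= A.sum(axis=1, keepdims=True)           # row-normalize into transition probabilities
print(A.round(2))
```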
Discriminatively Trained Latent Ordinal Model for Video Classification
We study the problem of video classification for facial analysis and human action recognition. We propose a novel weakly supervised learning method that models the video as a sequence of automatically mined, discriminative sub-events (e.g., onset and offset phases for "smile", running and jumping for "highjump"). The proposed model is inspired by the recent works on Multiple Instance Learning and latent SVM/HCRF -- it extends such frameworks to approximately model the ordinal aspect in the videos. We obtain consistent improvements over relevant competitive baselines on four challenging and publicly available video-based facial analysis datasets for prediction of expression, clinical pain and intent in dyadic conversations, and on three challenging human action datasets. We also validate the method with qualitative results and show that they largely support the intuitions behind the method.
Comment: Paper accepted in IEEE TPAMI. arXiv admin note: substantial text overlap with arXiv:1604.0150
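The weak supervision in this line of work is typically handled by alternating between inferring the latent sub-event placements and updating the discriminative weights. The sketch below illustrates that alternation with a greedy placement and a perceptron-style margin update standing in for the paper's inference and max-margin training; everything here is invented for illustration.

```python
# Toy latent-SVM-style alternation: (1) infer ordered sub-event placements
# with weights fixed; (2) update weights when the margin is violated.
import numpy as np

def infer_placement(frames, templates):
    """Greedy stand-in for ordered placement: left to right, pick the best
    remaining frame for each template, keeping t_1 < ... < t_K."""
    K, T = templates.shape[0], frames.shape[0]
    idx, start = [], 0
    for k in range(K):
        window = frames[start:T - (K - 1 - k)] @ templates[k]
        t = start + int(np.argmax(window))
        idx.append(t)
        start = t + 1
    return idx

def score(frames, templates):
    idx = infer_placement(frames, templates)
    return sum(float(frames[t] @ templates[k]) for k, t in enumerate(idx)), idx

def train(videos, labels, classes, K=3, d=8, lr=0.01, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    W = {c: rng.normal(scale=0.1, size=(K, d)) for c in classes}
    for _ in range(epochs):
        for frames, y in zip(videos, labels):
            s_pos, idx_pos = score(frames, W[y])
            y_neg = max((c for c in classes if c != y),
                        key=lambda c: score(frames, W[c])[0])
            s_neg, idx_neg = score(frames, W[y_neg])
            if s_pos - s_neg < 1.0:                # margin violated
                for k, t in enumerate(idx_pos):
                    W[y][k] += lr * frames[t]      # raise the true class
                for k, t in enumerate(idx_neg):
                    W[y_neg][k] -= lr * frames[t]  # push down the rival class
    return W

rng = np.random.default_rng(1)
videos = [rng.normal(size=(30, 8)) for _ in range(10)]  # synthetic clips
labels = [i % 2 for i in range(10)]
W = train(videos, labels, classes=[0, 1])
```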
EMPATH: A Neural Network that Categorizes Facial Expressions
There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of "categorical perception." In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, "surprise" expressions lie between "happiness" and "fear" expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the tasks' implementations in the brain.
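In the same spirit as the model described above, a small feed-forward classifier over face features produces exactly the kind of graded, continuous category outputs the abstract contrasts with sharp categorical boundaries. The sketch uses synthetic data and a generic network; it is not the EMPATH architecture.

```python
# Hypothetical sketch: a small neural network mapping face-image features
# to six basic emotions; predict_proba gives graded category memberships.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))    # assumed face-image features (synthetic)
y = rng.integers(0, 6, size=600)  # six basic emotions, coded 0..5

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
clf.fit(X, y)
print(clf.predict_proba(X[:1]).round(3))  # a point in a continuous 6-dim space
```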
Recognising the ageing face: the role of age in face processing
The effects of age-induced changes on face recognition were investigated as a means of exploring the role of age in the encoding of new facial memories. The ability of participants to recognise each of six previously learnt faces was tested with versions which were either identical to the learnt faces, the same age (but different in pose and expression), or younger or older in age. Participants were able to cope well with facial changes induced by ageing: their performance with older, but not younger, versions was comparable to that with faces which differed only in pose and expression. Since the large majority of different-age versions were recognised successfully, it can be concluded that the process of recognition does not require an exact match in age characteristics between the stored representation of a face and the face currently in view. As the age-related changes explored here were those that occur during the period of growth, this in turn implies that the underlying structural physical properties of the face are (in addition to pose and facial expression) invariant to a certain extent.