Distinguishing Posed and Spontaneous Smiles by Facial Dynamics
A smile is one of the key elements in identifying emotions and the present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow, and histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained with a large number of face images, HOG features outperform this model on the overall face smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features improved classification accuracy. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles into either `spontaneous' or `posed' categories using support vector machines (SVM). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods.
Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
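The abstract's best-performing pipeline (HOG descriptors fed to an SVM) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the frames and labels below are synthetic stand-ins for cropped UvA-NEMO face frames, and the tiny HOG variant (per-cell orientation histograms, L2-normalised) is written in plain NumPy for self-containment.

```python
import numpy as np
from sklearn.svm import SVC

def hog_features(img, n_bins=8, cell=16):
    # Minimal HOG variant: weighted orientation histogram per cell.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, np.pi), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))  # L2 normalise
    return np.concatenate(feats)

rng = np.random.default_rng(0)
frames = rng.random((20, 64, 64))   # synthetic stand-ins for face crops
labels = np.array([0, 1] * 10)      # 0 = posed, 1 = spontaneous (synthetic)

X = np.array([hog_features(f) for f in frames])   # (20, 128) descriptors
clf = SVC(kernel="linear").fit(X, labels)          # SVM, as in the abstract
preds = clf.predict(X)
```

On real data, each descriptor would be computed over a registered face crop, and training and test smilers would be kept disjoint.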
Machine Analysis of Facial Expressions
No abstract
Less is More: Facial Landmarks can Recognize a Spontaneous Smile
Smile veracity classification is a task of interpreting social interactions. Broadly, it distinguishes between spontaneous and posed smiles. Previous approaches either used hand-engineered features from facial landmarks or considered raw smile videos in an end-to-end manner to perform smile classification tasks. Feature-based methods require intervention from human experts for feature engineering and heavy pre-processing steps. On the contrary, raw smile video inputs fed into end-to-end models bring more automation to the process, at the cost of considering many redundant facial features (beyond landmark locations) that are largely irrelevant to smile veracity classification. How to establish discriminative features from landmarks in an end-to-end manner remains unclear. We present MeshSmileNet, a transformer architecture, to address the above limitations. To eliminate redundant facial features, our landmark input is extracted from Attention Mesh, a pre-trained landmark detector. To discover discriminative features, we consider the relativity and trajectory of the landmarks. For relativity, we aggregate facial landmarks that conceptually form a curve at each frame to establish local spatial features. For trajectory, we estimate the movement of landmark-composed features across time with a self-attention mechanism, which captures pairwise dependencies along the trajectory of the same landmark. This approach achieves state-of-the-art performance on the UVA-NEMO, BBC, MMI Facial Expression, and SPOS datasets.
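The trajectory modelling described above rests on scaled dot-product self-attention over frames. A minimal NumPy sketch, assuming a single landmark-composed feature trajectory of shape (T, d) (the frame count and feature width below are arbitrary, not MeshSmileNet's):

```python
import numpy as np

def self_attention(x):
    # x: (T, d) trajectory of one landmark-composed feature across T frames.
    # Each frame attends to every other frame, capturing pairwise
    # dependencies along the trajectory.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                  # (T, T) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # softmax over frames
    return w @ x                                   # (T, d) attended output

rng = np.random.default_rng(1)
traj = rng.standard_normal((30, 16))  # 30 frames, 16-dim per-frame feature
out = self_attention(traj)
```

A full transformer would add learned query/key/value projections, multiple heads, and positional encodings; this sketch shows only the pairwise-dependency core.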
Recognition of Posed and Spontaneous Dynamic Smiles in Younger and Older Adults
In two studies, we investigated age effects in the ability to recognize dynamic posed and spontaneous smiles. Study 1 found that both younger and older adult participants were above chance in their ability to distinguish between posed and spontaneous younger adult smiles. Study 2 found that younger adult participant performance declined when judging a combination of both younger and older adult target smiles, while older adult participants outperformed younger adult participants in distinguishing between posed and spontaneous smiles. A synthesis of results across the two studies showed a small-to-medium age effect (d = −0.40), suggesting an older adult advantage when discriminating between smile types. Mixed stimuli (i.e., a mixture of younger and older adult faces) may impact accurate smile discrimination. Future research should investigate both the sources (cues, etc.) and behavioral effects of age-related differences in the discrimination of positive expressions.
Acquisition in the course of conversation
Published or submitted for publication; peer reviewed.
Observers’ Pupillary Responses in Recognising Real and Posed Smiles: A Preliminary Study
Pupillary responses (PR) change differently for different types of stimuli. This study examines whether observers' PR can be used to recognise real and posed smiles from a set of smile images and videos. We showed smile image and smile video stimuli to observers and recorded their pupillary responses under four conditions: paired videos, paired images, single videos, and single images. When observers viewed the same smiler in both real and posed smile forms, we refer to the stimuli as "paired"; otherwise we use the term "single". A primary analysis of the pupil data revealed that the differences in pupillary response between real and posed smiles are most significant for paired videos, a result supported by timeline analysis, a KS-test, and an ANOVA test. Overall, our model can recognise real and posed smiles from observers' pupillary responses rather than from the smilers' own behaviour. This research is applicable in affective computing and computer-human interaction for measuring emotional authenticity.
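The KS-test and ANOVA comparisons described in the abstract can be sketched as follows. The pupil-diameter traces below are synthetic (drawn from normal distributions with hypothetical means), standing in for eye-tracker recordings from the paired-video condition; only the statistical tests mirror the paper's analysis.

```python
import numpy as np
from scipy.stats import ks_2samp, f_oneway

rng = np.random.default_rng(2)
# Synthetic pupil-diameter samples (mm); means and spreads are assumptions,
# not values from the study.
viewing_real = rng.normal(3.6, 0.2, 200)    # observing spontaneous smiles
viewing_posed = rng.normal(3.4, 0.2, 200)   # observing posed smiles

# Two-sample KS test: do the two response distributions differ?
ks_stat, ks_p = ks_2samp(viewing_real, viewing_posed)

# One-way ANOVA: do the condition means differ?
f_stat, f_p = f_oneway(viewing_real, viewing_posed)
```

With real recordings, one would compare per-observer time-locked traces rather than pooled samples, but the two tests apply in the same way.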