Facial Expression Analysis under Partial Occlusion: A Survey
Automatic Facial Expression Analysis (FEA) has made substantial progress
in the past few decades, driven by its importance for applications in
psychology, security, health, entertainment, and human-computer
interaction. The
vast majority of existing FEA studies are based on non-occluded faces
collected in a controlled laboratory environment. Automatic expression
recognition tolerant to partial occlusion remains less understood, particularly
in real-world scenarios. In recent years, research on techniques for
handling partial occlusion in FEA has increased, making a comprehensive
account of these developments and of the state of the art timely. This
survey provides such a review of recent advances in dataset creation,
algorithm development, and investigations of the effects of occlusion,
all critical for robust performance in FEA systems. It
outlines existing challenges in overcoming partial occlusion and discusses
possible opportunities in advancing the technology. To the best of our
knowledge, it is the first FEA survey dedicated to occlusion and aimed at
promoting better informed and benchmarked future work.
Comment: Authors' preprint of the article accepted for publication in ACM
Computing Surveys (accepted on 02-Nov-2017).
Automatic Recognition of Facial Displays of Unfelt Emotions
Humans modify their facial expressions in order to communicate their internal states and sometimes to mislead observers regarding their true emotional states. Evidence in experimental psychology shows that discriminative facial responses are short and subtle. This suggests that such behavior would be easier to distinguish when captured in high resolution at an increased frame rate. We propose SASE-FE, the first dataset of facial expressions that are either congruent or incongruent with underlying emotional states. We show that, overall, the problem of recognizing whether facial movements express authentic emotions can be successfully addressed by learning spatio-temporal representations of the data. For this purpose, we propose a method that aggregates features along fiducial trajectories in a deeply learnt space. The performance of the proposed model shows that, on average, it is easier to distinguish among genuine facial expressions of emotion than among unfelt ones, and that certain emotion pairs, such as contempt and disgust, are more difficult to distinguish than the rest. Furthermore, the proposed methodology improves state-of-the-art results on the CK+ and OULU-CASIA datasets for video emotion recognition, and achieves competitive results when classifying facial action units on the BP4D dataset.
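To make the core idea concrete, below is a minimal sketch, not the authors' implementation, of how per-frame deep features might be aggregated along tracked fiducial (landmark) trajectories. The array shapes, nearest-pixel sampling, and temporal average pooling are all illustrative assumptions:

```python
import numpy as np

def aggregate_along_trajectories(frame_feats, landmarks):
    """Pool per-frame CNN feature maps along tracked fiducial trajectories.

    frame_feats: (T, H, W, C) array of per-frame feature maps.
    landmarks:   (T, K, 2) array of tracked (x, y) fiducial points.
    Returns a (K, C) descriptor: each landmark's features averaged over time.
    """
    T, H, W, C = frame_feats.shape
    K = landmarks.shape[1]
    out = np.zeros((K, C))
    for t in range(T):
        for k in range(K):
            x, y = landmarks[t, k]
            # sample the feature map at the (clamped) landmark position
            i = int(np.clip(round(y), 0, H - 1))
            j = int(np.clip(round(x), 0, W - 1))
            out[k] += frame_feats[t, i, j]
    return out / T  # temporal average pooling along each trajectory

# usage on random stand-in data
feats = np.random.rand(16, 14, 14, 64)   # 16 frames of 14x14x64 feature maps
pts = np.random.rand(16, 68, 2) * 13     # 68 tracked landmarks per frame
video_descriptor = aggregate_along_trajectories(feats, pts)  # shape (68, 64)
```

The resulting per-landmark descriptor can then be fed to any downstream classifier; the paper's actual spatio-temporal aggregation may differ in sampling and pooling details.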
Deep Adaptation of Adult-Child Facial Expressions by Fusing Landmark Features
Imaging of facial affect can be used to measure the psychophysiological
attributes of children through adulthood, especially for monitoring
lifelong conditions like Autism Spectrum Disorder. Deep convolutional neural
networks have shown promising results in classifying facial expressions of
adults. However, classifier models trained with adult benchmark data are
unsuitable for learning child expressions due to discrepancies in
psychophysical development. Similarly, models trained with child data perform
poorly in adult expression classification. We propose domain adaptation to
concurrently align distributions of adult and child expressions in a shared
latent space to ensure robust classification of either domain. Furthermore,
age variations in facial images have been studied in age-invariant face
recognition, yet remain unleveraged in adult-child expression
classification. We take
inspiration from multiple fields and propose deep adaptive FACial Expressions
fusing BEtaMix SElected Landmark Features (FACE-BE-SELF) for adult-child facial
expression classification. For the first time in the literature, a mixture of
Beta distributions is used to decompose and select facial features based on
correlations with expression, domain, and identity factors. We evaluate
FACE-BE-SELF on two pairs of adult-child datasets. Our proposed FACE-BE-SELF
approach outperforms adult-child transfer learning and other baseline domain
adaptation methods in aligning latent representations of adult and child
expressions.
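As an illustration of the selection step, here is a minimal sketch assuming features are scored by their absolute Pearson correlation with expression labels, and a two-component Beta mixture (fit with a moment-matched EM) separates low- from high-correlation features. The function names, initialization, and threshold are hypothetical, not the paper's BetaMix implementation:

```python
import numpy as np
from scipy.stats import beta, pearsonr

def fit_beta_mixture(x, n_iter=200):
    """Fit a two-component Beta mixture to values in (0, 1) via EM with
    moment-matched M-steps. Returns the posterior probability that each
    point belongs to the higher-mean ("high correlation") component."""
    x = np.clip(x, 1e-4, 1 - 1e-4)
    params = [(2.0, 5.0), (5.0, 2.0)]   # init: one low-mean, one high-mean
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities under the current components
        dens = np.stack([w * beta.pdf(x, a, b)
                         for w, (a, b) in zip(weights, params)])
        resp = dens / (dens.sum(axis=0) + 1e-300)
        # M-step: moment-match each component's weighted mean and variance
        for c in range(2):
            m = np.average(x, weights=resp[c])
            v = np.average((x - m) ** 2, weights=resp[c]) + 1e-8
            s = m * (1 - m) / v - 1          # implied a + b for Beta(a, b)
            params[c] = (max(m * s, 1e-2), max((1 - m) * s, 1e-2))
        weights = resp.mean(axis=1)
    hi = int(np.argmax([a / (a + b) for a, b in params]))
    return resp[hi]

def select_features(features, labels, threshold=0.5):
    """Keep feature columns whose |Pearson correlation| with the labels
    falls in the high-correlation mixture component."""
    corrs = np.array([abs(pearsonr(features[:, j], labels)[0])
                      for j in range(features.shape[1])])
    return np.where(fit_beta_mixture(corrs) > threshold)[0]
```

In the paper, such correlation-based decomposition is applied with respect to expression, domain, and identity factors; the sketch above shows the expression case only.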