The role of spatial frequency information for ERP components sensitive to faces and emotional facial expression
To investigate the impact of spatial frequency on emotional facial expression analysis, ERPs were recorded in response to low spatial frequency (LSF), high spatial frequency (HSF), and unfiltered broad spatial frequency (BSF) faces with fearful or neutral expressions, houses, and chairs. In line with previous findings, BSF fearful facial expressions elicited a greater frontal positivity than BSF neutral facial expressions, starting at about 150 ms after stimulus onset. In contrast, this emotional expression effect was absent for HSF and LSF faces. Given that some brain regions involved in emotion processing, such as the amygdala and connected structures, are selectively tuned to LSF visual inputs, these data suggest that ERP effects of emotional facial expression do not directly reflect activity in these regions. It is argued that higher-order neocortical brain systems are involved in the generation of emotion-specific waveform modulations. The face-sensitive N170 component was affected neither by emotional facial expression nor by spatial frequency information.
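As a concrete illustration of how such LSF/HSF/BSF stimulus conditions are typically produced, here is a minimal Python sketch using Gaussian filtering. The sigma value and the mean-luminance adjustment are illustrative assumptions; the abstract does not give the study's actual filter cutoffs (usually specified in cycles per image).

```python
import numpy as np
from scipy import ndimage

def make_sf_versions(img, sigma=4.0):
    """Create three spatial-frequency conditions from a grayscale face image:
    BSF (unfiltered), LSF (low-pass), HSF (high-pass).
    sigma is an illustrative assumption, not the study's actual cutoff."""
    img = np.asarray(img, dtype=float)
    lsf = ndimage.gaussian_filter(img, sigma)   # keep low spatial frequencies
    hsf = img - lsf + img.mean()                # keep high frequencies, restore mean luminance
    return {"BSF": img, "LSF": lsf, "HSF": hsf}
```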
Reproducibility of the dynamics of facial expressions in unilateral facial palsy
The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3–4 seconds and was recorded in real time, so a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face to characterize its morphology comprehensively. The vertices were tracked throughout the sequence of 180 images, and five key 3D facial frames from each sequence were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test, P < 0.05). Facial expressions of lip purse, cheek puff, and raising of the eyebrows were reproducible; facial expressions of maximum smile and forceful eye closure were not. The limited coordination of the various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions.
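The alignment-and-comparison step described above maps naturally onto a short computation. Below is a minimal Python sketch, assuming NumPy/SciPy and hypothetical (N x 3) vertex arrays for corresponding key frames; the pairing scheme fed to the paired t-test is an illustrative assumption, not the paper's exact protocol.

```python
import numpy as np
from scipy.stats import ttest_rel

def partial_procrustes_rmsd(a, b):
    """Align vertex set b (N x 3) to a by translation and rotation only
    (partial Procrustes, no scaling) and return the root mean square
    distance between the aligned sets."""
    a_c = a - a.mean(axis=0)                  # remove translation
    b_c = b - b.mean(axis=0)
    u, _, vt = np.linalg.svd(b_c.T @ a_c)     # Kabsch: optimal rotation via SVD
    d = np.sign(np.linalg.det(u @ vt))        # guard against reflection
    r = u @ np.diag([1.0, 1.0, d]) @ vt
    diff = a_c - b_c @ r
    return np.sqrt((diff ** 2).sum(axis=1).mean())

# Hypothetical usage: key_frames_1 and key_frames_2 hold the five key 3D
# frames from the first and second capture of one expression.
rmsds = [partial_procrustes_rmsd(f1, f2)
         for f1, f2 in zip(key_frames_1, key_frames_2)]
# Paired comparison against RMSDs from a reference condition (assumed here),
# mirroring the paper's paired Student t-test at P < 0.05:
t_stat, p_value = ttest_rel(rmsds, reference_rmsds)
```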
Improving Facial Analysis and Performance Driven Animation through Disentangling Identity and Expression
We present techniques for improving performance-driven facial animation, emotion recognition, and facial key-point or landmark prediction using learned identity-invariant representations. Established approaches to these problems can work well if sufficient examples and labels for a particular identity are available and factors of variation are highly controlled. However, labeled examples of facial expressions, emotions, and key-points for new individuals are difficult and costly to obtain. In this paper we improve the ability of these techniques to generalize to new and unseen individuals by explicitly modeling previously seen variations related to identity and expression. We use a weakly-supervised approach in which identity labels are used to learn the factors of variation linked to identity separately from those related to expression. We show how probabilistic modeling of these sources of variation allows one to learn identity-invariant representations for expressions, which can then be used to identity-normalize various procedures for facial expression analysis and animation control. We also show how to extend the widely used active appearance models and constrained local models by replacing the underlying point distribution models, typically constructed using principal component analysis, with identity-expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation, and facial key-point tracking.
Comment: to appear in Image and Vision Computing Journal (IMAVIS).
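To make the idea of an identity-expression factorized point distribution model concrete, here is a minimal Python sketch. It uses a simple two-stage PCA (identity basis from per-identity mean shapes, expression basis from the residuals) as a stand-in for the paper's probabilistic model; all names and dimensions are illustrative assumptions.

```python
import numpy as np

def factorize_identity_expression(shapes, ids, k_id=10, k_expr=10):
    """Toy identity-expression factorization of flattened landmark shapes
    (N samples x D coordinates) given weak identity labels.
    Identity basis: PCA over per-identity mean shapes.
    Expression basis: PCA over residuals after removing each sample's
    identity mean. A simplified stand-in for the paper's model."""
    shapes, ids = np.asarray(shapes, float), np.asarray(ids)
    mean = shapes.mean(axis=0)
    uniq = np.unique(ids)
    id_means = {i: shapes[ids == i].mean(axis=0) for i in uniq}
    id_mat = np.stack([id_means[i] - mean for i in uniq])
    residuals = np.stack([s - id_means[i] for s, i in zip(shapes, ids)])

    def pca_basis(x, k):
        _, _, vt = np.linalg.svd(x - x.mean(axis=0), full_matrices=False)
        return vt[:k]                # top-k principal directions (k x D)

    return mean, pca_basis(id_mat, k_id), pca_basis(residuals, k_expr)

# Identity-normalizing a new shape s: project out the identity subspace,
# then read off expression coefficients.
# c_expr = b_expr @ ((s - mean) - b_id.T @ (b_id @ (s - mean)))
```

In an active appearance model or constrained local model, the pair of bases would replace the single PCA shape basis, which is what allows the expression coefficients to be read off in an identity-normalized way.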
VGAN-Based Image Representation Learning for Privacy-Preserving Facial Expression Recognition
Reliable facial expression recognition plays a critical role in human-machine interactions. However, most of the facial expression analysis methodologies proposed to date pay little or no attention to the protection of a user's privacy. In this paper, we propose a Privacy-Preserving Representation-Learning Variational Generative Adversarial Network (PPRL-VGAN) to learn an image representation that is explicitly disentangled from the identity information. At the same time, this representation is discriminative from the standpoint of facial expression recognition and generative as it allows expression-equivalent face image synthesis. We evaluate the proposed model on two public datasets under various threat scenarios. Quantitative and qualitative results demonstrate that our approach strikes a balance between the preservation of privacy and data utility. We further demonstrate that our model can be effectively applied to other tasks such as expression morphing and image completion.
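The core training signal described here, a representation that keeps expression information while shedding identity, can be sketched with a small adversarial setup. The following Python/PyTorch code is an illustrative toy, not PPRL-VGAN itself: the paper uses a variational GAN, whereas this sketch uses a plain encoder/decoder with an identity adversary, and all architectures, dimensions, and loss weights are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for the PPRL-VGAN idea: learn a face code z that predicts
# expression but gives an identity adversary no usable signal.
D_IMG, D_Z, N_EXPR, N_ID = 64 * 64, 128, 7, 50  # assumed dimensions

enc = nn.Sequential(nn.Linear(D_IMG, 512), nn.ReLU(), nn.Linear(512, D_Z))
dec = nn.Sequential(nn.Linear(D_Z, 512), nn.ReLU(), nn.Linear(512, D_IMG))
expr_head = nn.Linear(D_Z, N_EXPR)   # expression should be decodable from z
id_adv = nn.Linear(D_Z, N_ID)        # identity should NOT be decodable from z

opt_main = torch.optim.Adam([*enc.parameters(), *dec.parameters(),
                             *expr_head.parameters()], lr=1e-4)
opt_adv = torch.optim.Adam(id_adv.parameters(), lr=1e-4)

def train_step(x, y_expr, y_id):
    z = enc(x)
    # 1) adversary learns to recover identity from the current (detached) code
    adv_loss = F.cross_entropy(id_adv(z.detach()), y_id)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()
    # 2) main model: reconstruct, classify expression, and fool the adversary
    recon = F.mse_loss(dec(z), x)
    expr = F.cross_entropy(expr_head(z), y_expr)
    fool = -F.cross_entropy(id_adv(z), y_id)  # maximize the adversary's error
    loss = recon + expr + 0.1 * fool          # 0.1 is an assumed weight
    opt_main.zero_grad()
    loss.backward()
    opt_main.step()
    return loss.item()
```

After training, z can be handed to downstream expression tasks while the identity adversary's accuracy serves as a rough proxy for the privacy threat scenarios the paper evaluates.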