Distinguishing Posed and Spontaneous Smiles by Facial Dynamics
A smile is one of the key cues for identifying the emotions and present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow, and histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used to amplify micro-expression smiles, along with three normalization procedures for distinguishing posed from spontaneous smiles. Although the deep CNN face model is trained on a large number of face images, HOG features outperform this model on the overall face smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features improved it. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles as either 'spontaneous' or 'posed' using support vector machines (SVMs). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods.
Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
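Since the HOG-plus-SVM branch was the strongest performer in this abstract, a minimal sketch of that kind of pipeline may help. The feature parameters, the synthetic stand-in data, and the hog_descriptor helper below are illustrative assumptions, not the authors' exact settings.

```python
# A minimal sketch of a HOG + linear-SVM smile classifier, assuming
# pre-cropped grayscale face images. Parameters are illustrative only.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_descriptor(face_gray):
    """Compute a HOG descriptor for one grayscale face crop."""
    return hog(face_gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

rng = np.random.default_rng(0)
# Stand-in data: 20 random 64x64 "faces"; 1 = spontaneous, 0 = posed.
faces = rng.random((20, 64, 64))
labels = rng.integers(0, 2, size=20)

X = np.stack([hog_descriptor(f) for f in faces])
clf = SVC(kernel='linear').fit(X, labels)
print(clf.predict(X[:3]))
```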
A dynamic texture based approach to recognition of facial actions and their temporal models
In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domain. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination, in 264 sequences from the MMI facial expression database, the proposed method achieved an average event recognition accuracy of 89.2 percent for the MHI method and 94.3 percent for the FFD method. The generalization performance of the FFD method has been tested using the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
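The Motion History Image representation compared above has a simple update rule: pixels that moved in the current frame are stamped with a maximum timestamp, and previously stamped pixels decay by one step per frame. A minimal sketch follows; the motion threshold, the decay constant tau, and the stand-in frames are assumptions for illustration.

```python
# A minimal sketch of the classic Motion History Image (MHI) update rule.
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=15, motion_thresh=0.1):
    """Update an MHI given two consecutive grayscale frames."""
    moved = np.abs(frame.astype(float) - prev_frame.astype(float)) > motion_thresh
    # Moving pixels get the max timestamp tau; others decay toward zero.
    return np.where(moved, tau, np.maximum(mhi - 1, 0))

rng = np.random.default_rng(0)
frames = rng.random((10, 48, 48))          # stand-in face-region video
mhi = np.zeros((48, 48))
for prev, cur in zip(frames, frames[1:]):
    mhi = update_mhi(mhi, prev, cur)
print(mhi.max(), mhi.min())                # recent motion = high values
```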
Spontaneous Subtle Expression Detection and Recognition based on Facial Strain
Optical strain is an extension of optical flow that is capable of quantifying
subtle changes on faces and representing the minute facial motion intensities
at the pixel level. This is computationally essential for the relatively new
field of spontaneous micro-expression analysis, where subtle expressions can be
technically challenging to pinpoint. In this paper, we present a novel method
for detecting and recognizing micro-expressions by utilizing facial optical
strain magnitudes to construct optical strain features and optical strain
weighted features. The two sets of features are then concatenated to form the
resultant feature histogram. Experiments were performed on the CASME II and
SMIC databases. On both databases, we demonstrate the usefulness of optical strain information and, more importantly, that our best approaches outperform the original baseline results for both detection and recognition tasks. A comparison of the proposed method with other existing spatio-temporal feature extraction approaches is also presented.
Comment: 21 pages (including references), single-column format, accepted to Signal Processing: Image Communication journal
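As a rough illustration of how dense optical flow can be turned into per-pixel optical strain magnitudes, the sketch below computes Farneback flow and a finite-difference strain tensor. The flow estimator, its parameters, the stand-in frames, and the specific magnitude formula are assumptions here; the paper's exact formulation may differ.

```python
# A minimal sketch: dense optical flow -> per-pixel optical strain.
import numpy as np
import cv2

rng = np.random.default_rng(0)
prev = (rng.random((64, 64)) * 255).astype(np.uint8)   # stand-in frames
curr = np.roll(prev, 1, axis=1)                        # fake 1-px motion

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
u, v = flow[..., 0], flow[..., 1]
du_dy, du_dx = np.gradient(u)
dv_dy, dv_dx = np.gradient(v)
exy = 0.5 * (du_dy + dv_dx)                # shear component of the tensor
# Per-pixel magnitude of the symmetric strain tensor.
strain = np.sqrt(du_dx**2 + dv_dy**2 + 2.0 * exy**2)
print(strain.mean())
```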
Relative Facial Action Unit Detection
This paper presents a subject-independent facial action unit (AU) detection
method by introducing the concept of relative AU detection, for scenarios where
the neutral face is not provided. We propose a new classification objective
function which analyzes the temporal neighborhood of the current frame to
decide if the expression recently increased, decreased or showed no change.
This approach is a significant change from the conventional absolute method
which decides about AU classification using the current frame, without an
explicit comparison with its neighboring frames. Our proposed method improves
robustness to individual differences such as face scale and shape, age-related
wrinkles, and transitions among expressions (e.g., lower intensity of
expressions). Our experiments on three publicly available datasets (Extended
Cohn-Kanade (CK+), Bosphorus, and DISFA databases) show significant improvement
of our approach over conventional absolute techniques.
Keywords: facial action coding system (FACS); relative facial action unit detection; temporal information
Comment: Accepted at the IEEE Winter Conference on Applications of Computer
Vision (WACV), Steamboat Springs, Colorado, USA, 2014
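The core "relative" idea can be illustrated without the learned objective: compare the current frame's AU score against its temporal neighborhood and emit increased / decreased / no change. The window size, the margin, and the stand-in scores below are illustrative assumptions, not the paper's classifier.

```python
# A minimal sketch of relative (neighborhood-based) AU change labeling.
import numpy as np

def relative_labels(scores, window=3, margin=0.05):
    """Compare each frame to the mean of the preceding window."""
    labels = []
    for t in range(window, len(scores)):
        delta = scores[t] - np.mean(scores[t - window:t])
        if delta > margin:
            labels.append('increased')
        elif delta < -margin:
            labels.append('decreased')
        else:
            labels.append('no change')
    return labels

scores = [0.1, 0.1, 0.12, 0.3, 0.5, 0.52, 0.4, 0.2]  # stand-in AU scores
print(relative_labels(scores))
```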
Less is More: Micro-expression Recognition from Video using Apex Frame
Despite recent interest and advances in facial micro-expression research,
there is still plenty of room for improvement in micro-expression recognition. Conventional feature extraction approaches for micro-expression video consider either the whole video sequence or part of it for
representation. However, with the high-speed video capture of micro-expressions
(100-200 fps), are all frames necessary to provide a sufficiently meaningful
representation? Is the luxury of data a bane to accurate recognition? A novel
proposition is presented in this paper, whereby we utilize only two images per
video: the apex frame and the onset frame. The apex frame of a video contains
the highest intensity of expression changes among all frames, while the onset
is the perfect choice of reference frame, with a neutral expression. A new
feature extractor, Bi-Weighted Oriented Optical Flow (Bi-WOOF), is proposed to
encode essential expressiveness of the apex frame. We evaluated the proposed
method on five micro-expression databases: CAS(ME)^2, CASME II, SMIC-HS,
SMIC-NIR and SMIC-VIS. Our experiments lend credence to our hypothesis, with
our proposed technique achieving a state-of-the-art F1-score recognition
performance of 61% and 62% in the high frame rate CASME II and SMIC-HS
databases, respectively.
Comment: 14 pages, double-column; author affiliations updated; acknowledgment of grant support added
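A loose stand-in for the two-frame idea: spot the apex as the frame differing most from the onset, then summarize onset-to-apex motion with a magnitude-weighted orientation histogram. This is not the authors' Bi-WOOF descriptor; the apex-spotting heuristic, the flow estimator, the bin count, and the stand-in clip are all assumptions.

```python
# A minimal sketch: onset + apex frames -> weighted orientation histogram.
import numpy as np
import cv2

rng = np.random.default_rng(0)
video = (rng.random((30, 64, 64)) * 255).astype(np.uint8)  # stand-in clip

onset = video[0]
diffs = [np.abs(f.astype(int) - onset.astype(int)).sum() for f in video]
apex = video[int(np.argmax(diffs))]          # frame farthest from onset

flow = cv2.calcOpticalFlowFarneback(onset, apex, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
# Orientation histogram where each pixel votes with its flow magnitude.
hist, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi), weights=mag)
print(hist / (hist.sum() + 1e-8))
```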
Learning Temporal Alignment Uncertainty for Efficient Event Detection
In this paper we tackle the problem of efficient video event detection. We
argue that linear detection functions should be preferred in this regard, owing to their scalability and efficiency during estimation and evaluation. A popular approach is to represent a sequence using a bag-of-words (BOW) representation, due to (i) its fixed dimensionality irrespective of sequence length, and (ii) its ability to compactly model the statistics of the
sequence. A drawback to the BOW representation, however, is the intrinsic
destruction of the temporal ordering information. In this paper we propose a
new representation that leverages the uncertainty in relative temporal
alignments between pairs of sequences while not destroying temporal ordering.
Our representation, like BOW, is of fixed dimensionality, making it easy to
integrate with a linear detection function. Extensive experiments on the CK+,
6DMG, and UvA-NEMO databases show significant performance improvements across
both isolated and continuous event detection tasks.
Comment: Appeared in DICTA 2015, 8 pages
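The BOW baseline the abstract contrasts against can be sketched in a few lines: quantize per-frame descriptors with a learned codebook and pool them into a fixed-length histogram, which makes the dimensionality independent of sequence length but discards temporal order. The codebook size and stand-in descriptors below are assumptions.

```python
# A minimal sketch of a bag-of-words (BOW) sequence representation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
codebook = KMeans(n_clusters=8, n_init=10, random_state=0)
codebook.fit(rng.random((500, 16)))          # stand-in training descriptors

def bow_histogram(sequence):
    """Map a (T, 16) descriptor sequence to a fixed 8-bin histogram."""
    words = codebook.predict(sequence)
    hist = np.bincount(words, minlength=8).astype(float)
    return hist / hist.sum()                 # ordering of frames is lost here

short_seq = rng.random((20, 16))
long_seq = rng.random((200, 16))
print(bow_histogram(short_seq).shape, bow_histogram(long_seq).shape)  # both (8,)
```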