Distinguishing Posed and Spontaneous Smiles by Facial Dynamics
The smile is one of the key elements in identifying emotions and the present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow and histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained with a large number of face images, HOG features outperform this model on the overall face smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features improved it. Unlike many manual or semi-automatic methodologies, our approach automatically classifies all smiles as either 'spontaneous' or 'posed' using support vector machines (SVM). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods.
Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
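As a rough illustration of the HOG-plus-SVM stage of such a pipeline (this is not the authors' implementation; the cell size, bin count, and synthetic face crops below are assumptions made for the sketch):

```python
import numpy as np
from sklearn.svm import SVC

def hog_features(img, cell=8, bins=9):
    """Simplified histogram-of-oriented-gradients descriptor:
    per-cell orientation histograms weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# Stand-in data: a real pipeline would use aligned face crops and labels
# from a smile database such as UvA-NEMO.
rng = np.random.default_rng(0)
faces = rng.random((20, 64, 64))
labels = rng.integers(0, 2, size=20)  # 0 = posed, 1 = spontaneous

X = np.stack([hog_features(f) for f in faces])  # 8x8 cells * 9 bins = 576 dims
clf = SVC(kernel="linear").fit(X, labels)
pred = clf.predict(X)
```

Here a linear kernel stands in for whatever SVM configuration the authors tuned; the LPQ, optical-flow, and CNN feature streams described in the abstract would be classified or fused analogously.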
Contrasting Experimentally Device-Manipulated and Device-Free Smiles.
Researchers in psychology have long been interested not only in studying smiles, but in examining the downstream effects of experimentally manipulated smiles. To manipulate smiles unobtrusively, participants typically hold devices (e.g., pens or chopsticks) in their mouths in a manner that activates the muscles involved in smiling. Surprisingly, despite decades of research using these methods, no study has tested to what degree they activate the same muscles as more natural, device-free smiles. Our study fills this gap in the literature by contrasting the magnitude of muscle activation in device-free smiles against the popular chopstick/pen manipulation. We also contrast these methods against the Smile Stick, a new device specifically designed to manipulate smiles in a comfortable and hygienic fashion. One hundred fifty-nine participants each completed three facial expression manipulations, each held for 1 min: smile manipulation via Smile Stick, smile manipulation via chopsticks, and a device-free smile. Facial electromyography was used to measure the intensity of activation of the two main muscle groups involved in genuine, Duchenne smiling: the orbicularis oculi (around the eyes) and the zygomaticus major (in the cheeks). Following each manipulation, participants rated their experience of the manipulation (i.e., comfort, fatigue, and difficulty), experienced affect (positive and negative), and levels of arousal. Results indicated that the Smile Stick and chopsticks performed equally across all measurements. Device-free smiles were rated as the most comfortable but also the most fatiguing, and produced the greatest levels of positive affect and the lowest levels of negative affect. Furthermore, device-free smiles resulted in significantly higher activation of both the zygomaticus major (by ∼40%) and the orbicularis oculi (by ∼15%) than either the Smile Stick or chopsticks; the two devices did not differ from each other in muscle activation. This study reveals that while device-free smiling produces the greatest changes in muscle activation and affect, device manipulations do activate the smiling muscle groups and do produce the expected changes in affect, albeit to a lesser degree. It also indicates that the Smile Stick is an acceptable and comparable alternative to disposable chopsticks.
Less is More: Micro-expression Recognition from Video using Apex Frame
Despite recent interest and advances in facial micro-expression research,
there is still plenty of room for improvement in micro-expression
recognition. Conventional feature extraction approaches for micro-expression
videos consider either the whole video sequence or a part of it for
representation. However, with the high-speed video capture of micro-expressions
(100-200 fps), are all frames necessary to provide a sufficiently meaningful
representation? Is the luxury of data a bane to accurate recognition? A novel
proposition is presented in this paper, whereby we utilize only two images per
video: the apex frame and the onset frame. The apex frame of a video contains
the highest intensity of expression changes among all frames, while the onset
is the perfect choice of a reference frame with neutral expression. A new
feature extractor, Bi-Weighted Oriented Optical Flow (Bi-WOOF) is proposed to
encode essential expressiveness of the apex frame. We evaluated the proposed
method on five micro-expression databases: CAS(ME)², CASME II, SMIC-HS,
SMIC-NIR and SMIC-VIS. Our experiments lend credence to our hypothesis, with
our proposed technique achieving a state-of-the-art F1-score recognition
performance of 61% and 62% in the high frame rate CASME II and SMIC-HS
databases, respectively.
Comment: 14 pages double-column, author affiliations updated, acknowledgment of grant support added
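The onset/apex idea can be sketched as a magnitude-weighted orientation histogram of an approximate optical flow field between the two frames. This is not the Bi-WOOF implementation: the cheap "normal flow" approximation and the bin count below are simplifying assumptions for illustration.

```python
import numpy as np

def flow_orientation_histogram(onset, apex, bins=8, eps=1e-6):
    """Magnitude-weighted orientation histogram of the approximate
    flow between the neutral onset frame and the expression apex.
    Flow is estimated with the normal-flow approximation
    v = -I_t * grad(I) / |grad(I)|^2 (an assumption of this sketch)."""
    gy, gx = np.gradient(apex.astype(float))
    it = apex.astype(float) - onset.astype(float)  # temporal difference
    denom = gx ** 2 + gy ** 2 + eps
    vx = -it * gx / denom
    vy = -it * gy / denom
    mag = np.hypot(vx, vy)
    ang = np.arctan2(vy, vx) % (2.0 * np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 2.0 * np.pi),
                           weights=mag)
    return hist / (hist.sum() + eps)  # normalized descriptor

# Toy frames: the "apex" shifts a bright blob one pixel to the right,
# mimicking a small facial movement between onset and apex.
onset = np.zeros((32, 32))
onset[14:18, 14:18] = 1.0
apex = np.roll(onset, 1, axis=1)
h = flow_orientation_histogram(onset, apex)
```

A descriptor of this kind, computed per region of the face and fed to a classifier, captures the spirit of using only two frames per video; the actual Bi-WOOF weighting scheme differs in its details.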
Machine Analysis of Facial Expressions
No abstract