
    Objective Classes for Micro-Facial Expression Recognition

    Micro-expressions are brief, spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different from normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset are based on Action Units and self-reports, creating conflicts during machine learning training. We show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP, HOOF and HOG 3D feature descriptors. The experiments are evaluated on two benchmark FACS-coded datasets: CASME II and SAMM. The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the state-of-the-art 5-class emotion-based classification on CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition. Comment: 11 pages, 4 figures and 5 tables. This paper will be submitted for journal review.
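    The core idea here is a relabelling step: samples are assigned to classes by their FACS Action Units rather than by self-reported emotion. A minimal sketch of such a relabelling is below; the AU groupings are illustrative placeholders, not the objective classes actually defined in the paper.

```python
# Hypothetical sketch: relabel micro-expression samples by Action Units (AUs)
# instead of self-reported emotion. The AU groups below are invented for
# illustration only and do not reproduce the paper's objective classes.

def objective_class(action_units, au_groups):
    """Return the first class whose AU set intersects the sample's AUs."""
    aus = set(action_units)
    for label, group in au_groups.items():
        if aus & group:
            return label
    return "others"

# Illustrative grouping only (assumption): real classes come from the paper's AU analysis.
AU_GROUPS = {
    "class_1": {1, 2},    # e.g. brow-raising AUs
    "class_2": {4, 7},    # e.g. brow lowerer / lid tightener
    "class_3": {12},      # e.g. lip corner puller
    "class_4": {14, 15},  # e.g. dimpler / lip corner depressor
}

samples = [{"id": "sub01_ep01", "aus": [4, 7]}, {"id": "sub02_ep03", "aus": [12]}]
relabelled = {s["id"]: objective_class(s["aus"], AU_GROUPS) for s in samples}
print(relabelled)
```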

    Less is More: Micro-expression Recognition from Video using Apex Frame

    Despite recent interest and advances in facial micro-expression research, there is still plenty of room for improvement in micro-expression recognition. Conventional feature extraction approaches for micro-expression video consider either the whole video sequence or a part of it for representation. However, with the high-speed video capture of micro-expressions (100-200 fps), are all frames necessary to provide a sufficiently meaningful representation? Is the luxury of data a bane to accurate recognition? A novel proposition is presented in this paper, whereby we utilize only two images per video: the apex frame and the onset frame. The apex frame of a video contains the highest intensity of expression change among all frames, while the onset frame is a natural choice of reference frame with a neutral expression. A new feature extractor, Bi-Weighted Oriented Optical Flow (Bi-WOOF), is proposed to encode the essential expressiveness of the apex frame. We evaluated the proposed method on five micro-expression databases: CAS(ME)^2, CASME II, SMIC-HS, SMIC-NIR and SMIC-VIS. Our experiments lend credence to our hypothesis, with the proposed technique achieving state-of-the-art F1-score recognition performance of 61% and 62% on the high frame rate CASME II and SMIC-HS databases, respectively. Comment: 14 pages double-column, author affiliations updated, acknowledgment of grant support added.
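    The essential step is computing optical flow between just the onset (neutral) frame and the apex frame and summarising it as an orientation histogram weighted by flow magnitude. The sketch below is a simplified stand-in using OpenCV's Farneback flow, not the exact Bi-WOOF descriptor; the file paths and histogram size are assumptions.

```python
# Simplified onset/apex sketch (not the exact Bi-WOOF descriptor): dense optical flow
# between the onset and apex frames, summarised as a magnitude-weighted orientation histogram.
import cv2
import numpy as np

def flow_orientation_histogram(onset_path, apex_path, bins=8):
    onset = cv2.imread(onset_path, cv2.IMREAD_GRAYSCALE)
    apex = cv2.imread(apex_path, cv2.IMREAD_GRAYSCALE)
    # Farneback dense optical flow between the two frames.
    flow = cv2.calcOpticalFlowFarneback(onset, apex, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Weight each orientation bin by flow magnitude, echoing Bi-WOOF's weighting idea.
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)
```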

    Sparse MDMO: learning a discriminative feature for micro-expression recognition

    Micro-expressions are rapid movements of facial muscles that can reveal concealed emotions. Recognizing them from video clips has a wide range of applications and has received increasing attention recently. Among existing methods, the main directional mean optical-flow (MDMO) feature achieves state-of-the-art performance for recognizing spontaneous micro-expressions. For a video clip, the MDMO feature is computed by averaging a set of atomic features frame by frame. Despite its simplicity, the averaging operation in MDMO can easily lose the underlying manifold structure inherent in the feature space. In this paper we propose a sparse MDMO feature that learns an effective dictionary from a micro-expression video dataset. In particular, a new distance metric is proposed based on the sparsity of sample points in the MDMO feature space, which can efficiently reveal the underlying manifold structure. The proposed sparse MDMO feature is obtained by incorporating this new metric into the classic graph regularized sparse coding (GraphSC) scheme. We evaluate sparse MDMO and four representative features (LBP-TOP, STCLQP, MDMO and FDM) on three spontaneous micro-expression datasets (SMIC, CASME and CASME II). The results show that sparse MDMO outperforms these representative features.
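    The key point is to replace plain frame averaging with sparse codes learned over a dictionary, so that the clip descriptor respects the structure of the feature space. A hedged sketch follows using scikit-learn's plain dictionary learning; the graph regularization term (GraphSC) and the paper's sparsity-based distance metric are omitted, and the feature dimensions are placeholders.

```python
# Simplified sketch of the sparse-coding step (graph regularization omitted):
# learn a dictionary over per-frame MDMO-style features, then pool sparse codes per clip.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
frame_features = rng.normal(size=(200, 36))   # placeholder: 200 frames x 36-dim MDMO-like features

dico = DictionaryLearning(n_components=48, transform_algorithm="lasso_lars",
                          transform_alpha=1.0, max_iter=50, random_state=0)
codes = dico.fit(frame_features).transform(frame_features)

# Pool sparse codes over a clip (e.g. mean) instead of averaging raw features,
# which is the step where plain MDMO can lose the manifold structure.
clip_descriptor = codes[:40].mean(axis=0)     # first 40 frames as one hypothetical clip
print(clip_descriptor.shape)
```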

    Four dimensions characterize comprehensive trait judgments of faces

    People readily attribute many traits to faces: some look beautiful, some competent, some aggressive. These snap judgments have important consequences in real life, ranging from success in political elections to decisions in courtroom sentencing. Modern psychological theories argue that the hundreds of different words people use to describe others from their faces are well captured by only two or three dimensions, such as valence and dominance, a highly influential framework that has been the basis for numerous studies in social and developmental psychology, social neuroscience, and engineering applications. However, all prior work has used only a small number of words (12 to 18) to derive underlying dimensions, limiting conclusions to date. Here we employed deep neural networks to select a comprehensive set of 100 words that are representative of the trait words people use to describe faces, and to select a set of 100 faces. In two large-scale, preregistered studies we asked participants to rate the 100 faces on the 100 words (obtaining 2,850,000 ratings from 1,710 participants), and discovered a novel set of four psychological dimensions that best explain trait judgments of faces: warmth, competence, femininity, and youth. We reproduced these four dimensions across different regions around the world, in both aggregated and individual-level data. These results provide the most comprehensive characterization of face judgments to date, and reconcile prior work on face perception with work in social cognition and personality psychology.
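    The analytic core is finding a small number of latent dimensions that explain a faces-by-words rating matrix. The sketch below only illustrates that low-dimensional-structure idea with PCA on simulated ratings; the paper's actual pipeline (word selection with deep networks, preregistered studies, cross-cultural replication) is far richer and is not reproduced here.

```python
# Illustrative only: factor a 100 faces x 100 trait-words rating matrix into a few
# latent dimensions. Real data and the paper's exact factor-analysis choices are assumed away.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
ratings = rng.normal(size=(100, 100))      # placeholder: 100 faces rated on 100 trait words

pca = PCA(n_components=4)
face_scores = pca.fit_transform(ratings)   # each face as a point in 4 latent dimensions
word_loadings = pca.components_            # how each trait word loads on the 4 dimensions
print(pca.explained_variance_ratio_)
```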

    Micro-expression recognition based on optical flow features and improved MobileNetV2

    When a person tries to conceal emotions, real emotions manifest themselves in the form of micro-expressions. Facial micro-expression recognition remains extremely challenging in the field of pattern recognition, because it is difficult to find a feature extraction method that copes with the small changes and short duration of micro-expressions. Most methods rely on hand-crafted features to extract subtle facial movements. In this study, we introduce a method that combines optical flow and deep learning. First, we extract the onset frame and the apex frame from each video sequence. Then, the motion features between these two frames are extracted using the optical flow method. Finally, the features are fed into an improved MobileNetV2 model, and an SVM is applied to classify expressions. To evaluate the effectiveness of the method, we conduct experiments on the public spontaneous micro-expression database CASME II. Under leave-one-subject-out cross-validation, the recognition accuracy reaches 53.01% and the F-score reaches 0.5231. The results show that the proposed method can significantly improve micro-expression recognition performance.
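    A rough sketch of this pipeline is given below, under assumed inputs: optical-flow maps already computed between onset and apex frames, a stock MobileNetV2 backbone used as a feature extractor, an SVM classifier, and leave-one-subject-out evaluation. The paper's specific improvements to MobileNetV2 are not reproduced; all data here are placeholders.

```python
# Sketch: MobileNetV2 features over onset->apex optical-flow images, SVM classifier,
# leave-one-subject-out cross-validation. Backbone is untrained here (weights=None);
# a real run would load pretrained or fine-tuned weights.
import numpy as np
import torch
from torchvision.models import mobilenet_v2
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

backbone = mobilenet_v2(weights=None).features.eval()

def embed(flow_maps):
    """flow_maps: (N, 3, 224, 224) tensor of flow images -> (N, 1280) pooled features."""
    with torch.no_grad():
        feats = backbone(flow_maps)            # (N, 1280, 7, 7)
        return feats.mean(dim=(2, 3)).numpy()  # global average pooling

# Placeholder data: 30 samples, 5 subjects, 3 expression classes.
rng = np.random.default_rng(0)
flow_maps = torch.randn(30, 3, 224, 224)
labels = rng.integers(0, 3, size=30)
subjects = np.repeat(np.arange(5), 6)

features = embed(flow_maps)
scores = cross_val_score(SVC(kernel="linear"), features, labels,
                         groups=subjects, cv=LeaveOneGroupOut())
print(scores.mean())
```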

    SMEConvNet: A Convolutional Neural Network for Spotting Spontaneous Facial Micro-Expression from Long Videos

    Micro-expression is a subtle and involuntary facial expression that may reveal the hidden emotion of human beings. Spotting a micro-expression means locating the moment when it happens, which is a primary step for micro-expression recognition. Previous work on micro-expression spotting focuses on short videos and hand-crafted features. In this paper, we present a methodology for spotting micro-expressions in long videos. Specifically, a new convolutional neural network named SMEConvNet (Spotting Micro-Expression Convolutional Network) was designed for extracting features from video clips, the first time that deep learning has been used in micro-expression spotting. A feature matrix processing method was then proposed for spotting the apex frame in a long video, which uses a sliding window and takes the characteristics of micro-expressions into account to search for the apex frame. Experimental results demonstrate that the proposed method achieves better performance than existing state-of-the-art methods.
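    The spotting step boils down to sliding a window of roughly micro-expression length over per-frame responses and picking the strongest frame inside the strongest window. A minimal sketch of that search is below; SMEConvNet itself and the paper's exact feature-matrix processing are not reproduced, and the window length is an assumption.

```python
# Minimal apex-spotting sketch: slide a fixed-length window over per-frame responses
# (e.g. derived from a CNN feature matrix) and return the strongest frame in the best window.
import numpy as np

def spot_apex(frame_scores, window=65):
    """Return the candidate apex frame index in a long video.

    frame_scores: 1-D array, one response value per frame.
    window: assumed micro-expression length in frames (placeholder, e.g. ~0.3 s at 200 fps).
    """
    scores = np.asarray(frame_scores, dtype=float)
    # Sum of responses inside each sliding window, via a cumulative sum.
    csum = np.concatenate(([0.0], np.cumsum(scores)))
    window_sums = csum[window:] - csum[:-window]
    start = int(np.argmax(window_sums))
    # Apex = strongest individual response inside the best window.
    return start + int(np.argmax(scores[start:start + window]))

scores = np.random.rand(3000)   # placeholder per-frame responses from a long video
print(spot_apex(scores))
```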