9 research outputs found

    Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories

    In this paper, we propose a new approach for facial expression recognition using deep covariance descriptors. The solution is based on the idea of encoding local and global Deep Convolutional Neural Network (DCNN) features extracted from still images in compact local and global covariance descriptors. The space geometry of the covariance matrices is that of Symmetric Positive Definite (SPD) matrices. By conducting the classification of static facial expressions using a Support Vector Machine (SVM) with a valid Gaussian kernel on the SPD manifold, we show that deep covariance descriptors are more effective than the standard classification with fully connected layers and softmax. In addition, we propose a completely new and original solution to model the temporal dynamics of facial expressions as deep trajectories on the SPD manifold. As an extension of the classification pipeline of covariance descriptors, we apply SVM with valid positive definite kernels derived from global alignment for deep covariance trajectory classification. By performing extensive experiments on the Oulu-CASIA, CK+, and SFEW datasets, we show that both the proposed static and dynamic approaches achieve state-of-the-art performance for facial expression recognition, outperforming many recent approaches.
    Comment: A preliminary version of this work appeared in "Otberdout N, Kacem A, Daoudi M, Ballihi L, Berretti S. Deep Covariance Descriptors for Facial Expression Recognition, in British Machine Vision Conference 2018, BMVC 2018, Northumbria University, Newcastle, UK, September 3-6, 2018; 2018:159." arXiv admin note: substantial text overlap with arXiv:1805.0386
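    The static pipeline described above lends itself to a short sketch. The following is a minimal illustration, not the authors' exact implementation: it builds a covariance descriptor from a stack of DCNN feature maps, regularizes it to stay SPD, and compares descriptors with a log-Euclidean Gaussian kernel, one common choice of valid Gaussian kernel on the SPD manifold. The array shapes, the regularization constant eps, and the bandwidth gamma are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact pipeline): covariance descriptor from
# DCNN feature maps, compared with a log-Euclidean Gaussian kernel on the SPD manifold.
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(feature_maps, eps=1e-5):
    """feature_maps: (C, H, W) array of DCNN activations for one image.
    Returns a C x C SPD covariance descriptor."""
    C, H, W = feature_maps.shape
    X = feature_maps.reshape(C, H * W)      # each column is one spatial feature vector
    cov = np.cov(X)                         # C x C covariance of channel responses
    return cov + eps * np.eye(C)            # regularize to keep the matrix strictly SPD

def log_euclidean_gaussian_kernel(S1, S2, gamma=1e-2):
    """Gaussian kernel on SPD matrices using the log-Euclidean distance."""
    d = np.linalg.norm(logm(S1) - logm(S2), ord='fro')
    return np.exp(-gamma * d ** 2)

# Illustrative use with an SVM on a precomputed Gram matrix:
# descriptors = [covariance_descriptor(f) for f in all_feature_maps]
# G = np.array([[log_euclidean_gaussian_kernel(a, b) for b in descriptors]
#               for a in descriptors])
# from sklearn.svm import SVC
# clf = SVC(kernel='precomputed').fit(G, labels)
```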

    Facial Expression Recognition Using New Feature Extraction Algorithm

    This paper proposes a method for facial expression recognition. Facial feature vectors are generated from keypoint descriptors using Speeded-Up Robust Features (SURF). Each facial feature vector is then normalized, and a probability density function descriptor is generated from it. The distance between two probability density function descriptors is calculated using the Kullback-Leibler divergence. A mathematical criterion is employed to select suitable probability density function descriptors for each grid, which are used for the initial classification. Subsequently, the corresponding weight of the class for each grid is determined using a weighted majority voting classifier. The class with the largest weight is output as the recognition result. The proposed method shows excellent performance when applied to the Japanese Female Facial Expression (JAFFE) database.
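    As a rough illustration of the distance and voting steps described above (the descriptor construction, the selection rule, and the per-grid weights are assumptions here, not the paper's exact formulation), one can compare normalized probability density function descriptors with the Kullback-Leibler divergence and fuse per-grid decisions by weighted majority voting:

```python
# Illustrative sketch: KL divergence between normalized PDF descriptors and
# weighted majority voting over per-grid class decisions.
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL divergence D(p || q) between two discrete PDF descriptors."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def classify_grid(descriptor, references):
    """Assign the descriptor to the class of the nearest reference descriptor (per grid)."""
    return min(references, key=lambda c: kl_divergence(descriptor, references[c]))

def weighted_majority_vote(grid_predictions, grid_weights):
    """grid_predictions: one class label per grid; grid_weights: one weight per grid."""
    scores = {}
    for label, w in zip(grid_predictions, grid_weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)   # class with the largest accumulated weight
```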

    Dheergayu: Clinical Depression Monitoring Assistant

    Depression is identified as one of the most common mental health disorders in the world. Depression impacts not only the patient but also their families and relatives, and if not properly treated it can lead people into hazardous situations. Nonetheless, existing clinical diagnostic tools for monitoring the illness trajectory are inadequate. Traditionally, psychiatrists use one-to-one interaction assessments to diagnose depression levels. However, these clinic-centered services can pose several operational challenges: in order to monitor clinical depressive disorders, patients are required to travel regularly to a clinical center within its limited operating hours, and these procedures are highly resource intensive because they require skilled clinicians and laboratories. To address these issues, we propose using personal and ubiquitous sensing technologies, such as fitness trackers and smartphones, which can monitor human vitals in an unobtrusive manner.

    Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories

    In this paper, we propose a new approach for facial expression recognition using deep covariance descriptors. The solution is based on the idea of encoding local and global Deep Convolutional Neural Network (DCNN) features extracted from still images in compact local and global covariance descriptors. The space geometry of the covariance matrices is that of Symmetric Positive Definite (SPD) matrices. By conducting the classification of static facial expressions using a Support Vector Machine (SVM) with a valid Gaussian kernel on the SPD manifold, we show that deep covariance descriptors are more effective than the standard classification with fully connected layers and softmax. In addition, we propose a completely new and original solution to model the temporal dynamics of facial expressions as deep trajectories on the SPD manifold. As an extension of the classification pipeline of covariance descriptors, we apply SVM with valid positive definite kernels derived from global alignment for deep covariance trajectory classification. By performing extensive experiments on the Oulu-CASIA, CK+, SFEW, and AFEW datasets, we show that both the proposed static and dynamic approaches achieve state-of-the-art performance for facial expression recognition, outperforming many recent approaches.
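    For the dynamic part, the abstract mentions kernels derived from global alignment for comparing trajectories of covariance descriptors. The sketch below uses the basic dynamic-programming form of a global alignment kernel with a log-Euclidean Gaussian local kernel; it is only an illustration of the idea, not the paper's exact kernel, and the bandwidth gamma is an assumed parameter.

```python
# Rough sketch: global alignment kernel between two trajectories (sequences) of
# SPD covariance descriptors, using a log-Euclidean Gaussian local kernel.
import numpy as np
from scipy.linalg import logm

def local_kernel(S1, S2, gamma=1e-2):
    """Gaussian kernel between two SPD matrices via the log-Euclidean distance."""
    d = np.linalg.norm(logm(S1) - logm(S2), ord='fro')
    return np.exp(-gamma * d ** 2)

def global_alignment_kernel(traj_a, traj_b, gamma=1e-2):
    """Global alignment kernel between two sequences of SPD matrices,
    computed with the standard dynamic-programming recursion."""
    n, m = len(traj_a), len(traj_b)
    M = np.zeros((n + 1, m + 1))
    M[0, 0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            k_ij = local_kernel(traj_a[i - 1], traj_b[j - 1], gamma)
            M[i, j] = k_ij * (M[i - 1, j] + M[i - 1, j - 1] + M[i, j - 1])
    return M[n, m]

# The resulting Gram matrix over a set of trajectories can again be fed to an
# SVM with a precomputed kernel, mirroring the static-descriptor pipeline.
```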

    Face and Body gesture recognition for a vision-based multimodal analyser

    Computers should be able to recognize emotions by analyzing the human's affective state, physiology, and behavior. In this paper, we present a survey of research conducted on face and body gesture recognition. In order to make human-computer interfaces truly natural, we need to develop technology that tracks human movement, body behavior, and facial expression, and interprets these movements in an affective way. Accordingly, in this paper we present a framework for a vision-based multimodal analyzer that combines face and body gestures, and we further discuss relevant issues.

    Facial expression recognition from static images

    No full text

    Improved facial expression recognition with trainable 2-D filters and support vector machines

    Facial expression is one way humans convey their emotional states. Accurate recognition of facial expressions is essential in perceptual human-computer interfaces, robotics, and mimetic games. This paper presents a novel approach to facial expression recognition from static images that combines fixed and adaptive 2-D filters in a hierarchical structure. The fixed filters are used to extract primitive features. They are followed by the adaptive filters, which are trained to extract more complex facial features. Both types of filters are non-linear and are based on the biological mechanism of shunting inhibition. The features are finally classified by a support vector machine. The proposed approach is evaluated on the JAFFE database with seven types of facial expressions: anger, disgust, fear, happiness, neutral, sadness, and surprise. It achieves a classification rate of 96.7%, which compares favorably with several existing techniques for facial expression recognition tested on the same database.
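    As a loose illustration of the shunting-inhibition idea behind both filter types (the paper's exact fixed and adaptive filter design is not reproduced here; the kernels, constants, and the absolute-value activation below are assumptions), a shunting-style 2-D filter divides an excitatory convolution response by a positive inhibitory one:

```python
# Minimal sketch of one common shunting-inhibition-style 2-D filter response.
import numpy as np
from scipy.signal import convolve2d

def shunting_filter(image, w_exc, w_inh, a=1.0, b=0.0, d=0.0):
    """Divide an excitatory filter response by a (positive) inhibitory response."""
    exc = convolve2d(image, w_exc, mode='same') + b
    inh = np.abs(convolve2d(image, w_inh, mode='same') + d)   # keep denominator non-negative
    return exc / (a + inh)                                     # a > 0 avoids division by zero

# Feature vectors built from such filter responses (e.g., downsampled and
# concatenated) could then be classified with an SVM, as in the paper's final stage.
```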