
    Web-based database for facial expression analysis

    In the last decade, automatic analysis of facial expressions has become a central topic in machine vision research. Nonetheless, there is a glaring lack of a comprehensive, readily accessible reference set of face images that could serve as a basis for benchmarks in the field. This lack of an easily accessible, suitable, common testing resource is the major impediment to comparing and extending work on automatic facial expression analysis. In this paper, we discuss a number of issues that make the problem of creating a benchmark facial expression database difficult. We then present the MMI Facial Expression Database, which includes more than 1500 samples of both static images and image sequences of faces in frontal and in profile view, displaying various expressions of emotion and single and multiple facial muscle activations. It has been built as a web-based direct-manipulation application, allowing easy access to and easy search of the available images. This database represents the most comprehensive reference set of images for studies on facial expression analysis to date.

    3-D facial expression representation using statistical shape models

    This poster describes a methodology for facial expression representation using 3-D/4-D data, based on statistical shape modelling. The proposed method uses a shape space vector to model surface deformations, and a modified iterative closest point (ICP) method to calculate the point correspondence between surfaces. The shape space vector is constructed using principal component analysis (PCA) computed for typical surfaces represented in a training data set. It is shown that the calculated shape space vector can be used as a significant feature for subsequent facial expression classification. Comprehensive 3-D/4-D face data sets have been used for building the deformation models and for testing; these include 3-D synthetic data generated with the FaceGen Modeller® software, 3-D facial expression data captured by a static 3-D scanner in the BU-3DFE database, and 3-D video sequences captured at the ADSIP research centre using a 3dMD® dynamic 3-D scanner.
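    The shape space construction described above can be sketched as PCA over flattened vertex coordinates. This is a hedged illustration, not the authors' code: point correspondence (the modified ICP step) is assumed already solved, and all function names are invented for the sketch.

```python
import numpy as np

def build_shape_space(surfaces, n_modes=2):
    """Build a PCA shape space from registered training surfaces.

    `surfaces` is an (n_samples, n_points, 3) array of corresponding
    3-D vertices (correspondence assumed already solved, e.g. via ICP).
    Returns the mean shape and the first `n_modes` deformation modes.
    """
    X = surfaces.reshape(len(surfaces), -1)   # flatten each surface to a vector
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal components of the centred data via SVD
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_modes]

def shape_space_vector(surface, mean, modes):
    """Project a new surface onto the deformation modes."""
    return modes @ (surface.ravel() - mean)
```

    Classification would then operate on the returned shape space vector rather than on raw vertices.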

    SDFE-LV: A Large-Scale, Multi-Source, and Unconstrained Database for Spotting Dynamic Facial Expressions in Long Videos

    In this paper, we present a large-scale, multi-source, and unconstrained database called SDFE-LV for spotting the onset and offset frames of a complete dynamic facial expression in long videos, a task known as dynamic facial expression spotting (DFES) and a vital prior step for many facial expression analysis tasks. Specifically, SDFE-LV consists of 1,191 long videos, each of which contains one or more complete dynamic facial expressions. Moreover, each complete dynamic facial expression in its corresponding long video was independently labeled five times by 10 well-trained annotators. To the best of our knowledge, SDFE-LV is the first unconstrained large-scale database for the DFES task whose long videos are collected from multiple real-world or close-to-real-world media sources, e.g., TV interviews, documentaries, movies, and we-media short videos. Consequently, DFES on the SDFE-LV database encounters numerous practical difficulties such as head pose changes, occlusions, and illumination variations. We also provide a comprehensive benchmark evaluation from different angles using many recent state-of-the-art deep spotting methods, so that researchers interested in DFES can quickly and easily get started. Finally, drawing on the experimental results, we point out several meaningful directions for tackling DFES tasks and hope that DFES can be better advanced in the future. In addition, SDFE-LV will be freely released for academic use only as soon as possible.
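    As a toy illustration of what DFES asks of a model, the interval-spotting step can be sketched as thresholding a per-frame expression score to recover (onset, offset) frame pairs. The score array and thresholds below are illustrative assumptions; this baseline is not one of the deep spotting methods benchmarked on SDFE-LV.

```python
import numpy as np

def spot_expressions(scores, threshold=0.5, min_len=3):
    """Return (onset, offset) frame pairs where a per-frame expression
    score stays above `threshold` for at least `min_len` frames.

    `scores` stands in for the output of a spotting network; the
    thresholding rule is an illustrative baseline only.
    """
    active = np.asarray(scores) > threshold
    # Rising and falling edges of the active mask mark onsets/offsets
    edges = np.diff(active.astype(int))
    onsets = np.flatnonzero(edges == 1) + 1
    offsets = np.flatnonzero(edges == -1)
    if active[0]:
        onsets = np.r_[0, onsets]
    if active[-1]:
        offsets = np.r_[offsets, len(active) - 1]
    return [(on, off) for on, off in zip(onsets, offsets)
            if off - on + 1 >= min_len]
```

    Real long videos make this far harder than the sketch suggests, which is exactly the gap the SDFE-LV benchmark is meant to expose.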

    Is 2D Unlabeled Data Adequate for Recognizing Facial Expressions?

    Automatic facial expression recognition is one of the important challenges for computer vision and machine learning. Despite many successes in recent years, several important problems remain unresolved. This paper describes a facial expression recognition system based on the random forest technique. In contrast to many previous methods, the proposed system uses only very simple landmark features, with a view to possible real-time implementation on low-cost portable devices. Both supervised and unsupervised variants of the method are presented. However, the main objective of the paper is to provide quantitative experimental evidence on more fundamental questions in facial articulation analysis, namely the relative significance of 3D information as opposed to 2D data alone, and the importance of labelled training data in supervised as opposed to unsupervised learning. Comprehensive experiments are performed on the BU-3DFE facial expression database. These experiments not only show the effectiveness of the described methods but also demonstrate that common assumptions about facial expression recognition are debatable.
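    The "very simple landmark features" can be illustrated with normalised pairwise distances between 2-D landmark coordinates. The exact landmark set and normalisation used in the paper are not specified here, so treat this as a hedged sketch of the general idea rather than the authors' feature extractor.

```python
import numpy as np

def landmark_distance_features(landmarks):
    """Compute scale-normalised pairwise distances between 2-D facial
    landmarks, a cheap geometric feature vector suitable as input to a
    classifier such as a random forest.

    `landmarks` is an (n_points, 2) array. Dividing by the mean distance
    makes the features invariant to face size in the image.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    n = len(landmarks)
    i, j = np.triu_indices(n, k=1)            # all unordered point pairs
    d = np.linalg.norm(landmarks[i] - landmarks[j], axis=1)
    return d / d.mean()                        # scale normalisation
```

    Such features are fast to compute, which is what makes a real-time low-cost implementation plausible.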

    Timing is everything: A spatio-temporal approach to the analysis of facial actions

    This thesis presents a fully automatic facial expression analysis system based on the Facial Action Coding System (FACS). FACS is the best known and most commonly used system for describing facial activity in terms of facial muscle actions (i.e., action units, AUs). We present our research on the analysis of the morphological, spatio-temporal and behavioural aspects of facial expressions. In contrast with most other researchers in the field, who use appearance-based techniques, we use a geometric feature-based approach. We argue that this approach is more suitable for analysing the temporal dynamics of facial expressions. Our system is capable of explicitly exploring the temporal aspects of facial expressions from an input colour video in terms of their onset (start), apex (peak) and offset (end). The fully automatic system presented here detects 20 facial points in the first frame and tracks them throughout the video. From the tracked points we compute geometry-based features which serve as the input to the remainder of the system. The AU activation detection system uses GentleBoost feature selection and a Support Vector Machine (SVM) classifier to determine which AUs are present in an expression. Temporal dynamics of active AUs are recognised by a hybrid GentleBoost-SVM-Hidden Markov model classifier. The system is capable of analysing 23 out of the 27 existing AUs with high accuracy. The main contributions of the work presented in this thesis are the following: we have created a method for fully automatic AU analysis with state-of-the-art recognition results; we have proposed, for the first time, a method for recognising the four temporal phases of an AU; we have built the largest comprehensive database of facial expressions to date; and we present, for the first time in the literature, two studies on the automatic distinction between posed and spontaneous expressions.
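    The onset/apex/offset decomposition can be illustrated on a single AU intensity curve. The thesis learns these phases with a GentleBoost-SVM-HMM hybrid; the threshold rule below is only a minimal stand-in to show what the four phase labels mean, and both thresholds are invented for the example.

```python
import numpy as np

def temporal_phases(intensity, onset_thresh=0.1, apex_frac=0.9):
    """Label each frame of an AU intensity curve as neutral, onset,
    apex, or offset.

    A frame is 'apex' when intensity is within `apex_frac` of the peak,
    'onset' when active and before the first apex frame, and 'offset'
    when active and after the last apex frame. Thresholds are
    illustrative, not learned as in the thesis.
    """
    x = np.asarray(intensity, dtype=float)
    active = x > onset_thresh
    apex = x >= apex_frac * x.max()
    first_apex, last_apex = np.flatnonzero(apex)[[0, -1]]
    idx = np.arange(len(x))
    labels = np.full(len(x), "neutral", dtype=object)
    labels[active & (idx < first_apex)] = "onset"
    labels[apex] = "apex"
    labels[active & (idx > last_apex)] = "offset"
    return labels
```

    A sequence model such as the HMM used in the thesis enforces the legal phase ordering (onset before apex before offset), which a per-frame rule cannot.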

    Facial Asymmetry Analysis Based on 3-D Dynamic Scans

    Facial dysfunction is a fundamental symptom that often relates to many neurological illnesses, such as stroke, Bell's palsy and Parkinson's disease. Current methods for detecting and assessing facial dysfunctions rely mainly on trained practitioners, which has significant limitations as the assessments are often subjective. This paper presents a computer-based methodology for facial asymmetry analysis which aims to automatically detect facial dysfunctions. The method is based on dynamic 3-D scans of human faces. Preliminary evaluation results on facial sequences from the Hi4D-ADSIP database suggest that the proposed method can assist in the quantification and diagnosis of facial dysfunctions in neurological patients.
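    One common way to quantify facial asymmetry is to mirror 3-D points about the vertical midline plane and measure how far the mirrored face deviates from the original. The paper works on dense dynamic scans; the landmark-based score below is a simplified, assumed stand-in to illustrate the idea.

```python
import numpy as np

def asymmetry_score(landmarks, midline_x=0.0):
    """Quantify facial asymmetry from 3-D landmarks.

    Each landmark is mirrored about the midline plane x = `midline_x`
    and matched to its nearest original landmark; the mean residual
    distance is the asymmetry score (0 for a perfectly symmetric face).
    This is an illustrative simplification of dense surface comparison.
    """
    P = np.asarray(landmarks, dtype=float)
    mirrored = P.copy()
    mirrored[:, 0] = 2 * midline_x - mirrored[:, 0]   # reflect x-coordinates
    # Distance from each mirrored point to its closest original point
    dists = np.linalg.norm(mirrored[:, None, :] - P[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```

    Tracking this score over a dynamic 3-D sequence would show whether asymmetry grows during specific facial actions, which is the kind of quantitative signal a clinical assessment could use.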