    On the estimation of face recognition system performance using image variability information

    The type and amount of variation among images in facial image datasets significantly affects Face Recognition System Performance (FRSP). This motivates the development of an appropriate image Variability Measure (VM) for face image datasets. Given such a measure, the relationship between the image variability characteristics of a facial image dataset and its expected FRSP values can be modeled. This paper therefore presents a novel method to quantify the overall data variability in a given face image dataset. The resulting VM, which accounts for both the inter- and intra-subject class correlation characteristics of the dataset, is then used to model FR system performance as a function of VM (FRSP/VM). Using eleven publicly available face image datasets and four well-known FR systems, computer-simulation-based experiments showed that FRSP/VM prediction errors are confined to the range of 0 to 10%.
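    The abstract does not reproduce the VM formula, but a minimal sketch of the underlying idea, assuming cosine similarity as the correlation measure and a simple additive combination of the intra- and inter-subject statistics (both assumptions, not the paper's exact definition), could look like this:

```python
import numpy as np

def variability_measure(images, labels):
    """images: (n_images, n_pixels) float array of vectorized face images;
    labels: 1-D integer array of subject ids, one per image."""
    X = images / np.linalg.norm(images, axis=1, keepdims=True)
    sims = X @ X.T                              # pairwise cosine similarities
    same = labels[:, None] == labels[None, :]   # same-subject pair mask
    diff = ~same                                # inter-subject pair mask
    np.fill_diagonal(same, False)               # exclude self-similarity
    intra = sims[same].mean()                   # intra-subject correlation
    inter = sims[diff].mean()                   # inter-subject correlation
    # Variability grows when subjects are less self-consistent (low intra)
    # and harder to tell apart (high inter); an assumed combination:
    return (1.0 - intra) + inter
```

    Given VM values for several datasets with known FRSP, a simple regression over the (VM, FRSP) pairs would then give the FRSP/VM predictor.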

    Predictive models for multibiometric systems

    Recognizing a subject given a set of biometrics is a fundamental pattern recognition problem. This paper builds novel statistical models for multibiometric systems using geometric and multinomial distributions. These models are generic, as they are based only on the similarity scores produced by a recognition system. They predict bounds on the range of indices within which a test subject is likely to appear in a sorted set of similarity scores. These bounds are then used in the multibiometric recognition system to select a smaller subset of subjects from the database as probable candidates for a given test subject. Experimental results show that the proposed models improve the recognition rate beyond that of the underlying matching algorithms for multiple face views, fingerprints, palm prints, irises, and their combinations.
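    As a hedged illustration of how a geometric rank model could prune the candidate list (the paper's exact parameterization is not given here; the success probability `p_rank1` and the pruning rule below are assumptions):

```python
from scipy.stats import geom

def rank_bound(p_rank1, confidence=0.99):
    """Smallest rank k with P(true match at rank <= k) >= confidence, under
    a geometric model whose success probability is estimated from training
    data (e.g., the matcher's rank-1 hit rate)."""
    return int(geom.ppf(confidence, p_rank1))

def candidate_subset(sorted_subject_ids, p_rank1, confidence=0.99):
    """Prune a gallery, sorted by similarity score, to the subjects that
    fall within the predicted rank bound."""
    return sorted_subject_ids[:rank_bound(p_rank1, confidence)]
```

    In a multibiometric setting, the subsets predicted per modality could then be intersected before the final match is computed.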

    Time-delay neural network for continuous emotional dimension prediction from facial expression sequences

    "(c) 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works."Automatic continuous affective state prediction from naturalistic facial expression is a very challenging research topic but very important in human-computer interaction. One of the main challenges is modeling the dynamics that characterize naturalistic expressions. In this paper, a novel two-stage automatic system is proposed to continuously predict affective dimension values from facial expression videos. In the first stage, traditional regression methods are used to classify each individual video frame, while in the second stage, a Time-Delay Neural Network (TDNN) is proposed to model the temporal relationships between consecutive predictions. The two-stage approach separates the emotional state dynamics modeling from an individual emotional state prediction step based on input features. In doing so, the temporal information used by the TDNN is not biased by the high variability between features of consecutive frames and allows the network to more easily exploit the slow changing dynamics between emotional states. The system was fully tested and evaluated on three different facial expression video datasets. Our experimental results demonstrate that the use of a two-stage approach combined with the TDNN to take into account previously classified frames significantly improves the overall performance of continuous emotional state estimation in naturalistic facial expressions. The proposed approach has won the affect recognition sub-challenge of the third international Audio/Visual Emotion Recognition Challenge (AVEC2013)1

    Robust Modeling of Epistemic Mental States

    This work identifies and advances research challenges in analyzing facial features and their temporal dynamics with respect to epistemic mental states in dyadic conversations. The epistemic states considered are Agreement, Concentration, Thoughtful, Certain, and Interest. In this paper, we perform a number of statistical analyses and simulations to identify the relationship between facial features and epistemic states. Non-linear relations are found to be more prevalent, while temporal features derived from the original facial features demonstrate a strong correlation with intensity changes. We then propose a novel prediction framework that takes facial features and their nonlinear relation scores as input and predicts different epistemic states in videos. The prediction of epistemic states is further boosted when the classification of emotion-change regions (rising, falling, or steady-state) is incorporated with the temporal features. The proposed predictive models predict the epistemic states with significantly improved accuracy: the correlation coefficient (CoERR) is 0.827 for Agreement, 0.901 for Concentration, 0.794 for Thoughtful, 0.854 for Certain, and 0.913 for Interest.
    Comment: Accepted for publication in Multimedia Tools and Applications, Special Issue: Socio-Affective Technologies.
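    As an illustrative sketch of the kinds of inputs described, one might compute a nonlinear relation score (Spearman rank correlation as a stand-in for the paper's measure) and label rising/falling/steady-state regions from the slope of an intensity signal; both choices and the threshold are assumptions:

```python
import numpy as np
from scipy.stats import spearmanr

def relation_score(feature_track, state_intensity):
    """Spearman rank correlation as an assumed stand-in for a nonlinear
    relation score between a facial feature trajectory and a state's
    intensity over a video."""
    rho, _ = spearmanr(feature_track, state_intensity)
    return rho

def change_regions(intensity, eps=0.01):
    """Label each frame as a rising, falling, or steady-state region from
    the local slope of the intensity signal (eps is an assumed threshold)."""
    slope = np.gradient(np.asarray(intensity, dtype=float))
    return np.where(slope > eps, "rising",
           np.where(slope < -eps, "falling", "steady"))
```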

    Automatic emotional state detection using facial expression dynamic in videos

    In this paper, an automatic emotion detection system is built to detect a user's emotional state from facial expressions in human-computer communication. First, dynamic motion features are extracted from facial expression videos; advanced machine learning methods for classification and regression are then used to predict the emotional states. The system is evaluated on two publicly available datasets, i.e. GEMEP_FERA and AVEC2013, and achieves satisfactory performance in comparison with the provided baseline results. With this emotional state detection capability, a machine can read its user's facial expressions automatically. The technique can be integrated into applications such as smart robots, interactive games, and smart surveillance systems.
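    A hedged sketch of such a pipeline, using dense optical flow as the dynamic motion feature and a support vector regressor (both stand-ins; the paper's exact features and learners may differ):

```python
import cv2
import numpy as np
from sklearn.svm import SVR

def motion_features(frames):
    """frames: list of same-size grayscale face images (uint8); returns one
    pooled dense-optical-flow feature vector per consecutive frame pair."""
    feats = []
    for prev, curr in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        feats.append([mag.mean(), mag.std(), ang.mean(), ang.std()])
    return np.asarray(feats)

# Training sketch: regress per-frame emotion labels on the motion features.
# X = motion_features(video_frames); model = SVR().fit(X, frame_labels)
```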