
    Analysis of Range Images Used in 3D Facial Expression Recognition Systems

    The creation of the BU-3DFE database has fostered research on 3D facial expression recognition; progress, however, is limited by the development of 3D algorithms. Range images offer a strategy for solving 3D recognition problems with 2D algorithms. Several methods exist for capturing range images, but they are always combined with preprocessing, registration, and similar stages, so it is hard to tell which of the generated range images is of higher quality. This paper introduces two kinds of range images and selects different kinds of features at different levels of expression to validate their performance; two further kinds of range images, based on previously used nose tip detection methods, are applied to compare the quality of the generated range images; finally, some recently published works on 3D facial expression recognition are listed for comparison. The experimental results show that the two proposed range images achieve recognition rates above 88 % with all feature types, which is remarkable compared with the most recently published methods for 3D facial expression recognition; the analysis of the different kinds of facial expressions shows that the proposed range images do not lose primary discriminative information for recognition; the performance of range images using different nose tip detection methods is almost the same, which means that nose tip detection is not decisive for range image quality; moreover, the proposed range images can be captured without any manual intervention, which is essential in security systems.
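For readers unfamiliar with the representation: a range image stores per-pixel depth, letting 2D algorithms operate on 3D shape. Below is a minimal sketch of rasterizing a point cloud into such an image; the function name, orthographic projection, and resolution are illustrative assumptions, not the paper's capture pipeline.

```python
import numpy as np

def point_cloud_to_range_image(points, width=64, height=64):
    """Project an (N x 3) point cloud onto a 2D range (depth) image.

    Illustrative sketch: x/y are normalized into pixel coordinates over
    the cloud's bounding box, and each pixel keeps the largest z value
    (the surface point nearest an orthographic camera along +z).
    """
    img = np.zeros((height, width))
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Normalize x/y into integer pixel coordinates (epsilon avoids /0).
    u = ((x - x.min()) / (np.ptp(x) + 1e-9) * (width - 1)).astype(int)
    v = ((y - y.min()) / (np.ptp(y) + 1e-9) * (height - 1)).astype(int)
    # Keep the closest surface point per pixel.
    for ui, vi, zi in zip(u, v, z):
        img[vi, ui] = max(img[vi, ui], zi)
    return img
```

Real systems would additionally interpolate holes and normalize pose before rasterizing, which is exactly the preprocessing the abstract refers to.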

    Integrating Range and Texture Information for 3D Face Recognition

    The performance of face recognition systems that use two-dimensional images depends on consistent lighting, pose, and facial appearance. We are developing a face recognition system that utilizes three-dimensional shape information to make recognition more robust to arbitrary view, lighting, and facial appearance. For each subject, a 3D face model is constructed by integrating several 2.5D face scans taken from different viewpoints; a 2.5D scan is composed of a range image together with a registered 2D color image. The recognition engine consists of two components: surface matching and appearance-based matching. The surface matching component is based on a modified Iterative Closest Point (ICP) algorithm. The candidate list used for appearance matching is generated dynamically from the output of the surface matching component, which reduces the complexity of the appearance-based matching stage. The 3D models in the gallery are used to synthesize new appearance samples with pose and illumination variations for discriminant subspace analysis. A weighted sum rule combines the two matching components, and a hierarchical matching structure further improves both the accuracy and the efficiency of the system. Experimental results are reported for matching a gallery of 100 3D face models against 598 independent 2.5D test scans acquired under varying pose and lighting conditions, some with smiling expressions. The results demonstrate the feasibility of the proposed matching scheme.
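The surface matching stage rests on ICP, which alternates between finding point correspondences and solving for a rigid transform. As a minimal sketch, here is one vanilla point-to-point ICP iteration using brute-force nearest neighbours and the Kabsch/SVD solution; the paper's modified ICP adds refinements not reproduced here, and all names are illustrative.

```python
import numpy as np

def icp_step(src, dst):
    """One iteration of vanilla point-to-point ICP (illustrative sketch).

    src: (N x 3) moving point set; dst: (M x 3) fixed point set.
    Returns a rotation R and translation t aligning src toward dst.
    """
    # Brute-force nearest neighbour in dst for every src point.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    matched = dst[d2.argmin(axis=1)]
    # Optimal rigid transform between centred sets (Kabsch via SVD).
    mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return R, t
```

A full ICP loop would apply `(R, t)` to `src` and repeat until the mean correspondence error stops decreasing.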

    Towards a comprehensive 3D dynamic facial expression database

    Human faces play an important role in everyday life, expressing identity, emotion, and intentionality, alongside a range of biological functions. The face has therefore become the subject of considerable research effort, with a shift towards understanding it using stimuli of increasingly realistic formats. In the current work, we outline progress made in producing a database of facial expressions in arguably the most realistic format: 3D dynamic. A suitable architecture for capturing such 3D dynamic image sequences is described and then used to record seven expressions (fear, disgust, anger, happiness, surprise, sadness, and pain) performed by 10 actors at three levels of intensity (mild, normal, and extreme). We also present details of a psychological experiment used to formally evaluate the accuracy of the expressions in a 2D dynamic format. The result is an initial, validated database for researchers and practitioners; the goal is to scale up the work with more actors and expression types.

    Facial Expression Recognition
