
    Face recognition with variation in pose angle using face graphs

    Automatic recognition of human faces is an important and growing field. Several real-world applications have started to rely on the accuracy of computer-based face recognition systems for their own performance in terms of efficiency, safety and reliability. Many algorithms have already been established for frontal face recognition, where the person to be recognized is looking directly at the camera. More recently, methods for non-frontal face recognition have been proposed. These include work related to 3D rigid face models, component-based 3D morphable models, eigenfaces and elastic bunch graph matching (EBGM). This thesis extends a recognition algorithm based on EBGM to achieve better face recognition across pose variation. Facial features are localized using active shape models and recognition is based on elastic bunch graph matching. Recognition is performed by comparing feature descriptors, called jets, based on Gabor wavelets at various orientations and scales. Two novel recognition schemes, feature weighting and jet-mapping, are proposed to improve on the base scheme, and a combination of the two is considered as a further enhancement. The improvements in performance have been evaluated by studying recognition rates on an existing database and comparing the results with the base recognition scheme on which the new schemes were built. An improvement of up to 20% has been observed for pose variation as large as 45°.
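
    The following minimal sketch illustrates the kind of Gabor-jet comparison this line of work builds on: a jet is a vector of complex Gabor filter responses at one landmark, and two jets are compared by a normalized magnitude correlation. The function names, kernel parameters and number of scales/orientations are illustrative assumptions, not the thesis's settings, and the optional weights are only a generic stand-in for weighting ideas such as the proposed feature weighting.

        # Sketch of Gabor jets and weighted jet similarity (NumPy only; illustrative).
        import numpy as np

        def gabor_kernel(size, sigma, theta, wavelength):
            """Complex Gabor kernel at one orientation and scale."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
            carrier = np.exp(1j * 2 * np.pi * xr / wavelength)
            return envelope * carrier

        def extract_jet(image, x, y, wavelengths=(4, 8, 16), n_orient=8, size=31):
            """Jet = vector of complex filter responses at one landmark (x, y)."""
            half = size // 2
            patch = image[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            responses = []
            for wl in wavelengths:
                for k in range(n_orient):
                    theta = k * np.pi / n_orient
                    kern = gabor_kernel(size, 0.5 * wl, theta, wl)
                    responses.append(np.sum(patch * kern))
            return np.array(responses)

        def jet_similarity(jet_a, jet_b, weights=None):
            """Normalized magnitude similarity between two jets."""
            a, b = np.abs(jet_a), np.abs(jet_b)
            if weights is None:
                weights = np.ones_like(a)
            num = np.sum(weights * a * b)
            den = np.sqrt(np.sum(weights * a ** 2) * np.sum(weights * b ** 2))
            return num / (den + 1e-12)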

    A 3D Face Modelling Approach for Pose-Invariant Face Recognition in a Human-Robot Environment

    Face analysis techniques have become a crucial component of human-machine interaction in the fields of assistive and humanoid robotics. However, the variations in head pose that arise naturally in these environments are still a great challenge. In this paper, we present a real-time capable 3D face modelling framework for 2D in-the-wild images that is applicable to robotics. The fitting of the 3D Morphable Model is based exclusively on automatically detected landmarks. After fitting, the face can be corrected in pose and transformed back to a frontal 2D representation that is more suitable for face recognition. We conduct face recognition experiments with non-frontal images from the MUCT database and uncontrolled, in-the-wild images from the PaSC database, the most challenging face recognition database to date, and show improved performance. Finally, we present our SCITOS G5 robot system, which incorporates our framework as a means of image pre-processing for face analysis.
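
    As a rough illustration of the landmark-only geometry such a framework relies on, the sketch below recovers head pose from 2D-3D landmark correspondences with OpenCV's PnP solver. The 3D model landmark positions and the guessed camera intrinsics are placeholders, and the frontal re-rendering of the fitted model is only summarized in the closing comment.

        # Sketch: head-pose estimation from detected landmarks (OpenCV + NumPy).
        import numpy as np
        import cv2

        def estimate_pose(landmarks_2d, model_landmarks_3d, image_size):
            """Rotation/translation aligning 3D model landmarks to their 2D detections."""
            h, w = image_size
            focal = w  # crude focal-length guess; a calibrated camera would be better
            camera_matrix = np.array([[focal, 0.0, w / 2.0],
                                      [0.0, focal, h / 2.0],
                                      [0.0, 0.0, 1.0]])
            ok, rvec, tvec = cv2.solvePnP(model_landmarks_3d.astype(np.float64),
                                          landmarks_2d.astype(np.float64),
                                          camera_matrix, None,
                                          flags=cv2.SOLVEPNP_ITERATIVE)
            rotation, _ = cv2.Rodrigues(rvec)
            return rotation, tvec

        # With the pose known, the fitted 3D face can be re-rendered with the
        # rotation replaced by the identity to obtain a pose-corrected frontal
        # 2D view for recognition.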

    Using 3D morphable models for face recognition in video

    The 3D Morphable Face Model (3DMM) has been used for over a decade for creating 3D models from single images of faces. This model is based on a PCA model of the 3D shape and texture generated from a limited number of 3D scans. The goal of fitting a 3DMM to an image is to find the model coefficients, the lighting and other imaging variables from which we can remodel that image as accurately as possible. The model coefficients consist of texture and shape descriptors, and can be used without further processing in verification and recognition experiments. Until now, little research has been performed on the influence of the diverse parameters of the 3DMM on recognition performance. In this paper we introduce a Bayesian method for texture backmapping from multiple images. Using the information from multiple (non-frontal) views, we construct a frontal view which can be used as input to 2D face recognition software. We also show how the number of triangles used in the fitting process influences the recognition performance when using the shape descriptors. The verification results of the 3DMM are compared to state-of-the-art 2D face recognition software on the MultiPIE dataset. The 2D face recognition software outperforms the Morphable Model, but the Morphable Model can be useful as a preprocessor to synthesize a frontal view from a non-frontal view and to combine images from multiple views into a single frontal view. We show results for this preprocessing technique using an average face shape and a fitted face shape, with a Morphable Model texture, with the original texture and with a hybrid texture. The preprocessor has improved the verification results significantly on this dataset.
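
    As a generic stand-in for the multi-view texture backmapping described above (not the authors' exact Bayesian formulation), the sketch below fuses per-vertex colour estimates sampled from several views into a single texture, using confidence weights such as the visibility of each vertex in each view.

        # Sketch: weighted fusion of per-vertex texture samples from several views.
        import numpy as np

        def fuse_textures(per_view_colors, per_view_weights):
            """
            per_view_colors:  (n_views, n_vertices, 3) colours sampled per view
            per_view_weights: (n_views, n_vertices) visibility/confidence weights,
                              e.g. cosine between vertex normal and view direction
            Returns one (n_vertices, 3) fused texture.
            """
            w = np.clip(per_view_weights, 0.0, None)[..., None]
            total = np.maximum(w.sum(axis=0), 1e-8)
            return (w * per_view_colors).sum(axis=0) / total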

    Fitting a 3D Morphable Model to Edges: A Comparison Between Hard and Soft Correspondences

    We propose a fully automatic method for fitting a 3D morphable model to single face images in arbitrary pose and lighting. Our approach relies on geometric features (edges and landmarks) and, inspired by the iterated closest point algorithm, is based on computing hard correspondences between model vertices and edge pixels. We demonstrate that this is superior to previous work that uses soft correspondences to form an edge-derived cost surface that is minimised by nonlinear optimisation. Comment: To appear in the ACCV 2016 Workshop on Facial Informatics.
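
    The sketch below shows one hard-correspondence step of the kind such ICP-inspired fitting repeats: each projected model contour vertex is assigned its nearest detected edge pixel, and a full fitter would then update the model parameters to shrink the resulting residuals before re-assigning. The function name and the use of a KD-tree are illustrative choices, not taken from the paper.

        # Sketch: hard nearest-edge-pixel correspondences (NumPy + SciPy).
        import numpy as np
        from scipy.spatial import cKDTree

        def hard_edge_correspondences(projected_contour_2d, edge_pixels_2d):
            """
            projected_contour_2d: (n, 2) image positions of model contour vertices
            edge_pixels_2d:       (m, 2) coordinates of detected edge pixels (e.g. Canny)
            Returns the matched edge pixels and the residual distances.
            """
            tree = cKDTree(edge_pixels_2d)
            distances, indices = tree.query(projected_contour_2d)
            return edge_pixels_2d[indices], distances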

    Morphable Face Models - An Open Framework

    In this paper, we present a novel open-source pipeline for face registration based on Gaussian processes as well as an application to face image analysis. Non-rigid registration of faces is significant for many applications in computer vision, such as the construction of 3D Morphable Face Models (3DMMs). Gaussian Process Morphable Models (GPMMs) unify a variety of non-rigid deformation models, with B-splines and PCA models as examples. GPMMs separate problem-specific requirements from the registration algorithm by incorporating domain-specific adaptations as a prior model. The novelties of this paper are the following: (i) We present a strategy and modeling technique for face registration that considers symmetry, multi-scale and spatially-varying details. The registration is applied to neutral faces and facial expressions. (ii) We release an open-source software framework for registration and model-building, demonstrated on the publicly available BU3D-FE database. The released pipeline also contains an implementation of Analysis-by-Synthesis model adaptation to 2D face images, tested on the Multi-PIE and LFW databases. This enables the community to reproduce, evaluate and compare the individual steps, from registration to model-building and 3D/2D model fitting. (iii) Along with the framework release, we publish a new version of the Basel Face Model (BFM-2017) with an improved age distribution and an additional facial expression model.
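
    To make the Gaussian-process idea behind GPMMs concrete, the sketch below draws a smooth deformation field over a small set of reference surface points from a GP prior with a squared-exponential kernel. The kernel, its parameters and the dense Cholesky sampling are illustrative only; the released framework works on full meshes with low-rank approximations.

        # Sketch: sampling a smooth deformation field from a GP prior (NumPy only).
        import numpy as np

        def squared_exponential(points, scale=1.0, length=20.0):
            """Kernel k(x, x') = scale^2 * exp(-||x - x'||^2 / (2 * length^2))."""
            d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
            return scale ** 2 * np.exp(-d2 / (2 * length ** 2))

        def sample_deformed_surface(reference_points, seed=None):
            """Add one GP-sampled 3D deformation vector to every reference point."""
            rng = np.random.default_rng(seed)
            n = len(reference_points)
            K = squared_exponential(reference_points) + 1e-6 * np.eye(n)
            L = np.linalg.cholesky(K)
            deformation = L @ rng.standard_normal((n, 3))  # independent draw per axis
            return reference_points + deformation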

    Combining and Steganography of 3D Face Textures

    One of the serious issues in communication between people is hiding information from others, and the best way to do this is to deceive them. Since face images are now mostly used in three-dimensional format, in this paper we apply steganography to 3D face images so that detection by curious observers becomes practically impossible. Because only the texture matters for detecting a face, we separate the texture from the shape matrices; to eliminate half of the extra information, steganography is applied only to the face texture, and any other shape can be used to reconstruct the 3D face. Moreover, we show how two 3D faces can be combined by using two textures. For a complete description of the process, 2D faces are first used as input for building 3D faces, and the 3D textures are then hidden within other images. Comment: 6 pages, 10 figures, 16 equations, 5 sections.
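
    As a textbook illustration of the kind of hiding applied to the face texture above (a generic least-significant-bit scheme, not the authors' method), the sketch below embeds the top bits of a texture image in the low bits of a cover image and recovers an approximation of it.

        # Sketch: generic LSB embedding/extraction for uint8 images (NumPy only).
        import numpy as np

        def embed_lsb(cover, secret, bits=2):
            """Hide the top `bits` bits of `secret` in the low bits of `cover`;
            both are uint8 arrays of the same shape."""
            low_mask = np.uint8((1 << bits) - 1)
            cover_high = cover & np.uint8(~low_mask)   # clear low bits of cover
            secret_high = secret >> (8 - bits)         # keep top bits of secret
            return cover_high | secret_high

        def extract_lsb(stego, bits=2):
            """Recover an approximation of the hidden texture."""
            low_mask = np.uint8((1 << bits) - 1)
            return (stego & low_mask) << (8 - bits)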