3 research outputs found

    Morphable Face Models - An Open Framework

    In this paper, we present a novel open-source pipeline for face registration based on Gaussian processes, as well as an application to face image analysis. Non-rigid registration of faces is significant for many applications in computer vision, such as the construction of 3D Morphable face models (3DMMs). Gaussian Process Morphable Models (GPMMs) unify a variety of non-rigid deformation models, with B-splines and PCA models as examples. GPMMs separate problem-specific requirements from the registration algorithm by incorporating domain-specific adaptations as a prior model. The novelties of this paper are the following: (i) We present a strategy and modeling technique for face registration that considers symmetry, multi-scale and spatially-varying details. The registration is applied to neutral faces and facial expressions. (ii) We release an open-source software framework for registration and model-building, demonstrated on the publicly available BU3D-FE database. The released pipeline also contains an implementation of Analysis-by-Synthesis model adaptation to 2D face images, tested on the Multi-PIE and LFW databases. This enables the community to reproduce, evaluate and compare the individual steps, from registration to model-building and 3D/2D model fitting. (iii) Along with the framework release, we publish a new version of the Basel Face Model (BFM-2017) with an improved age distribution and an additional facial expression model.
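    The core GPMM idea in the abstract can be illustrated with a minimal sketch: deformations of a template are modeled as a Gaussian process, which is approximated by its leading eigenpairs so that a shape is the template plus a low-rank sampled deformation. This is an illustrative toy (scalar deformations, hypothetical kernel parameters), not the API of the released framework, which models matrix-valued kernels over 3D vector fields.

    ```python
    import numpy as np

    def se_kernel(X, sigma=0.5, scale=0.05):
        # Squared-exponential covariance over template points; real GPMMs
        # use matrix-valued kernels over 3D vector fields, this scalar
        # version is only for illustration.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return scale * np.exp(-d2 / (2 * sigma ** 2))

    def low_rank_gp(X, rank):
        # Leading eigenpairs of the kernel matrix give a finite basis of
        # deformations (the low-rank approximation behind GPMMs).
        eigvals, eigvecs = np.linalg.eigh(se_kernel(X))
        order = np.argsort(eigvals)[::-1][:rank]
        return eigvals[order], eigvecs[:, order]

    def sample_deformation(eigvals, eigvecs, alpha):
        # u = sum_i alpha_i * sqrt(lambda_i) * phi_i, with alpha ~ N(0, I)
        return eigvecs @ (alpha * np.sqrt(np.clip(eigvals, 0.0, None)))

    rng = np.random.default_rng(0)
    template = rng.random((50, 3))          # stand-in template points
    lam, phi = low_rank_gp(template, rank=5)
    u = sample_deformation(lam, phi, rng.standard_normal(5))
    ```

    Priors for specific requirements (symmetry, multi-scale detail) would enter through the choice of kernel, which is exactly the separation of prior model and registration algorithm the abstract describes.
    
    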

    3D statistical shape analysis of the face in Apert syndrome

    Timely diagnosis of craniofacial syndromes, as well as adequate timing and choice of surgical technique, are essential for proper care management. Statistical shape models and machine learning approaches are playing an increasing role in medicine and have proven their usefulness. Frameworks that automate processes have become more popular. The use of 2D photographs for automated syndromic identification has shown its potential with the Face2Gene application. Yet, using 3D shape information without texture has not been studied in such depth. Moreover, the use of these models to understand shape change during growth, and their applicability to surgical outcome measurement, have not been analysed at length. This thesis presents a framework using state-of-the-art machine learning and computer vision algorithms to explore possibilities for automated syndrome identification based on shape information only. The purpose of this was to enhance understanding of the natural development of the Apert syndromic face and its abnormality as compared to a normative group. An additional method was used to objectify changes resulting from facial bipartition distraction, a common surgical correction technique, providing information on its success and on where facial normalisation remains inadequate. Growth curves were constructed to further quantify facial abnormalities in Apert syndrome over time, along with 3D shape models for intuitive visualisation of the shape variations. Post-operative models were built and compared with age-matched normative data to understand where normalisation falls short. The findings in this thesis provide markers for future translational research and may accelerate the adoption of the next generation of diagnostics and surgical planning tools, further supplementing the clinical decision-making process and ultimately improving patients' quality of life.
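    The statistical shape models this thesis relies on are typically built with PCA over registered, point-corresponded meshes. A minimal sketch of that construction, with made-up toy data and hypothetical function names (not the thesis's actual pipeline):

    ```python
    import numpy as np

    def build_ssm(shapes):
        # shapes: (n_samples, 3 * n_points) registered meshes, flattened.
        mean = shapes.mean(axis=0)
        centered = shapes - mean
        # Principal modes of shape variation via SVD of the centered data.
        _, S, Vt = np.linalg.svd(centered, full_matrices=False)
        variances = S ** 2 / (len(shapes) - 1)
        return mean, Vt, variances

    def project(shape, mean, components, k):
        # Coefficients of a registered shape in the first k modes;
        # these coefficients can feed a classifier or growth curve.
        return components[:k] @ (shape - mean)

    def reconstruct(coeffs, mean, components):
        return mean + coeffs @ components[:len(coeffs)]

    rng = np.random.default_rng(0)
    meshes = rng.standard_normal((8, 30))   # 8 stand-in shapes, 10 points each
    mean, modes, var = build_ssm(meshes)
    b = project(meshes[0], mean, modes, k=len(meshes))
    recovered = reconstruct(b, mean, modes)
    ```

    Comparing a patient's mode coefficients against those of an age-matched normative model is one way to quantify abnormality and post-operative normalisation, as the abstract describes.
    
    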

    A Robust Multilinear Model Learning Framework for 3D Faces

    Multilinear models are widely used to represent the statistical variations of 3D human faces, as they decouple shape changes due to identity and expression. Existing methods to learn a multilinear face model degrade if not every person is captured in every expression, if face scans are noisy or partially occluded, if expressions are erroneously labeled, or if the vertex correspondence is inaccurate. These limitations impose requirements on the training data that disqualify large amounts of available 3D face data from being usable to learn a multilinear model. To overcome this, we introduce the first framework to robustly learn a multilinear model from 3D face databases with missing data, corrupt data, wrong semantic correspondence, and inaccurate vertex correspondence. To achieve this robustness to erroneous training data, our framework jointly learns a multilinear model and fixes the data. We evaluate our framework on two publicly available 3D face databases, and show that our framework achieves a data completion accuracy that is comparable to state-of-the-art tensor completion methods. Our method reconstructs corrupt data more accurately than state-of-the-art methods, and improves the quality of the learned model significantly for erroneously labeled expressions.
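    A multilinear face model of the kind described here factorizes a vertices-by-identities-by-expressions data tensor so that identity and expression get separate coefficient vectors. The HOSVD-style sketch below assumes the complete, clean training tensor that the paper's robust framework is specifically designed to relax; shapes and names are illustrative, not the paper's implementation.

    ```python
    import numpy as np

    def learn_multilinear(data, r_id, r_exp):
        # data: (n_vertices, n_identities, n_expressions); assumes every
        # person is captured in every expression, with correct labels.
        I, E = data.shape[1], data.shape[2]
        U_id = np.linalg.svd(data.transpose(1, 0, 2).reshape(I, -1),
                             full_matrices=False)[0][:, :r_id]
        U_exp = np.linalg.svd(data.transpose(2, 0, 1).reshape(E, -1),
                              full_matrices=False)[0][:, :r_exp]
        # Core tensor: data contracted with the mode factors (HOSVD-style).
        core = np.einsum('vie,ij,ek->vjk', data, U_id, U_exp)
        return core, U_id, U_exp

    def reconstruct_face(core, w_id, w_exp):
        # One face = core combined with identity and expression weights,
        # which is how the model decouples the two sources of variation.
        return np.einsum('vjk,j,k->v', core, w_id, w_exp)

    rng = np.random.default_rng(1)
    data = rng.standard_normal((12, 4, 3))  # toy: 12 coords, 4 ids, 3 expressions
    core, U_id, U_exp = learn_multilinear(data, r_id=4, r_exp=3)
    face = reconstruct_face(core, U_id[0], U_exp[1])  # identity 0, expression 1
    ```

    When entries of the tensor are missing, corrupt, or mislabeled, this plain SVD decomposition breaks down; jointly estimating the model while correcting the data, as the paper proposes, is what restores usability of such databases.
    
    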