
    Fully Automatic Expression-Invariant Face Correspondence

    We consider the problem of computing accurate point-to-point correspondences among a set of human face scans with varying expressions. Our fully automatic approach does not require any manually placed markers on the scans. Instead, it learns the locations of a set of landmarks from a database and uses this knowledge to predict the locations of these landmarks on a newly available scan. The predicted landmarks are then used to compute point-to-point correspondences between a template model and the scan. To accurately fit the expression of the template to the expression of the scan, we use a blendshape model as the template. Our algorithm was tested on a database of human faces of different ethnic groups with strongly varying expressions. Experimental results show that the obtained point-to-point correspondence is both highly accurate and consistent for most of the tested 3D face models.
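    The fitting step described above — moving template landmarks onto the predicted scan landmarks by adjusting blendshape weights — reduces to a linear least-squares problem. A minimal sketch with synthetic data (array names and sizes are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a neutral template with K blendshape offsets,
# evaluated only at L landmark vertices (3 coordinates each).
L, K = 10, 4
neutral = rng.normal(size=(L, 3))            # landmark positions on the neutral template
blendshapes = rng.normal(size=(K, L, 3))     # per-blendshape displacement at each landmark

# Synthetic "scan" landmarks generated from known weights.
true_w = np.array([0.5, -0.2, 0.1, 0.3])
scan_landmarks = neutral + np.tensordot(true_w, blendshapes, axes=1)

# Solve min_w || neutral + sum_k w_k * B_k - scan ||^2, which is linear in w.
A = blendshapes.reshape(K, -1).T             # (3L, K) design matrix
b = (scan_landmarks - neutral).ravel()       # (3L,) residual target
w, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(w, true_w))                # weights are recovered on clean data
```

    Real pipelines add regularization and alternate this solve with correspondence updates; the point here is only that the expression fit itself is a small linear system.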

    Multilinear Wavelets: A Statistical Shape Space for Human Faces

    We present a statistical model for 3D human faces in varying expression, which decomposes the surface of the face using a wavelet transform and learns many localized, decorrelated multilinear models on the resulting coefficients. Using this model we are able to reconstruct faces from noisy and occluded 3D face scans, and from facial motion sequences. Accurate reconstruction of face shape is important for applications such as tele-presence and gaming. The localized and multi-scale nature of our model allows for recovery of fine-scale detail while retaining robustness to severe noise and occlusion, and it is computationally efficient and scalable. We validate these properties experimentally on challenging data in the form of static scans and motion sequences. We show that in comparison to a global multilinear model, our model better preserves fine detail and is computationally faster, while in comparison to a localized PCA model, our model better handles variation in expression, is faster, and allows us to fix identity parameters for a given subject.
    Comment: 10 pages, 7 figures; accepted to ECCV 201
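    The core idea of a multilinear model over transform coefficients can be sketched with a toy mode-wise truncated SVD (a simple HOSVD). This illustrates only the multilinear decomposition, not the paper's wavelet pipeline; all sizes are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data tensor for one block of coefficients:
# (identities x expressions x coefficients), constructed to be
# exactly low-rank in the identity and expression modes.
I, E, C, r = 8, 5, 12, 2
U_id = rng.normal(size=(I, r))
U_ex = rng.normal(size=(E, r))
core = rng.normal(size=(r, r, C))
data = np.einsum('ia,eb,abc->iec', U_id, U_ex, core)

# Mode-wise truncated SVD recovers a basis for each mode.
def mode_basis(T, mode, rank):
    unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
    return U[:, :rank]

B_id = mode_basis(data, 0, r)
B_ex = mode_basis(data, 1, r)
G = np.einsum('ia,eb,iec->abc', B_id, B_ex, data)   # estimated core tensor
recon = np.einsum('ia,eb,abc->iec', B_id, B_ex, G)  # project back

print(np.allclose(recon, data))   # exact for a truly low-rank tensor
```

    Fixing the identity coefficients of a subject while varying the expression mode is what the abstract refers to as fixing identity parameters.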

    Morphable Face Models - An Open Framework

    In this paper, we present a novel open-source pipeline for face registration based on Gaussian processes, as well as an application to face image analysis. Non-rigid registration of faces is important for many applications in computer vision, such as the construction of 3D Morphable Models (3DMMs). Gaussian Process Morphable Models (GPMMs) unify a variety of non-rigid deformation models, with B-splines and PCA models as examples. GPMMs separate problem-specific requirements from the registration algorithm by incorporating domain-specific adaptations as a prior model. The novelties of this paper are the following: (i) We present a strategy and modeling technique for face registration that considers symmetry, multi-scale and spatially-varying details. The registration is applied to neutral faces and facial expressions. (ii) We release an open-source software framework for registration and model-building, demonstrated on the publicly available BU3D-FE database. The released pipeline also contains an implementation of Analysis-by-Synthesis model adaptation to 2D face images, tested on the Multi-PIE and LFW databases. This enables the community to reproduce, evaluate and compare the individual steps from registration to model-building and 3D/2D model fitting. (iii) Along with the framework release, we publish a new version of the Basel Face Model (BFM-2017) with an improved age distribution and an additional facial expression model.
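    A GPMM models deformations of a template as a Gaussian process and works in practice with a truncated eigenbasis of its kernel. A minimal 1D sketch of that low-rank construction (illustrative kernel and sizes; this is not the released framework's API):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical GPMM sketch: a zero-mean GP with a squared-exponential
# kernel over template points, truncated to its leading eigenmodes;
# samples are smooth deformation fields added to the template.
n = 50
template = np.linspace(0.0, 1.0, n)[:, None]   # 1D "template" points

def se_kernel(X, scale=1.0, length=0.2):
    d2 = (X - X.T) ** 2
    return scale * np.exp(-d2 / (2 * length ** 2))

K = se_kernel(template)
eigvals, eigvecs = np.linalg.eigh(K)
order = np.argsort(eigvals)[::-1]
r = 5                                           # keep the leading modes
lam, Phi = eigvals[order[:r]], eigvecs[:, order[:r]]

# A deformation is parameterized by r coefficients alpha.
alpha = rng.normal(size=r)
deformation = Phi @ (np.sqrt(np.clip(lam, 0, None)) * alpha)
deformed = template[:, 0] + deformation

print(deformation.shape == (n,))
```

    Registration then becomes an optimization over the low-dimensional alpha rather than over free per-point displacements, which is how the prior model constrains the solution.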

    3D facial landmark localization for cephalometric analysis

    Cephalometric analysis is an important and routine task in the medical field to assess craniofacial development and to diagnose cranial deformities and midline facial abnormalities. The advance of 3D digital techniques has enabled the development of 3D cephalometry, which includes the localization of cephalometric landmarks in 3D models. However, manual labeling is still common, being a tedious and time-consuming task that is highly prone to intra-/inter-observer variability. In this paper, a framework to automatically locate cephalometric landmarks in 3D facial models is presented. The landmark detector is divided into two stages: (i) creation of 2D maps representative of the 3D model; and (ii) landmark detection through a regression convolutional neural network (CNN). In the first stage, the 3D facial model is transformed into 2D maps retrieved from 3D shape descriptors. In the second stage, a CNN is used to estimate a probability map for each landmark using the 2D representations as input. The detection method was evaluated on three different datasets of 3D facial models, namely the Texas 3DFR, BU3DFE, and Bosphorus databases. Average distance errors of 2.3, 3.0, and 3.2 mm were obtained for the landmarks evaluated on the respective datasets. The results demonstrate the accuracy of the method on different 3D facial datasets, with performance competitive with state-of-the-art methods, showing its versatility across different 3D models. Clinical Relevance - Overall, the performance of the landmark detector demonstrated its potential to be used for 3D cephalometric analysis.
    FCT - Fundação para a Ciência e a Tecnologia (LASI-LA/P/0104/2020)
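    The second stage — reading a landmark position off a regressed probability map — can be sketched as follows, using a synthetic Gaussian heatmap in place of the CNN output (the network itself is omitted):

```python
import numpy as np

# Synthetic per-landmark probability map, as a regression CNN would produce.
def gaussian_heatmap(h, w, cy, cx, sigma=2.0):
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

# Recover the landmark as the location of the peak response.
def heatmap_to_landmark(p):
    return np.unravel_index(np.argmax(p), p.shape)   # (row, col) of the peak

hm = gaussian_heatmap(64, 64, cy=20, cx=41)
row, col = heatmap_to_landmark(hm)
print(int(row), int(col))                             # 20 41
```

    The recovered 2D location is then mapped back to the 3D model through the correspondence used to build the 2D maps; sub-pixel refinement (e.g. a weighted centroid around the peak) is a common extension.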

    Dense 3D Face Correspondence

    We present an algorithm that automatically establishes dense correspondences between a large number of 3D faces. Starting from automatically detected sparse correspondences on the outer boundary of the 3D faces, the algorithm triangulates existing correspondences and expands them iteratively by matching points of distinctive surface curvature along the triangle edges. After exhausting keypoint matches, further correspondences are established by generating evenly distributed points within triangles by evolving level-set geodesic curves from the centroids of large triangles. A deformable model (K3DM) is constructed from the densely corresponded faces, and an algorithm is proposed for morphing the K3DM to fit unseen faces. This algorithm iterates between rigid alignment of an unseen face and regularized morphing of the deformable model. We have extensively evaluated the proposed algorithms on synthetic data and real 3D faces from the FRGCv2, Bosphorus, BU3DFE and UND Ear databases using quantitative and qualitative benchmarks. Our algorithm achieved dense correspondences with a mean localisation error of 1.28mm on synthetic faces and detected 14 anthropometric landmarks on unseen real faces from the FRGCv2 database with 3mm precision. Furthermore, our deformable model fitting algorithm achieved 98.5% face recognition accuracy on the FRGCv2 and 98.6% on the Bosphorus database. Our dense model is also able to generalize to unseen datasets.
    Comment: 24 pages, 12 figures, 6 tables and 3 algorithms
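    The rigid-alignment step that the fitting loop alternates with regularized morphing is classically solved in closed form by orthogonal Procrustes (the Kabsch algorithm). A self-contained sketch on synthetic points (not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(3)

def rigid_align(src, dst):
    """Return R, t minimizing ||R @ src_i + t - dst_i||^2 over rotations."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic test: rotate and translate a point cloud, then recover the motion.
src = rng.normal(size=(30, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = rigid_align(src, dst)
print(np.allclose(src @ R.T + t, dst))       # alignment is exact for clean data
```

    In the full fitting loop this closed-form solve is cheap, so the expensive part is the regularized morph between alignments.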

    A 3D morphable model learnt from 10,000 faces

    This is the final version of the article: the open access version provided by the Computer Vision Foundation, identical to the IEEE published version except for the watermark, and available from IEEE via the DOI in this record.
    We present the Large Scale Facial Model (LSFM) - a 3D Morphable Model (3DMM) automatically constructed from 9,663 distinct facial identities. To the best of our knowledge LSFM is the largest-scale Morphable Model ever constructed, containing statistical information from a huge variety of the human population. To build such a large model we introduce a novel, fully automated and robust Morphable Model construction pipeline. The dataset that LSFM is trained on includes rich demographic information about each subject, allowing for the construction of not only a global 3DMM but also models tailored to specific age, gender or ethnicity groups. As an application example, we utilise the proposed model to perform age classification from 3D shape alone. Furthermore, we perform a systematic analysis of the constructed 3DMMs that showcases their quality and descriptive power. The extensive qualitative and quantitative evaluations reveal that the proposed 3DMM achieves state-of-the-art results, outperforming existing models by a large margin. Finally, for the benefit of the research community, we make publicly available the source code of the proposed automatic 3DMM construction pipeline. In addition, the constructed global 3DMM and a variety of bespoke models tailored by age, gender and ethnicity are available on application to researchers involved in medically oriented research.
    J. Booth is funded by an EPSRC DTA from Imperial College London, and holds a Qualcomm Innovation Fellowship. A. Roussos is funded by the Great Ormond Street Hospital Children's Charity (Face Value: W1037). The work of S. Zafeiriou was partially funded by the EPSRC project EP/J017787/1 (4D-FAB).
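    At its core, building a 3DMM from registered meshes is PCA over meshes flattened to vertex vectors. A minimal sketch with random stand-in data (all sizes are illustrative; a real model would use tens of thousands of vertices):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for registered meshes: each row is one mesh with all vertex
# coordinates flattened to a single vector (n_verts * 3 values).
n_meshes, n_verts = 40, 100
meshes = rng.normal(size=(n_meshes, n_verts * 3))

# PCA: subtract the mean face, then take the leading right-singular vectors.
mean_face = meshes.mean(axis=0)
X = meshes - mean_face
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 10
components = Vt[:k]                # orthonormal shape basis, (k, n_verts*3)
coeffs = X @ components.T          # low-dimensional parameters per mesh

# A new face is the mean plus a linear combination of components.
sample = mean_face + coeffs[0] @ components
print(components.shape)            # (10, 300)
```

    Demographic-specific models, as described above, amount to running the same decomposition on demographically filtered subsets of the registered meshes.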

    Automated Facial Anthropometry Over 3D Face Surface Textured Meshes

    The automation of human face measurement means facing major technical and technological challenges. The use of 3D scanning technology is widely accepted in the scientific community, and it offers the possibility of developing non-invasive measurement techniques. However, the selection of the points that form the basis of the measurements is a task that still requires human intervention. This work introduces digital image processing methods for automatic localization of facial features. The first goal was to examine different ways to represent 3D shapes and to evaluate whether these could be used as representative features of facial attributes, in order to locate them automatically. Based on the above, a non-rigid registration procedure was developed to estimate dense point-to-point correspondence between two surfaces. The method is able to register 3D models of faces in the presence of facial expressions. Finally, a method that uses both shape and appearance information of the surface was designed for automatic localization of a set of facial features that are the basis for determining anthropometric ratios, which are widely used in fields such as ergonomics, forensics, and surgical planning, among others.
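    One common way to turn 3D shape into per-point features of the kind this work studies is Koenderink's shape index, computed from the principal curvatures. A small sketch (estimating the curvatures from a mesh is assumed done elsewhere):

```python
import numpy as np

# Shape index: maps local surface type to [-1, 1],
# from spherical cup (-1) through saddle (0) to spherical cap (+1).
def shape_index(k1, k2):
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)   # ensure k1 >= k2
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

si_cap = shape_index(1.0, 1.0)        # ~= 1.0  (spherical cap, e.g. nose tip)
si_saddle = shape_index(1.0, -1.0)    # ~= 0.0  (saddle)
si_cup = shape_index(-1.0, -1.0)      # ~= -1.0 (spherical cup)
print(si_cap > 0.99 and abs(si_saddle) < 1e-9 and si_cup < -0.99)   # True
```

    Because the index is scale-free, it responds to the type of local shape rather than its size, which makes it a useful candidate feature for locating anthropometric points such as the nose tip or eye corners.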