
    Dense 3D Face Correspondence

    We present an algorithm that automatically establishes dense correspondences between a large number of 3D faces. Starting from automatically detected sparse correspondences on the outer boundary of 3D faces, the algorithm triangulates existing correspondences and expands them iteratively by matching points of distinctive surface curvature along the triangle edges. After exhausting keypoint matches, further correspondences are established by generating evenly distributed points within triangles by evolving level set geodesic curves from the centroids of large triangles. A deformable model (K3DM) is constructed from the densely corresponded faces, and an algorithm is proposed for morphing the K3DM to fit unseen faces. This algorithm iterates between rigid alignment of an unseen face and regularized morphing of the deformable model. We have extensively evaluated the proposed algorithms on synthetic data and real 3D faces from the FRGCv2, Bosphorus, BU3DFE and UND Ear databases using quantitative and qualitative benchmarks. Our algorithm achieved dense correspondences with a mean localisation error of 1.28mm on synthetic faces and detected 14 anthropometric landmarks on unseen real faces from the FRGCv2 database with 3mm precision. Furthermore, our deformable model fitting algorithm achieved 98.5% face recognition accuracy on the FRGCv2 and 98.6% on the Bosphorus database. Our dense model is also able to generalize to unseen datasets.
    Comment: 24 pages, 12 figures, 6 tables and 3 algorithms
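
    The iterate-between-rigid-alignment-and-regularized-morphing loop can be sketched as follows. This is a minimal illustration, not the paper's K3DM formulation: the linear deformation basis, the ridge regularizer `lam`, and all function names are assumptions.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment (Kabsch): returns R, t with src @ R.T + t ~ dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

def fit_deformable_model(scan, mean, basis, lam=1e-3, iters=10):
    """Alternate a rigid alignment of the scan to the current model with a
    ridge-regularized least-squares solve for the deformation coefficients.
    scan, mean: (N, 3) points in correspondence; basis: (3N, K) modes."""
    coeffs = np.zeros(basis.shape[1])
    for _ in range(iters):
        model = mean + (basis @ coeffs).reshape(-1, 3)
        R, t = rigid_align(scan, model)          # rigid step
        aligned = scan @ R.T + t
        resid = (aligned - mean).reshape(-1)     # regularized morphing step
        A = basis.T @ basis + lam * np.eye(basis.shape[1])
        coeffs = np.linalg.solve(A, basis.T @ resid)
    return coeffs
```

    The regularizer keeps the morph close to the model's mean shape, which is what makes fitting to noisy unseen scans stable.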

    A Sparse and Locally Coherent Morphable Face Model for Dense Semantic Correspondence Across Heterogeneous 3D Faces

    The 3D Morphable Model (3DMM) is a powerful statistical tool for representing 3D face shapes. To build a 3DMM, a training set of face scans in full point-to-point correspondence is required, and its modeling capabilities directly depend on the variability contained in the training data. Thus, to increase the descriptive power of the 3DMM, establishing a dense correspondence across heterogeneous scans with sufficient diversity in terms of identities, ethnicities, or expressions becomes essential. In this manuscript, we present a fully automatic approach that leverages a 3DMM to transfer its dense semantic annotation across raw 3D faces, establishing a dense correspondence between them. We propose a novel formulation to learn a set of sparse deformation components with local support on the face that, together with an original non-rigid deformation algorithm, allow the 3DMM to precisely fit unseen faces and transfer its semantic annotation. We extensively evaluated our approach, showing it can effectively generalize to highly diverse samples and accurately establish a dense correspondence even in the presence of complex facial expressions. The accuracy of the dense registration is demonstrated by building a heterogeneous, large-scale 3DMM from more than 9,000 fully registered scans obtained by joining three large datasets together.
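
    One generic way to obtain sparse, locally supported deformation components is alternating minimization with an L1 penalty on the components (in the spirit of sparse localized deformation models). The sketch below is illustrative only; the paper's actual formulation, penalty, and solver are not reproduced here.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise L1 proximal operator: shrink toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_components(X, k, lam=0.01, iters=300):
    """Learn k sparse components C (k x n_features) and weights W
    (n_scans x k) such that X ~ W @ C, by alternating an exact
    least-squares weight update with one ISTA step on C."""
    rng = np.random.default_rng(0)
    C = 0.01 * rng.standard_normal((k, X.shape[1]))
    W = np.zeros((X.shape[0], k))
    for _ in range(iters):
        W = X @ C.T @ np.linalg.pinv(C @ C.T)            # exact LS in W
        step = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-8)  # 1/Lipschitz constant
        C = soft_threshold(C - step * (W.T @ (W @ C - X)), step * lam)
    return W, C
```

    In a face-model setting, each row of `X` would hold the flattened vertex displacements of one registered scan from the mean shape; the L1 penalty drives most entries of each component to zero, yielding local support.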

    Automatic landmark annotation and dense correspondence registration for 3D human facial images

    Dense surface registration of three-dimensional (3D) human facial images holds great potential for studies of human trait diversity, disease genetics, and forensics. Non-rigid registration is particularly useful for establishing dense anatomical correspondences between faces. Here we describe a novel non-rigid registration method for fully automatic 3D facial image mapping. This method comprises two steps: first, seventeen facial landmarks are automatically annotated, mainly via PCA-based feature recognition following 3D-to-2D data transformation; second, an efficient thin-plate spline (TPS) protocol is used to establish the dense anatomical correspondence between facial images, under the guidance of the predefined landmarks. We demonstrate that this method is robust and highly accurate, even across different ethnicities. The average face is calculated for individuals of Han Chinese and Uyghur origins. Fully automatic and computationally efficient, this method enables high-throughput analysis of human facial feature variation.
    Comment: 33 pages, 6 figures, 1 table
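
    The landmark-guided TPS step can be sketched as follows: fit a smooth map that sends each source landmark to its counterpart, then apply it to all surface points. This is a textbook TPS interpolant with the 3D kernel U(r) = -r, not the paper's exact protocol.

```python
import numpy as np

def tps_warp(src_pts, dst_pts, query):
    """Thin-plate-spline interpolation in 3D: solves for a smooth map
    sending each landmark in src_pts (n x 3) to dst_pts (n x 3), then
    evaluates it at query points (m x 3)."""
    n = len(src_pts)
    r = np.linalg.norm(src_pts[:, None] - src_pts[None], axis=-1)
    K = -r                                        # 3D TPS kernel U(r) = -r
    P = np.hstack([np.ones((n, 1)), src_pts])     # affine part
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T                               # side conditions on weights
    b = np.zeros((n + 4, 3))
    b[:n] = dst_pts
    params = np.linalg.solve(A, b)
    w, a = params[:n], params[n:]
    rq = np.linalg.norm(query[:, None] - src_pts[None], axis=-1)
    return (-rq) @ w + np.hstack([np.ones((len(query), 1)), query]) @ a
```

    The map reproduces any affine transform exactly and interpolates the landmarks exactly, which is why a handful of well-placed landmarks suffices to guide the dense correspondence.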

    Efficient Dense 3D Reconstruction Using Image Pairs

    The 3D reconstruction of a scene from 2D images is an important topic in the field of Computer Vision due to the high demand in various applications such as gaming, animation, face recognition, parts inspection, etc. The accuracy of a 3D reconstruction is highly dependent on the accuracy of the correspondence matching between the images. To build an accurate 3D reconstruction system from just two images of a scene, it is important to find accurate correspondences between the image pair. In this thesis, we implement an accurate 3D reconstruction system from two images of a scene taken at different orientations using an ordinary digital camera. We use epipolar geometry to improve the initial coarse correspondence matches between the images, and we calculate the reprojection error of the 3D reconstruction system before and after refining the correspondence matches with the epipolar geometry to compare the performance. Even though many feature-based correspondence matching techniques provide the robust matching required for 3D reconstruction, they give only coarse correspondence matches between the images, which is not sufficient to reconstruct the detailed 3D structure of the objects. We therefore use our improved image matching to calculate the camera parameters and implement dense image matching using thin-plate spline interpolation, which interpolates the surface from the initial control points obtained from the coarse correspondence matches. Since the thin-plate spline interpolates highly dense points from very few control points, the correspondence mapping between the images is not accurate. We propose a new method to improve the performance of the dense image matching using epipolar geometry and intensity-based thin-plate spline interpolation, and apply the proposed method to 3D reconstruction from two images. Finally, we develop a systematic evaluation of our dense 3D reconstruction system and discuss the results.
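
    The epipolar-geometry refinement of coarse matches amounts to rejecting correspondences that lie far from their epipolar lines. A minimal sketch, assuming the fundamental matrix F is already estimated (e.g. from the initial matches); function names and the symmetric-distance criterion are illustrative.

```python
import numpy as np

def epipolar_distance(F, pts1, pts2):
    """Symmetric point-to-epipolar-line distance for pixel correspondences
    pts1, pts2 (each N x 2), given fundamental matrix F (3 x 3)."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])
    x2 = np.hstack([pts2, ones])
    l2 = x1 @ F.T                         # epipolar lines in image 2 (F x1)
    l1 = x2 @ F                           # epipolar lines in image 1 (F^T x2)
    num = np.abs(np.sum(x2 * l2, axis=1))  # |x2^T F x1|
    d2 = num / np.hypot(l2[:, 0], l2[:, 1])
    d1 = num / np.hypot(l1[:, 0], l1[:, 1])
    return 0.5 * (d1 + d2)

def filter_matches(F, pts1, pts2, thresh=1.0):
    """Keep only matches whose symmetric epipolar distance is below thresh pixels."""
    keep = epipolar_distance(F, pts1, pts2) < thresh
    return pts1[keep], pts2[keep]
```

    The surviving matches then serve as the control points for the dense thin-plate-spline interpolation described above.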

    Accuracy of generic mesh conformation: the future of facial morphological analysis

    Three-dimensional (3D) analysis of the face is required for the assessment of changes following surgery, to monitor the progress of pathological conditions and for the evaluation of facial growth. Sophisticated methods have been applied for the evaluation of facial morphology, the most common being dense surface correspondence. The method depends on the application of a mathematical facial mask, known as the generic facial mesh, for the evaluation of the characteristics of facial morphology. This study evaluated the accuracy of the conformation of the generic mesh to the underlying facial morphology. The study was conducted on 10 non-patient volunteers. Thirty-four 2-mm-diameter self-adhesive, non-reflective markers were placed on each face; these were readily identifiable on the 3D facial image, which was captured by Di3D stereophotogrammetry. The markers helped in minimising digitisation errors during the conformation process. For each case, the face was captured six times: at rest and at the maximum movements of four facial expressions. The 3D facial image of each facial expression was analysed. Euclidean distances between the 19 corresponding landmarks on the conformed mesh and on the original 3D facial model provided a measure of the accuracy of the conformation process. For all facial expressions and all corresponding landmarks, these distances were between 0.7 and 1.7 mm. The absolute mean distances ranged from 0.73 to 1.74 mm. The mean absolute error of the conformation process was 1.13 ± 0.26 mm. The conformation of the generic facial mesh thus proved to be accurate enough for the analysis of the captured 3D facial images.
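
    The accuracy measure used above is simply the per-landmark Euclidean distance between corresponding points, summarised by its mean and spread. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def conformation_error(conformed, original):
    """Per-landmark Euclidean distances (in mm) between corresponding
    landmarks on the conformed mesh and the original 3D facial model
    (both N x 3), plus the mean and standard deviation used to
    summarise conformation accuracy."""
    d = np.linalg.norm(conformed - original, axis=1)
    return d, d.mean(), d.std()
```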

    UV-GAN: Adversarial Facial UV Map Completion for Pose-invariant Face Recognition

    Recently proposed robust 3D face alignment methods establish either dense or sparse correspondence between a 3D face model and a 2D facial image. The use of these methods presents new challenges as well as opportunities for facial texture analysis. In particular, by sampling the image using the fitted model, a facial UV map can be created. Unfortunately, due to self-occlusion, such a UV map is always incomplete. In this paper, we propose a framework for training a Deep Convolutional Neural Network (DCNN) to complete the facial UV map extracted from in-the-wild images. To this end, we first gather complete UV maps by fitting a 3D Morphable Model (3DMM) to various multiview image and video datasets, as well as leveraging a new 3D dataset with over 3,000 identities. Second, we devise a meticulously designed architecture that combines local and global adversarial DCNNs to learn an identity-preserving facial UV completion model. We demonstrate that by attaching the completed UV to the fitted mesh and generating instances of arbitrary poses, we can increase pose variations for training deep face recognition/verification models and minimise pose discrepancy during testing, which leads to better performance. Experiments on both controlled and in-the-wild UV datasets prove the effectiveness of our adversarial UV completion model. We achieve state-of-the-art verification accuracy, 94.05%, under the CFP frontal-profile protocol by combining pose augmentation during training with pose discrepancy reduction during testing. We will release the first in-the-wild UV dataset (which we refer to as WildUV), comprising complete facial UV maps from 1,892 identities, for research purposes.
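
    The self-occlusion-aware composition step — keep the texels that were observed in the image and fill the occluded ones from the generator output — can be sketched as a masked blend. Array shapes and the mask convention (1 = visible) are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def composite_uv(incomplete_uv, generated_uv, visibility_mask):
    """Blend an incomplete facial UV map (H x W x 3) with a generator's
    output: observed texels are kept, self-occluded texels are filled
    from generated_uv. visibility_mask (H x W) is 1 where the face was
    visible in the input image, 0 where it was self-occluded."""
    m = visibility_mask[..., None]          # broadcast mask over channels
    return m * incomplete_uv + (1.0 - m) * generated_uv
```

    Attaching the composited UV to the fitted mesh and re-rendering at new viewpoints is what produces the pose-augmented training images described above.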