8 research outputs found

    Towards High-Fidelity 3D Face Reconstruction from In-the-Wild Images Using Graph Convolutional Networks

    3D Morphable Model (3DMM) based methods have achieved great success in recovering 3D face shapes from single-view images. However, the facial textures recovered by such methods lack the fidelity exhibited in the input images. Recent work demonstrates high-quality facial texture recovery with generative networks trained on a large-scale database of high-resolution UV maps of face textures, which is hard to prepare and is not publicly available. In this paper, we introduce a method to reconstruct 3D facial shapes with high-fidelity textures from single-view in-the-wild images, without the need to capture a large-scale face texture database. The main idea is to refine the initial texture generated by a 3DMM-based method with facial details from the input image. To this end, we propose to use graph convolutional networks to reconstruct detailed colors for the mesh vertices instead of reconstructing the UV map. Experiments show that our method generates high-quality results and outperforms state-of-the-art methods in both qualitative and quantitative comparisons. Comment: Accepted to CVPR 2020. The source code is available at https://github.com/FuxiCV/3D-Face-GCN
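
    The core idea above, predicting a color per mesh vertex with graph convolutions over the mesh connectivity, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mesh, feature dimensions, and weights are toy placeholders, and the normalized-adjacency graph convolution is the standard Kipf-Welling formulation, which the paper may refine.

```python
import numpy as np

def normalized_adjacency(edges, n_vertices):
    """Symmetrically normalized adjacency (with self-loops) for a mesh
    given as a list of undirected vertex-index edges."""
    A = np.eye(n_vertices)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def gcn_layer(H, A_hat, W):
    """One graph-convolution step: aggregate neighbour features,
    then apply a learned linear map and a ReLU."""
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy mesh: 4 vertices in a square. Per-vertex input features would mix
# colours sampled from the input image with the coarse 3DMM texture;
# here they are random placeholders, as are the (untrained) weights.
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
H = rng.random((4, 6))   # per-vertex input features
W1 = rng.random((6, 8))  # hidden-layer weights
W2 = rng.random((8, 3))  # final layer maps to RGB per vertex

A_hat = normalized_adjacency(edges, 4)
colors = gcn_layer(gcn_layer(H, A_hat, W1), A_hat, W2)
print(colors.shape)  # (4, 3): one refined RGB colour per mesh vertex
```

    The output lives directly on the mesh vertices, which is what lets the method sidestep building a UV-map database.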

    Reconstructing 3D face shapes from single 2D images using an adaptive deformation model

    The representational power (RP) of an example-based model is its capability to depict a new 3D face for a given 2D face image. In this contribution, a novel approach is proposed to increase the RP of the PCA-based 3D reconstruction model by deforming a set of examples in the training dataset. Adding these deformed samples to the original training samples increases the RP. A PCA-based 3D model is adapted to each new input face image by deforming the 3D faces in the training dataset; this adapted model is then used to reconstruct the 3D face shape for the given near-frontal 2D input face image. Our experimental results confirm that the proposed adaptive model considerably improves the RP of the conventional PCA-based model.

    In search of Robert Bruce, part I: craniofacial analysis of the skull excavated at Dunfermline in 1819

    Robert Bruce, king of Scots, is a significant figure in Scottish history, and his facial appearance will have been key to his status, power and resilience as a leader. This paper is the first in a series that discusses the burial and skeletal remains excavated at Dunfermline in 1819. Parts II and III discuss the evidence relating to whether or not the burial vault and skeleton belong to Robert Bruce, and Part I analyses and interprets the historical records and skeletal structure in order to produce a depiction of the facial appearance of Robert Bruce.

    Adaptive face modelling for reconstructing 3D face shapes from single 2D images

    Example-based statistical face models using principal component analysis (PCA) have been widely deployed for three-dimensional (3D) face reconstruction and face recognition. The two factors that most affect such models are the size of the training dataset and the selection of examples in the training set. The representational power (RP) of an example-based model is its capability to depict a new 3D face for a given 2D face image. The RP of the model can be increased by increasing the number of training samples. In this contribution, a novel approach is proposed to increase the RP of the 3D face reconstruction model by deforming a set of examples in the training dataset. A PCA-based 3D face model is adapted to each new near-frontal input face image to reconstruct the 3D face shape. Further, an extended Tikhonov regularisation method has been
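
    The example-based PCA model that both of these abstracts build on can be sketched in a few lines: a face shape is represented as the mean training shape plus a weighted sum of principal modes, and the model's representational power is bounded by the span of those modes, which is why adding deformed training examples (as proposed) enlarges it. The data below is a random toy stand-in, not a real face dataset.

```python
import numpy as np

# Toy training set: each row is a flattened 3D face shape (x1,y1,z1,x2,...).
rng = np.random.default_rng(1)
n_faces, n_coords = 20, 9          # 20 example faces, 3 vertices each
shapes = rng.random((n_faces, n_coords))

# Build the PCA model: mean shape plus principal components of the
# mean-centred training shapes.
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
basis = Vt[:5]                      # keep the 5 strongest modes

def reconstruct(coeffs):
    """A new 3D face as the mean plus a weighted sum of principal modes."""
    return mean_shape + coeffs @ basis

face = reconstruct(np.array([0.5, -0.2, 0.1, 0.0, 0.3]))
print(face.shape)  # (9,): one reconstructed flattened shape
```

    Fitting such a model to a 2D image amounts to searching for the coefficient vector whose projected shape best matches detected landmarks; the coefficients live in a space no richer than the training examples, hence the focus on enlarging that set.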

    Reinforced Learning for Label-Efficient 3D Face Reconstruction

    3D face reconstruction plays a major role in many human-robot interaction systems, from automatic face authentication to human-computer-interface-based entertainment. To improve robustness against occlusions and noise, 3D face reconstruction networks are often trained on a set of in-the-wild face images, preferably captured from different viewpoints of the subject. However, collecting the required large amounts of 3D-annotated face data is expensive and time-consuming. To address the high annotation cost, and because training on a well-chosen set matters, we propose an Active Learning (AL) framework that actively selects the most informative and representative samples to be labeled. To the best of our knowledge, this paper is the first work to tackle active learning for 3D face reconstruction and thereby enable a label-efficient training strategy. In particular, we propose a Reinforcement Active Learning approach in conjunction with a clustering-based pooling strategy to select informative viewpoints of the subjects. Experimental results on the 300W-LP and AFLW2000 datasets demonstrate that our proposed method is able to 1) efficiently select the most influential viewpoints for labeling, outperforming several baseline AL techniques, and 2) further improve the performance of a 3D face reconstruction network trained on the full dataset.
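
    The clustering-based pooling idea mentioned above can be sketched as plain k-means selection: cluster the unlabeled pool and send the sample nearest each centroid for annotation, so the labeling budget is spread across representative viewpoints. This is only the pooling half, the reinforcement-learning selection policy is omitted, and the feature vectors and budget below are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: returns centroids and per-sample cluster labels."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centroids[c] = X[labels == c].mean(axis=0)
    return centroids, labels

def select_representatives(pool_features, budget):
    """Cluster the unlabeled pool and pick the sample nearest each
    centroid -- one representative viewpoint per cluster."""
    centroids, labels = kmeans(pool_features, budget)
    picks = []
    for c in range(budget):
        idx = np.where(labels == c)[0]
        dists = ((pool_features[idx] - centroids[c]) ** 2).sum(-1)
        picks.append(int(idx[np.argmin(dists)]))
    return sorted(set(picks))

rng = np.random.default_rng(2)
pool = rng.random((100, 16))   # e.g. embeddings of candidate viewpoints
to_label = select_representatives(pool, budget=5)
print(len(to_label))  # at most 5 samples sent for 3D annotation
```

    A learned policy would replace the nearest-to-centroid rule with a selection score, but the cluster structure still guarantees coverage of the viewpoint space.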

    The affordances of 3D and 4D digital technologies for computerized facial depiction

    3D digital technologies have advanced rapidly over recent decades and they can now afford new ways of interacting with anatomical and cultural artefacts. Such technologies allow for interactive investigation of visible or non-observable surfaces, haptic generation of content and tactile experiences with digital and physical representations. These interactions and technical advances often facilitate the generation of new knowledge through interdisciplinary and sympathetic approaches. Scientific and public understanding of anatomy is often enhanced by clinical imaging technologies, 3D surface scanning techniques, 3D haptic modelling methods and 3D fabrication systems. These digital and haptic technologies are seen as non-invasive and allow scientists, artists and the public to become active investigators in the visualisation of, and interaction with, human anatomy, remains and histories. Face Lab is a Liverpool John Moores University research group that focuses on creative digital face research, specifically the further development of a 3D computerized craniofacial depiction system, utilizing 3D digital technologies in facial analysis and identification of human remains for forensic investigation, or historical figures for archaeological interpretation. This chapter explores the affordances of such interactions for the non-destructive production of craniofacial depiction, through a case-study-based exploration of Face Lab workflow.

    Face recognition with the RGB-D sensor

    Face recognition in unconstrained environments is still a challenge because of the many variations of facial appearance due to changes in head pose, lighting conditions, facial expression, age, etc. This work addresses the problem of face recognition in the presence of 2D facial appearance variations caused by 3D head rotations. It explores the advantages of the recently developed consumer-level RGB-D cameras (e.g. Kinect). These cameras provide color and depth images at the same rate. They are affordable and easy to use, but the depth images are noisy and of low resolution, unlike laser-scanned depth images. The proposed approach to face recognition is able to deal with large head pose variations using RGB-D face images. The method uses the depth information to correct the pose of the face. It does not need to learn a generic face model or make complex 3D-2D registrations. It is simple and fast, yet able to deal with large pose variations and perform pose-invariant face recognition. Experiments on a public database show that the presented approach is effective and efficient under significant pose changes. The idea is also used to develop face recognition software that achieves real-time face recognition in the presence of large yaw rotations using the Kinect sensor, demonstrating in real time how the method improves recognition accuracy and confidence. This study demonstrates that RGB-D sensors are a promising tool that can lead to the development of robust pose-invariant face recognition systems under large pose variations.
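
    The depth-based pose correction described above can be sketched in two steps: back-project the depth image into a 3D point cloud with the pinhole camera model, then rotate the cloud by the estimated head angle to obtain a frontal view. The intrinsics, toy depth image, and yaw angle below are illustrative placeholders, and the yaw is assumed to be already estimated.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Turn a depth image into a 3D point cloud via the pinhole model:
    x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def rotate_to_frontal(points, yaw):
    """Undo an estimated head yaw by rotating the cloud about the y-axis."""
    c, s = np.cos(-yaw), np.sin(-yaw)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return points @ R.T

depth = np.full((4, 4), 0.8)   # toy 4x4 depth image, face 0.8 m away
cloud = backproject(depth, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
frontal = rotate_to_frontal(cloud, yaw=np.deg2rad(30))
print(frontal.shape)  # (16, 3): one 3D point per depth pixel
```

    Re-rendering the rotated cloud yields a pose-normalized face image that a standard 2D recognizer can consume, which is why no generic face model or 3D-2D registration is needed.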