
    3D Face Reconstruction from Light Field Images: A Model-free Approach

    Reconstructing 3D facial geometry from a single RGB image has recently attracted wide research interest. However, it is still an ill-posed problem, and most methods rely on prior models, which undermines the accuracy of the recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPIs) obtained from light field cameras and learn CNN models that recover horizontal and vertical 3D facial curves from the respective horizontal and vertical EPIs. Our 3D face reconstruction network (FaceLFnet) comprises a densely connected architecture to learn accurate 3D facial curves from low-resolution EPIs. To train the proposed FaceLFnets from scratch, we synthesize photo-realistic light field images from 3D facial scans. The curve-by-curve 3D face estimation approach allows the networks to learn from only 14K images of 80 identities, which still comprise over 11 million EPIs/curves. The estimated facial curves are merged into a single point cloud, to which a surface is fitted to obtain the final 3D face. Our method is model-free, requires only a few training samples to learn FaceLFnet, and can reconstruct 3D faces with high accuracy from single light field images under varying poses, expressions and lighting conditions. Comparisons on the BU-3DFE and BU-4DFE datasets show that our method reduces reconstruction errors by over 20% compared to the recent state of the art.
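
    The abstract outlines a curve-by-curve pipeline: a densely connected CNN regresses one 3D facial curve per EPI, and the per-curve outputs are later merged into a point cloud. Below is a minimal, hypothetical sketch of such an EPI-to-curve regressor in PyTorch; the layer widths, EPI size and curve length are illustrative assumptions, not the published FaceLFnet architecture.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal densely connected block: each conv sees all earlier feature maps."""
    def __init__(self, in_ch, growth, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1)
            for i in range(layers)
        )

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)

class EPICurveNet(nn.Module):
    """Regress a 3D facial curve (curve_points x 3) from one low-resolution EPI."""
    def __init__(self, curve_points=128):
        super().__init__()
        self.curve_points = curve_points
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            DenseBlock(16, growth=12),              # 16 + 4*12 = 64 channels out
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, curve_points * 3)

    def forward(self, epi):                         # epi: (B, 3, H, W)
        z = self.features(epi).flatten(1)           # (B, 64)
        return self.head(z).view(-1, self.curve_points, 3)
```

    In the setup the abstract describes, one such network would be trained on horizontal EPIs and a second on vertical EPIs, with their predicted curves merged into a single point cloud before surface fitting.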

    Capture, Learning, and Synthesis of 3D Speaking Styles

    Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation), takes any speech signal as input, even speech in languages other than English, and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de. (To appear in CVPR 2019.)
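
    As a concrete illustration of the conditioning scheme the abstract describes, here is a hypothetical sketch of a VOCA-style regressor: per-frame speech features and a one-hot subject label are mapped to per-vertex offsets that are added to a neutral template mesh of the target identity. All dimensions and layer sizes are illustrative assumptions; the released model at http://voca.is.tue.mpg.de is the authoritative reference.

```python
import torch
import torch.nn as nn

class SpeechToVertices(nn.Module):
    """Map windowed speech features + subject one-hot to vertex offsets."""
    def __init__(self, audio_dim=29, n_subjects=12, n_vertices=5023):
        super().__init__()
        self.n_vertices = n_vertices
        self.net = nn.Sequential(
            nn.Linear(audio_dim + n_subjects, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_vertices * 3),
        )

    def forward(self, audio_feat, subject_onehot, template):
        # audio_feat: (B, audio_dim) speech features for one animation frame
        # subject_onehot: (B, n_subjects) selects a learned speaking style
        # template: (B, n_vertices, 3) neutral face of the target subject
        x = torch.cat([audio_feat, subject_onehot], dim=1)
        offsets = self.net(x).view(-1, self.n_vertices, 3)
        return template + offsets   # animated mesh for this frame
```

    Because motion is predicted as offsets from a supplied neutral template, animating a new subject only requires that subject's template mesh, which is one way to read the abstract's claim of applicability to unseen subjects without retargeting.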

    4DFAB: a large scale 4D facial expression database for biometric applications

    The progress we are currently witnessing in many computer vision applications, including automatic face analysis, would not be possible without tremendous efforts in collecting and annotating large scale visual databases. To this end, we propose 4DFAB, a new large scale database of dynamic high-resolution 3D faces (over 1,800,000 3D meshes). 4DFAB contains recordings of 180 subjects captured in four different sessions spanning a five-year period. It contains 4D videos of subjects displaying both spontaneous and posed facial behaviours. The database can be used for both face and facial expression recognition, as well as behavioural biometrics. It can also be used to learn very powerful blendshapes for parametrising facial behaviour. In this paper, we conduct several experiments and demonstrate the usefulness of the database for various applications. The database will be made publicly available for research purposes.
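
    One use the abstract mentions is learning blendshapes from the registered 4D meshes. Below is a minimal sketch of the standard PCA approach to this, assuming all meshes share the same vertex topology; the function name and sizes are illustrative and not part of the 4DFAB pipeline.

```python
import numpy as np

def learn_blendshapes(meshes, k=50):
    """meshes: (F, V, 3) array of F registered frames over V shared vertices.
    Returns the mean face (V, 3) and k blendshape bases of shape (k, V, 3)."""
    F, V, _ = meshes.shape
    X = meshes.reshape(F, V * 3)
    mean = X.mean(axis=0)
    # PCA via SVD of the centred data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean.reshape(V, 3), Vt[:k].reshape(k, V, 3)

# A new expression is then approximated as mean + sum_i w_i * bases[i].
```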

    3D Dynamic Expression Recognition Based on a Novel Deformation Vector Field and Random Forest

    This paper proposes a new method for facial motion extraction to represent, learn and recognize observed expressions from 4D video sequences. The approach, called Deformation Vector Field (DVF), is based on Riemannian facial shape analysis and densely captures dynamic information from the entire face. The resulting temporal vector field is used to build the feature vector for expression recognition from 3D dynamic faces. By applying an LDA-based feature-space transformation for dimensionality reduction, followed by a multiclass Random Forest learning algorithm, the proposed approach achieves a 93% average recognition rate on the BU-4DFE database and outperforms state-of-the-art approaches.
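
    The classification stage described above (LDA reduction followed by a multiclass random forest) can be sketched directly in scikit-learn. The DVF feature extraction itself is not reproduced here; the random `X` below merely stands in for per-sequence deformation descriptors, and all sizes are assumptions of this sketch.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 300))        # placeholder DVF descriptors
y = rng.integers(0, 6, size=600)       # six expression classes, as in BU-4DFE

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(
    LinearDiscriminantAnalysis(n_components=5),   # at most n_classes - 1 dims
    RandomForestClassifier(n_estimators=200, random_state=0),
)
clf.fit(X_tr, y_tr)
print("recognition rate:", clf.score(X_te, y_te))
```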

    Introducing FoxFaces: a 3-in-1 head dataset

    We introduce a new test collection named FoxFaces, dedicated to researchers in face recognition and analysis. The creation of this dataset was motivated by gaps we encountered in existing 3D/4D datasets. FoxFaces contains three face datasets obtained with several devices. Faces are captured with different changes in pose, expression and illumination. The presented collection is unique in two aspects: the acquisition is performed using three lightly constrained devices offering 2D, depth and stereo information on faces; in addition, it contains both still images and videos, allowing static and dynamic face analysis. Hence, our dataset can be an interesting resource for the evaluation of 2D, 3D and bimodal algorithms for face recognition under adverse conditions, as well as facial expression recognition and pose estimation algorithms in static and dynamic domains (images and videos). Stereo, color, and range images and videos of 64 adult human subjects are acquired. Acquisitions are accompanied by information about the subjects' identity, gender, facial expression, approximate pose orientation, and the coordinates of some manually located facial fiducial points.

    Automatic facial expression tracking for 4D range scans

    This paper presents a fully automatic approach to spatio-temporal facial expression tracking for 4D range scans, without any manual intervention (such as specifying landmarks). The approach consists of three steps: rigid registration, facial model reconstruction, and facial expression tracking. A Scaling Iterative Closest Points (SICP) algorithm is introduced to compute the optimal rigid registration between a template facial model and a range scan while taking the scale problem into account. A deformable model, physically based on thin shells, is proposed to faithfully reconstruct the facial surface and texture from the range data. The reconstructed facial model is then used, through the same deformable model, to track the facial expressions presented in a sequence of range scans.
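
    To make the registration step concrete, here is a minimal sketch of one common Scaling ICP formulation: alternating nearest-neighbour correspondence with a closed-form similarity (scale, rotation, translation) fit via the Umeyama solution. The paper's SICP variant may differ in its details; this only illustrates the idea of estimating scale inside the ICP loop.

```python
import numpy as np
from scipy.spatial import cKDTree

def similarity_fit(src, dst):
    """Least-squares s, R, t such that dst ~= s * R @ src + t (Umeyama, 1991)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    Xc, Yc = src - mu_s, dst - mu_d
    cov = Yc.T @ Xc / len(src)                     # 3x3 cross-covariance
    U, d, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    scale = np.trace(np.diag(d) @ S) / ((Xc ** 2).sum() / len(src))
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def scaling_icp(template, scan, iters=30):
    """Align a template point set to a range scan, estimating scale as well."""
    tree = cKDTree(scan)
    pts = template.copy()
    for _ in range(iters):
        _, idx = tree.query(pts)                   # closest-point correspondences
        s, R, t = similarity_fit(template, scan[idx])
        pts = s * template @ R.T + t               # re-pose template for next round
    return s, R, t
```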