Dense 3D Face Correspondence
We present an algorithm that automatically establishes dense correspondences
between a large number of 3D faces. Starting from automatically detected sparse
correspondences on the outer boundary of 3D faces, the algorithm triangulates
existing correspondences and expands them iteratively by matching points of
distinctive surface curvature along the triangle edges. After exhausting
keypoint matches, further correspondences are established by generating evenly
distributed points within triangles by evolving level set geodesic curves from
the centroids of large triangles. A deformable model (K3DM) is constructed from
the dense corresponded faces and an algorithm is proposed for morphing the K3DM
to fit unseen faces. This algorithm iterates between rigid alignment of an
unseen face followed by regularized morphing of the deformable model. We have
extensively evaluated the proposed algorithms on synthetic data and real 3D
faces from the FRGCv2, Bosphorus, BU3DFE and UND Ear databases using
quantitative and qualitative benchmarks. Our algorithm achieved dense
correspondences with a mean localisation error of 1.28 mm on synthetic faces and
detected anthropometric landmarks on unseen real faces from the FRGCv2
database with 3 mm precision. Furthermore, our deformable model fitting
algorithm achieved 98.5% face recognition accuracy on the FRGCv2 database and
98.6% on the Bosphorus database. Our dense model is also able to generalize to
unseen datasets.
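The fitting loop sketched below illustrates the alternation the abstract describes: rigid alignment of the scan to the current model estimate, followed by regularized morphing of a PCA-style deformable model. It is a minimal sketch, not the authors' K3DM code; the Tikhonov regularizer, the assumption that the scan is already in dense correspondence with the model vertices, and all names and dimensions are illustrative.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def fit_model(scan, mean_shape, basis, lam=0.1, iters=10):
    """scan: (N,3) points in dense correspondence with the model vertices.
    mean_shape: (N,3); basis: (3N,K) PCA modes; lam: morphing regularizer."""
    alpha = np.zeros(basis.shape[1])
    for _ in range(iters):
        model = mean_shape + (basis @ alpha).reshape(-1, 3)
        R, t = rigid_align(scan, model)              # step 1: rigid alignment
        aligned = scan @ R.T + t
        resid = (aligned - mean_shape).ravel()
        A = basis.T @ basis + lam * np.eye(basis.shape[1])
        alpha = np.linalg.solve(A, basis.T @ resid)  # step 2: regularized morph
    return alpha
```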
Gender recognition from facial images: Two or three dimensions?
This paper compares encoded features from two-dimensional (2D) and three-dimensional (3D) face images in order to achieve automatic gender recognition with high accuracy and robustness. The Fisher vector encoding method is employed to produce 2D, 3D, and fused features with escalated discriminative power. For 3D face analysis, a two-source photometric stereo (PS) method is introduced that enables 3D surface reconstructions with accurate details as well as desirable efficiency. Moreover, a 2D + 3D imaging device, taking the two-source PS method as its core, has been developed that can simultaneously gather color images for 2D evaluations and PS images for 3D analysis. This system inherits the superior reconstruction accuracy of the standard (three or more light) PS method but simplifies both the reconstruction algorithm and the hardware design by requiring only two light sources. It also offers great potential for facilitating human-computer interaction by being accurate, cheap, efficient, and nonintrusive. Ten types of low-level 2D and 3D features have been extracted and encoded for Fisher vector gender recognition. Evaluations of the Fisher vector encoding method have been performed on the FERET, Color FERET, LFW, and FRGCv2 databases, yielding 97.7%, 98.0%, 92.5%, and 96.7% accuracy, respectively. In addition, the comparison of 2D and 3D features has been drawn from a self-collected dataset, constructed with the aid of the 2D + 3D imaging device in a series of data capture experiments. Across these experiments and evaluations, the Fisher vector encoding method outperforms most state-of-the-art gender recognition methods. It has also been observed that 3D features reconstructed by the two-source PS method further boost Fisher vector gender recognition performance, i.e., by up to a 6% increase on the self-collected database.
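As a rough illustration of the encoding step, the sketch below computes a first-order Fisher vector over a set of local descriptors using a diagonal-covariance GMM from scikit-learn. The descriptor type, vocabulary size, and normalization choices are assumptions; the paper's full pipeline (including the specific 2D/3D features) is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """descriptors: (N,D) local features; gmm: a fitted GaussianMixture with
    covariance_type='diag'. Returns a (K*D,) first-order Fisher vector."""
    q = gmm.predict_proba(descriptors)                    # (N,K) posteriors
    diff = descriptors[:, None, :] - gmm.means_[None]     # (N,K,D)
    diff /= np.sqrt(gmm.covariances_)[None]               # sigma-normalized
    fv = (q[..., None] * diff).sum(0)                     # (K,D) mean gradients
    fv /= descriptors.shape[0] * np.sqrt(gmm.weights_)[:, None]
    fv = fv.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)              # L2 normalization

# Usage sketch: fit the vocabulary once on training descriptors, then encode.
# gmm = GaussianMixture(64, covariance_type='diag').fit(train_descriptors)
# x = fisher_vector(image_descriptors, gmm)
```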
2D and 3D computer vision analysis of gaze, gender and age
Human-Computer Interaction (HCI) has been an active research area for over four decades. Research studies and commercial designs in this area have been largely facilitated by the visual modality, which brings diversified functionality and improved usability to HCI interfaces by employing various computer vision techniques. This thesis explores a number of facial cues, such as gender, age and gaze, by performing 2D and 3D based computer vision analysis. The ultimate aim is to create a natural HCI strategy that can fulfil user expectations, augment user satisfaction and enrich user experience by understanding user characteristics and behaviours. To this end, salient features have been extracted and analysed from 2D and 3D face representations; 3D reconstruction algorithms and their compatible real-world imaging systems have been investigated; and case study HCI systems have been designed to demonstrate the reliability, robustness, and applicability of the proposed methods. More specifically, an unsupervised approach has been proposed to localise eye centres in images and videos accurately and efficiently. This is achieved by utilising two types of geometric features and eye models, complemented by an iris radius constraint and a selective oriented gradient filter specifically tailored to this modular scheme. This approach resolves challenges such as interfering facial edges, undesirable illumination conditions, head poses, and the presence of facial accessories and makeup. Tested on three publicly available databases (the BioID database, the GI4E database and the Extended Yale Face Database B) and a self-collected database, this method outperforms all compared methods and thus proves to be highly accurate and robust. Based on this approach, a gaze gesture recognition algorithm has been designed to increase the interactivity of HCI systems by encoding eye saccades into a communication channel, similar to the role of hand gestures. As well as analysing eye/gaze data that represent user behaviours and reveal user intentions, this thesis also investigates the automatic recognition of user demographics such as gender and age. The Fisher vector encoding algorithm is employed to construct visual vocabularies as salient features for gender and age classification. Algorithm evaluations on three publicly available databases (the FERET database, the LFW database and the FRGCv2 database) demonstrate the superior performance of the proposed method in both laboratory and unconstrained environments. In order to achieve enhanced robustness, a two-source photometric stereo method has been introduced to recover surface normals, such that more invariant 3D facial features become available to further boost classification accuracy and robustness. A 2D+3D imaging system has been designed for the construction of a self-collected dataset including 2D and 3D facial data. Experiments show that utilisation of 3D facial features can increase the gender classification rate by up to 6% (on the self-collected dataset) and the age classification rate by up to 12% (on the Photoface database). Finally, two case study HCI systems, a gaze gesture based map browser and a directed advertising billboard, have been designed by adopting all the proposed algorithms as well as the fully compatible imaging system. The proposed algorithms ensure that the case study systems are robust to head pose and illumination variation and achieve excellent real-time performance. Overall, the proposed HCI strategy, enabled by reliably recognised facial cues, can serve to spawn a wide array of innovative systems and to bring HCI to a more natural and intelligent state.
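For orientation, the sketch below shows the generic least-squares normal recovery used by standard (three or more light) photometric stereo; the thesis' two-source variant adds extra constraints to make do with two lights, which are not reproduced here. Calibrated light directions and registered grayscale captures are assumed as inputs.

```python
import numpy as np

def photometric_stereo(images, lights):
    """images: (n, H, W) grayscale captures under n distant lights;
    lights: (n, 3) unit light directions. Returns unit surface normals
    (H, W, 3) and albedo (H, W), assuming a Lambertian surface."""
    n, H, W = images.shape
    I = images.reshape(n, -1)                        # (n, H*W) intensities
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # solve lights @ G ~= I
    G = G.T.reshape(H, W, 3)                         # albedo-scaled normals
    albedo = np.linalg.norm(G, axis=-1)
    normals = G / np.maximum(albedo[..., None], 1e-12)
    return normals, albedo
```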
Learning from Millions of 3D Scans for Large-scale 3D Face Recognition
Deep networks trained on millions of facial images are believed to be closely
approaching human-level performance in face recognition. However, open-world
face recognition remains a challenge. Although 3D face recognition has
an inherent edge over its 2D counterpart, it has not benefited from the recent
developments in deep learning due to the unavailability of large training as
well as large test datasets. Recognition accuracies have already saturated on
existing 3D face datasets due to their small gallery sizes. Unlike 2D
photographs, 3D facial scans cannot be sourced from the web, causing a
bottleneck in the development of deep 3D face recognition networks and
datasets. Against this backdrop, we propose a method for generating a large corpus
of labeled 3D face identities and their multiple instances for training and a
protocol for merging the most challenging existing 3D datasets for testing. We
also propose the first deep CNN model designed specifically for 3D face
recognition and trained on 3.1 million 3D facial scans of 100K identities. Our
test dataset comprises 1,853 identities with a single 3D scan in the gallery
and another 31K scans as probes, which is several orders of magnitude larger
than existing ones. Without fine-tuning on this dataset, our network already
outperforms state-of-the-art face recognition by over 10%. We fine-tune our
network on the gallery set to perform end-to-end large-scale 3D face
recognition, which further improves accuracy. Finally, we show the efficacy of
our method for the open-world face recognition problem.
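Although the paper's network and data generation pipeline are not reproduced here, a common way to feed 3D scans to a CNN is to rasterize each scan into a fixed-size, image-like map. The sketch below shows one such depth-map rasterization under assumed choices (frontal-aligned input, resolution, min-max normalization); it is illustrative, not the authors' preprocessing.

```python
import numpy as np

def depth_map(points, size=160):
    """points: (N,3) frontal-aligned facial scan.
    Returns a (size, size) depth image normalized to [0, 1]."""
    xy = points[:, :2]
    xy = (xy - xy.min(0)) / (xy.max(0) - xy.min(0) + 1e-12)   # to [0, 1]
    ij = np.clip((xy * (size - 1)).astype(int), 0, size - 1)  # pixel indices
    z = points[:, 2]
    img = np.full((size, size), z.min())                      # background
    order = np.argsort(z)          # write nearest (largest z) point last
    img[ij[order, 1], ij[order, 0]] = z[order]
    return (img - img.min()) / (img.max() - img.min() + 1e-12)
```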
Using 3D Representations of the Nasal Region for Improved Landmarking and Expression Robust Recognition
This paper investigates the performance of different representations of the 3D human nasal region for expression-robust recognition. By performing evaluations on the depth and surface normal components of the facial surface, the nasal region is shown to be relatively consistent across various expressions, providing motivation for using the nasal region as a biometric. A new, efficient landmarking algorithm that thresholds the local surface normal components is proposed and demonstrated to improve recognition performance for nasal curves derived from both the depth and surface normal components. The use of the Shape Index for feature extraction is also investigated and shown to produce good recognition performance.
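For reference, the Shape Index mentioned above is Koenderink's curvature-based descriptor, computable per vertex from the two principal curvatures. A minimal sketch, using arctan2 to handle umbilic points where k1 = k2:

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1] from principal curvatures k1 >= k2:
    -1 spherical cup, 0 saddle, +1 spherical cap (convex dome)."""
    k1, k2 = np.asarray(k1, float), np.asarray(k2, float)
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
```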
MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
In this work we propose a novel model-based deep convolutional autoencoder
that addresses the highly challenging problem of reconstructing a 3D human face
from a single in-the-wild color image. To this end, we combine a convolutional
encoder network with an expert-designed generative model that serves as
decoder. The core innovation is our new differentiable parametric decoder that
encapsulates image formation analytically based on a generative model. Our
decoder takes as input a code vector with exactly defined semantic meaning that
encodes detailed face pose, shape, expression, skin reflectance and scene
illumination. Due to this new way of combining CNN-based with model-based face
reconstruction, the CNN-based encoder learns to extract semantically meaningful
parameters from a single monocular input image. For the first time, a CNN
encoder and an expert-designed generative model can be trained end-to-end in an
unsupervised manner, which renders training on very large (unlabeled)
real-world data feasible. The obtained reconstructions compare favorably to current
state-of-the-art approaches in terms of quality and richness of representation.
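To make the "code vector with exactly defined semantic meaning" concrete, the sketch below partitions a flat code into named parameter blocks and decodes geometry with a linear 3DMM. The block dimensions and the 3DMM bases are illustrative assumptions; the actual MoFA decoder additionally renders the image differentiably (reflectance, illumination, camera) so that gradients can flow back into the encoder.

```python
import numpy as np

# Assumed block sizes; 27 illumination params = 3 spherical-harmonic bands
# per color channel. These are illustrative, not the paper's exact values.
CODE_DIMS = {"pose": 6, "shape": 80, "expression": 64,
             "reflectance": 80, "illumination": 27}

def split_code(z):
    """Slice a flat code vector into named semantic parameter blocks."""
    parts, i = {}, 0
    for name, d in CODE_DIMS.items():
        parts[name] = z[i:i + d]
        i += d
    return parts

def decode_geometry(parts, mean_shape, id_basis, exp_basis):
    """Linear 3DMM: per-vertex positions from shape/expression coefficients.
    mean_shape: (3N,); id_basis: (3N,80); exp_basis: (3N,64)."""
    v = mean_shape + id_basis @ parts["shape"] + exp_basis @ parts["expression"]
    return v.reshape(-1, 3)
```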
Expressive Body Capture: 3D Hands, Face, and Body from a Single Image
To facilitate the analysis of human actions, interactions and emotions, we
compute a 3D model of human body pose, hand pose, and facial expression from a
single monocular image. To achieve this, we use thousands of 3D scans to train
a new, unified, 3D model of the human body, SMPL-X, that extends SMPL with
fully articulated hands and an expressive face. Learning to regress the
parameters of SMPL-X directly from images is challenging without paired images
and 3D ground truth. Consequently, we follow the approach of SMPLify, which
estimates 2D features and then optimizes model parameters to fit the features.
We improve on SMPLify in several significant ways: (1) we detect 2D features
corresponding to the face, hands, and feet and fit the full SMPL-X model to
these; (2) we train a new neural network pose prior using a large MoCap
dataset; (3) we define a new interpenetration penalty that is both fast and
accurate; (4) we automatically detect gender and the appropriate body models
(male, female, or neutral); (5) our PyTorch implementation achieves a speedup
of more than 8x over Chumpy. We use the new method, SMPLify-X, to fit SMPL-X to
both controlled images and images in the wild. We evaluate 3D accuracy on a new
curated dataset comprising 100 images with pseudo ground-truth. This is a step
towards automatic expressive human capture from monocular RGB data. The models,
code, and data are available for research purposes at
https://smpl-x.is.tue.mpg.de.
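The optimize-to-fit idea (detect 2D features, then minimize reprojection error plus priors) can be sketched in PyTorch as below. The body_model and camera callables, parameter dimensions, prior, and weights are placeholders, not the released SMPLify-X implementation.

```python
import torch

def fit(body_model, camera, keypoints_2d, conf, pose_prior, steps=100):
    """keypoints_2d: (J,2) detections; conf: (J,) detector confidences.
    body_model(pose, betas) -> (J,3) joints; camera projects them to pixels."""
    pose = torch.zeros(63, requires_grad=True)    # body pose (placeholder dim)
    betas = torch.zeros(10, requires_grad=True)   # shape coefficients
    opt = torch.optim.LBFGS([pose, betas], max_iter=steps)

    def closure():
        opt.zero_grad()
        joints_3d = body_model(pose, betas)
        proj = camera(joints_3d)                  # (J,2) pixel coordinates
        reproj = (conf[:, None] * (proj - keypoints_2d) ** 2).sum()
        # data term + learned pose prior + simple shape regularizer
        loss = reproj + pose_prior(pose) + 1e-2 * (betas ** 2).sum()
        loss.backward()
        return loss

    opt.step(closure)
    return pose.detach(), betas.detach()
```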