Analysis of 3D Face Reconstruction
This thesis investigates the long-standing problem of 3D reconstruction from a single 2D face
image. Face reconstruction from a single 2D face image is an ill-posed problem involving estimation of the intrinsic and extrinsic camera parameters, light parameters, shape parameters
and texture parameters. The proposed approach has many potential applications in
law enforcement, surveillance, medicine, computer games and the entertainment industries.
This problem is addressed using an analysis-by-synthesis framework by reconstructing a 3D
face model from identity photographs. Identity photographs are a widely used medium for
face identification and can be found on identity cards and passports.
The novel contribution of this thesis is a new technique for creating 3D face models from a single
2D face image. The proposed method uses improved dense 3D correspondence obtained
using rigid and non-rigid registration techniques, whereas existing reconstruction methods use the
optical flow method for establishing 3D correspondence. The resulting 3D face database is used
to create a statistical shape model.
The existing reconstruction algorithms recover shape by optimizing over all the parameters
simultaneously. The proposed algorithm simplifies the reconstruction problem by using a stepwise
approach, thus reducing the dimension of the parameter space and simplifying the optimization
problem. In the alignment step, a generic 3D face is aligned with the given 2D face
image using anatomical landmarks. The texture is then warped onto the 3D model using
the spatial alignment obtained previously. The 3D shape is then recovered by optimizing over
the shape parameters while matching a texture-mapped model to the target image.
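The stepwise fit described above can be sketched in code. This is a toy 2D illustration with a linear shape model — the function names, the similarity-transform alignment and the least-squares shape solve are assumptions standing in for the thesis's actual pipeline, which matches a texture-mapped render against the target image:

```python
import numpy as np

def align_similarity(src, dst):
    """Alignment step: least-squares similarity transform (scale s,
    rotation R, translation t) mapping generic-model landmarks src
    onto image landmarks dst (Umeyama-style; reflections ignored)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    R = U @ Vt                        # optimal rotation
    s = S.sum() / (A ** 2).sum()      # optimal isotropic scale
    t = mu_d - s * mu_s @ R.T
    return s, R, t

def fit_shape(mean_shape, basis, target):
    """Shape step: with pose already fixed, solve for the shape
    coefficients only. For a linear model shape = mean + sum_k c_k * B_k
    this is ordinary least squares, so the optimisation stays small
    and well conditioned."""
    A = basis.reshape(len(basis), -1).T   # (points*dims, n_modes)
    b = (target - mean_shape).ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

In the thesis the shape-step residual is an intensity difference against the texture-mapped render; a landmark residual is used here only to keep the sketch self-contained.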
This approach has a number of advantages. First, it simplifies the optimization requirements and makes the optimization more robust. Second, there is no need to accurately
recover the illumination parameters. Third, there is no need to recover the texture parameters, since a texture synthesis approach is used instead. Fourth, quantitative analysis is used to
improve the quality of reconstruction by improving the cost function, whereas previous methods use
qualitative measures such as visual analysis and face recognition rates for evaluating reconstruction accuracy.
The improvement in the performance of the cost function results from improvements
in the feature space, which comprises landmark and intensity features. Previously, the feature
space had not been evaluated with respect to reconstruction accuracy, leading to inaccurate
assumptions about its behaviour.
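A cost combining the two feature groups can be sketched as a weighted sum; the function name and the weight are illustrative, not values from the thesis:

```python
import numpy as np

def reconstruction_cost(rendered, target, lm_model, lm_image, w_lm=0.5):
    """Illustrative cost over the two feature groups: a pixel-intensity
    term between the rendered model and the target image, and a landmark
    term between projected model landmarks and image landmarks.
    The trade-off weight w_lm is a free parameter (assumption)."""
    e_intensity = np.mean((rendered.astype(float) - target.astype(float)) ** 2)
    e_landmark = np.mean(np.linalg.norm(lm_model - lm_image, axis=1))
    return (1.0 - w_lm) * e_intensity + w_lm * e_landmark
```

Evaluating such a cost quantitatively against known ground truth is what allows the feature space itself to be tuned, rather than judged by visual inspection alone.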
The proposed approach simplifies the reconstruction problem by using only identity images,
rather than placing effort on overcoming the pose, illumination and expression (PIE) variations.
This makes sense, as frontal face images under standard illumination conditions are widely
available and could be utilized for accurate reconstruction. The reconstructed 3D models with
texture can then be used for overcoming the PIE variations.
Evaluating 3D human face reconstruction from a frontal 2D image, focusing on facial regions associated with foetal alcohol syndrome
Foetal alcohol syndrome (FAS) is a preventable condition caused by maternal alcohol consumption during pregnancy. The FAS facial phenotype is an important factor for diagnosis, alongside central nervous system impairments and growth abnormalities. Current methods for analysing the FAS facial phenotype rely on 3D facial image data, obtained from costly and complex surface scanning devices. An alternative is to use 2D images, which are easy to acquire with a digital camera or smart phone. However, 2D images lack the geometric accuracy required for accurate facial shape analysis. Our research offers a solution through the reconstruction of 3D human faces from single or multiple 2D images. We have developed a framework for evaluating 3D human face reconstruction from a single-input 2D image using a 3D face model for potential use in FAS assessment. We first built a generative morphable model of the face from a database of registered 3D face scans with diverse skin tones. Then we applied this model to reconstruct 3D face surfaces from single frontal images using a model-driven sampling algorithm. The accuracy of the predicted 3D face shapes was evaluated in terms of surface reconstruction error and the accuracy of FAS-relevant landmark locations and distances. Results show an average root mean square error of 2.62 mm. Our framework has the potential to estimate 3D landmark positions for parts of the face associated with the FAS facial phenotype. Future work aims to improve on the accuracy and adapt the approach for use in clinical settings.
Significance:
Our study presents a framework for constructing a 3D face model from registered 3D face scans and evaluating the accuracy of 3D face shape predictions from single 2D images. The results indicate low generalisation error and comparability to other studies. The reconstructions also provide insight into specific regions of the face relevant to FAS diagnosis. The proposed approach presents a potentially cost-effective and easily accessible imaging tool for FAS screening, yet its clinical application needs further research.
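The two evaluation measures used above — surface reconstruction error and landmark accuracy — reduce to short computations; the function names are illustrative, and which landmarks count as FAS-relevant is left to the caller:

```python
import numpy as np

def surface_rmse(pred_vertices, gt_vertices):
    """Root-mean-square surface error (e.g. in mm) over corresponding
    vertices of the predicted and ground-truth face meshes."""
    d2 = np.sum((pred_vertices - gt_vertices) ** 2, axis=1)
    return float(np.sqrt(d2.mean()))

def landmark_errors(pred_landmarks, gt_landmarks):
    """Per-landmark Euclidean errors, e.g. for FAS-relevant points such
    as the palpebral fissure endpoints (selection is an assumption)."""
    return np.linalg.norm(pred_landmarks - gt_landmarks, axis=1)
```

The 2.62 mm figure reported above is an average of exactly this kind of per-subject RMSE, which assumes dense correspondence between the predicted and ground-truth meshes.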
Query Based Face Retrieval From Automatic Reconstructed Images based on 3D Frontal View - Using EICA
Face recognition systems have been playing a vital role for several decades. Thus, various algorithms for face recognition have been developed for applications such as 'person identification', 'human-computer interaction' and 'security systems'. A framework for face recognition with different poses through face reconstruction is proposed in this paper. In the present work, the system is trained with only a single frontal face with normal illumination and expression. Instead of capturing the image of a person in different poses using a camera or video, different views of the 3D face are reconstructed with the help of a 3D face shape model. This automatically increases the size of the training set. This approach outperforms the present 2D techniques with a higher recognition rate. This paper describes a face detection and recognition approach which primarily focuses on Enhanced Independent Component Analysis (EICA) for Query Based Face Retrieval, and the implementation is done in Scilab. This method detects a static face (cropped photo as input) and also faces from a group picture, and these faces are reconstructed using the 3D face shape model. Image preprocessing is used in order to reduce the error rate for illuminated images. Scilab's SIVP toolbox is used for image analysis.
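An EICA-style retrieval stage is commonly built as PCA whitening followed by an ICA rotation. The sketch below is a minimal numpy re-implementation under that assumption (symmetric FastICA with a tanh nonlinearity), not the paper's Scilab code:

```python
import numpy as np

def _decorrelate(W):
    """Symmetric decorrelation: W <- (W W^T)^(-1/2) W."""
    U, _, Vt = np.linalg.svd(W)
    return U @ Vt

def fit_eica(X, n_components=3, iterations=100, seed=0):
    """Fit an EICA-style transform: PCA-whiten the vectorised face images,
    then estimate an ICA rotation via symmetric FastICA (tanh contrast).
    Returns the data mean mu and a projection T; features = (x - mu) @ T."""
    mu = X.mean(0)
    Xc = X - mu
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    K = Vt[:n_components].T / S[:n_components] * np.sqrt(len(X))  # whitening
    Z = Xc @ K
    rng = np.random.default_rng(seed)
    W = _decorrelate(rng.standard_normal((n_components, n_components)))
    for _ in range(iterations):
        Y = Z @ W.T
        G = np.tanh(Y)
        W = _decorrelate(G.T @ Z / len(Z) - np.diag((1 - G ** 2).mean(0)) @ W)
    return mu, K @ W.T

def retrieve(query, gallery_features, mu, T):
    """Query-based retrieval: index of the nearest gallery face in
    EICA feature space (Euclidean distance)."""
    f = (query - mu) @ T
    return int(np.argmin(np.linalg.norm(gallery_features - f, axis=1)))
```

In the paper's framework the gallery would hold features of the synthesized 3D views as well as the original frontal image, so a rotated probe can still find its identity.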
Fast Landmark Localization with 3D Component Reconstruction and CNN for Cross-Pose Recognition
Two approaches are proposed for cross-pose face recognition, one is based on
the 3D reconstruction of facial components and the other is based on the deep
Convolutional Neural Network (CNN). Unlike most 3D approaches that consider
holistic faces, the proposed approach considers 3D facial components. It
segments a 2D gallery face into components, reconstructs the 3D surface for
each component, and recognizes a probe face by component features. The
segmentation is based on the landmarks located by a hierarchical algorithm that
combines the Faster R-CNN for face detection and the Reduced Tree Structured
Model for landmark localization. The core part of the CNN-based approach is a
revised VGG network. We study the performances with different settings on the
training set, including the synthesized data from 3D reconstruction, the
real-life data from an in-the-wild database, and both types of data combined.
We investigate the performances of the network when it is employed as a
classifier or designed as a feature extractor. The two recognition approaches
and the fast landmark localization are evaluated in extensive experiments, and
compared to state-of-the-art methods to demonstrate their efficacy.
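The component segmentation step can be sketched as grouping the located landmarks into per-component bounding boxes. The 68-point indexing and padding below are assumptions (the iBUG convention), not necessarily the paper's exact component definitions:

```python
import numpy as np

# Assumed iBUG 68-point landmark layout; the paper's regions may differ.
COMPONENT_GROUPS = {
    "left_eye":  range(36, 42),
    "right_eye": range(42, 48),
    "nose":      range(27, 36),
    "mouth":     range(48, 68),
}

def component_boxes(landmarks, pad=8):
    """Segment a gallery face into components: one padded bounding box
    per landmark group, from which each 3D component surface would then
    be reconstructed and matched against the probe."""
    boxes = {}
    for name, idx in COMPONENT_GROUPS.items():
        pts = landmarks[list(idx)]
        x0, y0 = pts.min(axis=0) - pad
        x1, y1 = pts.max(axis=0) + pad
        boxes[name] = (float(x0), float(y0), float(x1), float(y1))
    return boxes
```

Component-wise matching tolerates pose better than holistic matching because each small surface patch deviates less from planarity and occludes less under rotation.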
UV-GAN: Adversarial Facial UV Map Completion for Pose-invariant Face Recognition
Recently proposed robust 3D face alignment methods establish either dense or
sparse correspondence between a 3D face model and a 2D facial image. The use of
these methods presents new challenges as well as opportunities for facial
texture analysis. In particular, by sampling the image using the fitted model,
a facial UV can be created. Unfortunately, due to self-occlusion, such a UV map
is always incomplete. In this paper, we propose a framework for training Deep
Convolutional Neural Network (DCNN) to complete the facial UV map extracted
from in-the-wild images. To this end, we first gather complete UV maps by
fitting a 3D Morphable Model (3DMM) to various multiview image and video
datasets, as well as leveraging on a new 3D dataset with over 3,000 identities.
Second, we devise a meticulously designed architecture that combines local and
global adversarial DCNNs to learn an identity-preserving facial UV completion
model. We demonstrate that by attaching the completed UV to the fitted mesh and
generating instances of arbitrary poses, we can increase pose variations for
training deep face recognition/verification models, and minimise pose
discrepancy during testing, which lead to better performance. Experiments on
both controlled and in-the-wild UV datasets prove the effectiveness of our
adversarial UV completion model. We achieve state-of-the-art verification
accuracy under the CFP frontal-profile protocol solely by combining
pose augmentation during training and pose discrepancy reduction during
testing. We will release the first in-the-wild UV dataset (which we refer to as WildUV),
comprising complete facial UV maps from 1,892 identities, for research
purposes.
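The self-occlusion that leaves a sampled UV map incomplete can be illustrated with a simple backface visibility test. This is a deliberate simplification (function name assumed): a real pipeline would also z-buffer against the mesh to catch occlusions by other parts of the face.

```python
import numpy as np

def uv_visibility_mask(vertex_normals, view_dir=(0.0, 0.0, 1.0)):
    """Mark which texels of the sampled UV map are valid: a vertex is
    treated as visible when its surface normal faces the camera
    (backface test only)."""
    v = np.asarray(view_dir, dtype=float)
    v = v / np.linalg.norm(v)
    return np.asarray(vertex_normals, dtype=float) @ v > 0.0
```

The completion network is then trained to inpaint exactly the texels where this mask is False, with the local and global discriminators judging the filled region and the whole map respectively.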