Recovering facial shape using a statistical model of surface normal direction
In this paper, we show how a statistical model of facial shape can be embedded within a shape-from-shading algorithm. We describe how facial shape can be captured using a statistical model of variations in surface normal direction. To construct this model, we make use of the azimuthal equidistant projection to map the distribution of surface normals from the polar representation on a unit sphere to Cartesian points on a local tangent plane. The distribution of surface normal directions is captured using the covariance matrix for the projected point positions. The eigenvectors of the covariance matrix define the modes of shape variation in the fields of transformed surface normals. We show how this model can be trained using surface normal data acquired from range images and how to fit the model to intensity images of faces using constraints on the surface normal direction provided by Lambert's law. We demonstrate that the combination of a global statistical constraint and a local irradiance constraint yields an efficient and accurate approach to facial shape recovery that is capable of recovering fine local surface details. We assess the accuracy of the technique on a variety of images with ground truth, as well as on real-world images.
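The model construction described above (azimuthal equidistant projection of unit normals onto a tangent plane, followed by eigen-analysis of the projected points) can be sketched as follows. The function names, array shapes, and the choice of the north pole as projection center are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def azimuthal_equidistant(normals):
    # normals: (N, 3) unit surface normals; project about the pole (0, 0, 1).
    # The projection preserves geodesic distance from the pole, which on the
    # unit sphere is the polar angle theta: point = theta * (cos phi, sin phi).
    nz = np.clip(normals[:, 2], -1.0, 1.0)
    theta = np.arccos(nz)
    phi = np.arctan2(normals[:, 1], normals[:, 0])
    return np.stack([theta * np.cos(phi), theta * np.sin(phi)], axis=1)

def shape_modes(normal_fields):
    # normal_fields: (num_faces, num_pixels, 3) training normals, e.g. from
    # range images. Each face becomes one long vector of projected 2D points;
    # the covariance eigenvectors are the modes of shape variation.
    X = np.stack([azimuthal_equidistant(f).ravel() for f in normal_fields])
    mean = X.mean(axis=0)
    C = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]          # largest modes first
    return mean, eigvals[order], eigvecs[:, order]
```

Note that the pole maps to the origin of the tangent plane and a normal on the equator maps to a point at distance pi/2, which is what makes the local distribution of directions amenable to ordinary covariance analysis.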
Tex2Shape: Detailed Full Human Body Geometry From a Single Image
We present a simple yet effective method to infer detailed full human body shape from only a single photograph. Our model can infer full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates. Results feature details even on parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region obtained from off-the-shelf methods. From a partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely with synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
Reconstruction of 3D faces by shape estimation and texture interpolation
This paper aims to address the ill-posed problem of reconstructing 3D faces from single 2D face images. An
extended Tikhonov regularization method is connected with the standard 3D morphable model in order to
reconstruct the 3D face shapes from a small set of 2D facial points. Further, by interpolating the input 2D
texture with the model texture and warping the interpolated texture to the reconstructed face shapes, 3D face
reconstruction is achieved. For the texture warping, the 2D face deformation has been learned from the model
texture using a set of facial landmarks. Our experimental results demonstrate the robustness of the proposed approach in reconstructing realistic 3D face shapes.
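The Tikhonov-regularized shape recovery described above reduces, in its standard form, to a damped least-squares solve; the linearized setup below (a matrix A mapping morphable-model shape coefficients to sparse 2D landmark residuals) is an assumption for illustration, and the paper's extended regularization may differ from this baseline:

```python
import numpy as np

def fit_shape_coefficients(A, b, lam=1e-2):
    """Standard Tikhonov-regularized least squares:
        minimize ||A @ alpha - b||^2 + lam * ||alpha||^2,
    solved in closed form as alpha = (A^T A + lam I)^{-1} A^T b.

    A   -- (2k, m) matrix mapping m shape coefficients to the 2D
           coordinates of k facial landmarks (linearized morphable model)
    b   -- (2k,) observed landmark positions minus the projected mean shape
    lam -- regularization weight; larger values shrink alpha toward the
           mean face, stabilizing the ill-posed sparse-landmark problem
    """
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ b)
```

With lam approaching zero this reduces to ordinary least squares; the regularizer is what keeps the recovery well behaved when a small set of 2D points must constrain many shape coefficients.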
Analysis of 3D Face Reconstruction
This thesis investigates the long-standing problem of 3D reconstruction from a single 2D face
image. Face reconstruction from a single 2D face image is an ill-posed problem involving estimation of the intrinsic and extrinsic camera parameters, the light parameters, the shape parameters,
and the texture parameters. The proposed approach has many potential applications in
law enforcement, surveillance, medicine, computer games, and the entertainment industry.
This problem is addressed using an analysis-by-synthesis framework by reconstructing a 3D
face model from identity photographs. Identity photographs are a widely used medium for
face identification and can be found on identity cards and passports.
The novel contribution of this thesis is a new technique for creating 3D face models from a single
2D face image. The proposed method uses the improved dense 3D correspondence obtained
using rigid and non-rigid registration techniques, whereas existing reconstruction methods rely
on optical flow to establish 3D correspondence. The resulting 3D face database is used
to create a statistical shape model.
The existing reconstruction algorithms recover shape by optimizing over all the parameters
simultaneously. The proposed algorithm simplifies the reconstruction problem by using a
step-wise approach, thus reducing the dimension of the parameter space and simplifying the
optimization problem. In the alignment step, a generic 3D face is aligned with the given 2D face
image by using anatomical landmarks. The texture is then warped onto the 3D model by using
the spatial alignment obtained previously. The 3D shape is then recovered by optimizing over
the shape parameters while matching a texture mapped model to the target image.
There are a number of advantages to this approach. First, it simplifies the optimization requirements and makes the optimization more robust. Second, there is no need to accurately
recover the illumination parameters. Third, there is no need to recover the texture parameters by using a texture synthesis approach. Fourth, quantitative analysis is used to
improve the quality of reconstruction by improving the cost function. Previous methods use
qualitative measures such as visual analysis and face recognition rates for evaluating reconstruction accuracy.
The improvement in the performance of the cost function results from an improved
feature space comprising the landmark and intensity features. Previously, the feature
space had not been evaluated with respect to reconstruction accuracy, leading to inaccurate
assumptions about its behaviour.
The proposed approach simplifies the reconstruction problem by using only identity images,
rather than placing effort on overcoming the pose, illumination and expression (PIE) variations.
This makes sense, as frontal face images under standard illumination conditions are widely
available and could be utilized for accurate reconstruction. The reconstructed 3D models with
texture can then be used for overcoming the PIE variations.
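The alignment step of the step-wise pipeline above (fitting a generic 3D face to 2D image landmarks before any shape optimization) is commonly posed as a similarity Procrustes problem. The sketch below is a generic solution under that assumption, not the thesis code; the subsequent shape step would then optimize only the shape coefficients with this alignment held fixed:

```python
import numpy as np

def align_landmarks(src, dst):
    """Least-squares similarity alignment (scale, rotation, translation)
    of generic-model landmarks src to image landmarks dst, both (k, 2),
    so that dst ~= scale * src @ R.T + t.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d                 # centered point sets
    U, sig, Vt = np.linalg.svd(S.T @ D)
    # Guard against a reflection in the optimal orthogonal factor.
    if np.linalg.det(Vt.T @ U.T) < 0:
        sig[-1] *= -1
        Vt[-1] *= -1
    R = (U @ Vt).T
    scale = sig.sum() / (S ** 2).sum()
    t = mu_d - scale * mu_s @ R.T
    return scale, R, t
```

Solving this in closed form is what makes the step-wise decomposition cheap: the alignment step involves no iterative search, so the later optimization runs over the shape parameters alone.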