Adaptive face modelling for reconstructing 3D face shapes from single 2D images
Example-based statistical face models using principal component analysis (PCA) have been widely deployed for three-dimensional (3D) face reconstruction and face recognition. Two factors of general concern in such models are the size of the training dataset and the selection of examples for the training set. The representational power (RP) of an example-based model is its capability to depict a new 3D face for a given 2D face image. The RP of the model can be increased by increasing the number of training samples. In this contribution, a novel approach is proposed to increase the RP of the 3D face reconstruction model by deforming a set of examples in the training dataset. A PCA-based 3D face model is adapted for each new near-frontal input face image to reconstruct the 3D face shape. Further, an extended Tikhonov regularisation method has been
Reconstructing 3D face shapes from single 2D images using an adaptive deformation model
The Representational Power (RP) of an example-based model is its capability to depict a new 3D face for a given 2D face image. In this contribution, a novel approach is proposed to increase the RP of the PCA-based 3D reconstruction model by deforming a set of examples in the training dataset. By adding these deformed samples to the original training samples, we gain more RP. A PCA-based 3D model is adapted for each new input face image by deforming 3D faces in the training dataset. This adapted model is used to reconstruct the 3D face shape for the given near-frontal input 2D face image. Our experimental results show that the proposed adaptive model considerably improves the RP of the conventional PCA-based model.
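The machinery shared by these two abstracts can be sketched as follows: a PCA shape model expresses a 3D face as the mean shape plus a weighted combination of principal components, and the weights are estimated from 2D observations with a Tikhonov-regularised least-squares fit. This is a generic illustration under invented names and dimensions, not the authors' implementation.

```python
import numpy as np

# Hypothetical PCA shape model (dimensions invented for illustration):
# a 3D face is mean_shape + components @ coeffs.
n_vertices = 5000
mean_shape = np.zeros(3 * n_vertices)              # stacked (x, y, z)
components = np.random.randn(3 * n_vertices, 60)   # top-60 eigenvectors
eigenvalues = np.linspace(10.0, 0.1, 60)           # per-component variances

def reconstruct(coeffs):
    """3D shape = mean + linear combination of principal components."""
    return mean_shape + components @ coeffs

def fit_coeffs(obs_matrix, obs_values, lam=0.1):
    """Tikhonov-regularised least-squares fit of the shape coefficients.

    obs_matrix maps the stacked 3D shape to whatever is observed in the
    2D image (e.g. projected landmark positions); the regulariser,
    weighted by the inverse PCA variances, keeps the solution plausible
    under the model prior.
    """
    A = obs_matrix @ components
    lhs = A.T @ A + lam * np.diag(1.0 / eigenvalues)
    rhs = A.T @ (obs_values - obs_matrix @ mean_shape)
    return np.linalg.solve(lhs, rhs)
```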
3D Face Reconstruction from Light Field Images: A Model-free Approach
Reconstructing 3D facial geometry from a single RGB image has recently
instigated wide research interest. However, it is still an ill-posed problem,
and most methods rely on prior models, hence undermining the accuracy of the
recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPI)
obtained from light field cameras and learn CNN models that recover horizontal
and vertical 3D facial curves from the respective horizontal and vertical EPIs.
Our 3D face reconstruction network (FaceLFnet) comprises a densely connected
architecture to learn accurate 3D facial curves from low resolution EPIs. To
train the proposed FaceLFnets from scratch, we synthesize photo-realistic light
field images from 3D facial scans. The curve-by-curve 3D face estimation
approach allows the networks to learn from only 14K images of 80 identities,
which still comprise over 11 million EPIs/curves. The estimated facial curves
are merged into a single point cloud, to which a surface is fitted to get the
final 3D face. Our method is model-free, requires only a few training samples
to learn FaceLFnet, and can reconstruct 3D faces with high accuracy from single
light field images under varying poses, expressions and lighting conditions.
Comparisons on the BU-3DFE and BU-4DFE datasets show that our method reduces
reconstruction errors by over 20% compared to the recent state of the art.
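The final merging step described above (the CNN curve estimation itself is omitted) can be sketched roughly as follows: depths recovered along horizontal and vertical EPIs are stacked into a single point cloud, to which a surface would then be fitted. The array layout is an assumption for illustration, not the paper's data format.

```python
import numpy as np

def curves_to_pointcloud(horizontal_depths, vertical_depths):
    """Merge row-wise and column-wise 3D facial curves into one point cloud.

    Both arguments are assumed to be (H, W) depth maps assembled from the
    curves predicted by the two FaceLFnets (a hypothetical layout).
    """
    h, w = horizontal_depths.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts_h = np.stack([xs, ys, horizontal_depths], axis=-1).reshape(-1, 3)
    pts_v = np.stack([xs, ys, vertical_depths], axis=-1).reshape(-1, 3)
    # Concatenating both estimates lets the subsequent surface fit
    # (e.g. Poisson reconstruction) average out per-curve noise.
    return np.concatenate([pts_h, pts_v], axis=0)
```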
A 3D Face Modelling Approach for Pose-Invariant Face Recognition in a Human-Robot Environment
Face analysis techniques have become a crucial component of human-machine
interaction in the fields of assistive and humanoid robotics. However, the
variations in head-pose that arise naturally in these environments are still a
great challenge. In this paper, we present a real-time capable 3D face
modelling framework for 2D in-the-wild images that is applicable to robotics.
The fitting of the 3D Morphable Model is based exclusively on automatically
detected landmarks. After fitting, the face can be corrected in pose and
transformed back to a frontal 2D representation that is more suitable for face
recognition. We conduct face recognition experiments with non-frontal images
from the MUCT database and uncontrolled, in-the-wild images from the PaSC
database, the most challenging face recognition database to date, showing
improved performance. Finally, we present our SCITOS G5 robot system, which
incorporates our framework as a means of image pre-processing for face
analysis.
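The landmark-based fitting and pose correction described above can be illustrated with a generic weak-perspective landmark fit: estimate a scaled rotation and translation aligning the 3DMM landmarks to the detected 2D landmarks, then undo the rotation to obtain the frontal view. This is a standard sketch, not the paper's fitting algorithm.

```python
import numpy as np

def estimate_pose(landmarks_2d, landmarks_3d):
    """Weak-perspective pose from matched 2D (N, 2) / 3D (N, 3) landmarks.

    Solves landmarks_2d ~ s * R[:2] @ X + t in the least-squares sense,
    then projects the recovered rows onto the rotation group.
    """
    X = np.hstack([landmarks_3d, np.ones((len(landmarks_3d), 1))])
    P, *_ = np.linalg.lstsq(X, landmarks_2d, rcond=None)   # (4, 2)
    M, t = P[:3].T, P[3]             # scaled rotation rows, 2D translation
    s = 0.5 * (np.linalg.norm(M[0]) + np.linalg.norm(M[1]))
    r1 = M[0] / np.linalg.norm(M[0])
    r2 = M[1] / np.linalg.norm(M[1])
    U, _, Vt = np.linalg.svd(np.stack([r1, r2, np.cross(r1, r2)]))
    return s, U @ Vt, t              # scale, rotation R, translation

# Frontalisation: apply R.T to the fitted 3D shape and re-render,
# giving the pose-corrected 2D representation used for recognition.
```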
Modeling Caricature Expressions by 3D Blendshape and Dynamic Texture
The problem of deforming an artist-drawn caricature according to a given
normal face expression is of interest in applications such as social media,
animation and entertainment. This paper presents a solution to the problem,
with an emphasis on enhancing the ability to create desired expressions while
preserving the identity exaggeration style of the caricature, which imposes
challenges due to the complicated nature of caricatures. The key to our
solution is a novel method for modelling caricature expressions, which extends
the traditional 3DMM representation to the caricature domain. The method
consists of
shape modelling and texture generation for caricatures. Geometric optimization
is developed to create identity-preserving blendshapes for reconstructing
accurate and stable geometric shape, and a conditional generative adversarial
network (cGAN) is designed for generating dynamic textures under target
expressions. The combination of shape and texture components allows the
non-trivial expressions of a caricature to be effectively defined by this
extension of the popular 3DMM representation, so that a caricature can be
flexibly deformed into arbitrary expressions with visually good results in
both shape and color spaces. The experiments demonstrate the effectiveness
of the proposed method.
Comment: Accepted by the 28th ACM International Conference on Multimedia
(ACM MM 2020).
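The blendshape part of the shape modelling admits a compact sketch: a deformed caricature is the identity mesh plus a weighted sum of expression offsets, where the offsets are the identity-preserving blendshapes mentioned above. All names and shapes are illustrative.

```python
import numpy as np

def apply_blendshapes(identity_mesh, blendshape_deltas, weights):
    """Deform a caricature mesh by a weighted sum of expression offsets.

    identity_mesh:     (V, 3) caricature vertices
    blendshape_deltas: (K, V, 3) identity-preserving expression offsets
    weights:           (K,) expression activations, typically in [0, 1]
    """
    return identity_mesh + np.tensordot(weights, blendshape_deltas, axes=1)
```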
3D GANs and Latent Space: A comprehensive survey
Generative Adversarial Networks (GANs) have emerged as a significant player
in generative modeling by mapping lower-dimensional random noise to
higher-dimensional spaces. These networks have been used to generate
high-resolution images and 3D objects. The efficient modeling of 3D objects and
human faces is crucial in the development process of 3D graphical environments
such as games or simulations. 3D GANs are a new type of generative model used
for 3D reconstruction, point cloud reconstruction, and 3D semantic scene
completion. The choice of distribution for noise is critical as it represents
the latent space. Understanding a GAN's latent space is essential for
fine-tuning the generated samples, as demonstrated by the morphing of
semantically meaningful parts of images. In this work, we explore the latent
space and 3D GANs, examine several GAN variants and training methods to gain
insights into improving 3D GAN training, and suggest potential future
directions for further research.
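The latent-space morphing mentioned above is commonly demonstrated by interpolating between two noise vectors and decoding each intermediate point with a trained generator; spherical interpolation is often preferred for Gaussian latents. The sketch below assumes a generic generator G and is not tied to any particular 3D GAN.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors (assumes the
    vectors are not parallel, so sin(omega) != 0)."""
    cos_omega = z0 @ z1 / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0
            + np.sin(t * omega) * z1) / np.sin(omega)

# Morphing with a trained generator G (assumed; e.g. a 3D GAN that maps
# latents to voxel grids or meshes):
# frames = [G(slerp(z0, z1, t)) for t in np.linspace(0.0, 1.0, 8)]
```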
Semantic 3D Reconstruction with Finite Element Bases
We propose a novel framework for the discretisation of multi-label problems
on arbitrary, continuous domains. Our work bridges the gap between general FEM
discretisations and labeling problems that arise in a variety of computer
vision tasks, including, for instance, those derived from the generalised Potts
model. Starting from the popular formulation of labeling as a convex relaxation
by functional lifting, we show that FEM discretisation is valid for the most
general case, where the regulariser is anisotropic and non-metric. While our
findings are generic and applicable to different vision problems, we
demonstrate their practical implementation in the context of semantic 3D
reconstruction, where such regularisers have proved particularly beneficial.
The proposed FEM approach leads to a smaller memory footprint as well as faster
computation, and it constitutes a very simple way to enable variable, adaptive
resolution within the same model.
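For concreteness, one common form of the convex relaxation by functional lifting reads as follows (the paper's anisotropic, non-metric setting is more general):

```latex
\min_{u}\ \sum_{i=1}^{L} \int_{\Omega} \rho_i(x)\, u_i(x)\, \mathrm{d}x
        + \sum_{i=1}^{L} \int_{\Omega} \phi\bigl(\nabla u_i(x)\bigr)\, \mathrm{d}x
\qquad \text{s.t.}\quad u_i(x) \ge 0,\quad \sum_{i=1}^{L} u_i(x) = 1,
```

Here the rho_i are per-label data costs, phi is the regulariser, and the relaxed indicator functions u_i live on the simplex; an FEM discretisation expands each u_i in finite element basis functions, u_i(x) = sum_k c_{ik} psi_k(x), which is what enables variable, adaptive resolution within one model.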
Towards a complete 3D morphable model of the human head
Three-dimensional Morphable Models (3DMMs) are powerful statistical tools for
representing the 3D shapes and textures of an object class. Here we present the
most complete 3DMM of the human head to date that includes face, cranium, ears,
eyes, teeth and tongue. To achieve this, we propose two methods for combining
existing 3DMMs of different overlapping head parts: (i) using a regressor to
complete missing parts of one model using the other, and (ii) using the
Gaussian Process framework to blend covariance matrices from multiple models.
Thus we
build a new combined face-and-head shape model that blends the variability and
facial detail of an existing face model (the LSFM) with the full head modelling
capability of an existing head model (the LYHM). Then we construct and fuse a
highly-detailed ear model to extend the variation of the ear shape. Eye and eye
region models are incorporated into the head model, along with basic models of
the teeth, tongue and inner mouth cavity. The new model achieves
state-of-the-art performance. We use our model to reconstruct full head
representations from single, unconstrained images allowing us to parameterize
craniofacial shape and texture, along with the ear shape, eye gaze and eye
color.
Comment: 18 pages, 18 figures. Submitted to Transactions on Pattern Analysis
and Machine Intelligence (TPAMI) on the 9th of October as an extension of the
original oral CVPR paper: arXiv:1903.0378
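Method (ii) above, blending covariance matrices, can be caricatured as a convex combination of two model covariances defined on a shared registered mesh, followed by re-deriving a PCA basis for the combined model. The scalar weighting below is a simplification of the paper's Gaussian Process framework, which supports spatially varying blending.

```python
import numpy as np

def blend_covariances(cov_a, cov_b, alpha=0.5):
    """Blend two model covariances defined on the same registered mesh.

    cov_a, cov_b: (D, D) covariances of two overlapping part models,
    brought into one vertex correspondence beforehand. A scalar convex
    combination keeps the result positive semi-definite; the Gaussian
    Process framework generalises this to spatially varying weights.
    """
    blended = alpha * cov_a + (1.0 - alpha) * cov_b
    # Re-derive a PCA basis (eigendecomposition) for the combined 3DMM.
    evals, evecs = np.linalg.eigh(blended)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]
```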
07171 Abstracts Collection -- Visual Computing -- Convergence of Computer Graphics and Computer Vision
From 22.04. to 27.04.2007, the Dagstuhl Seminar 07171 "Visual Computing -
Convergence of Computer Graphics and Computer Vision" was held in the
International Conference and Research Center (IBFI), Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.