Gaussian Process Morphable Models
Statistical shape models (SSMs) represent a class of shapes as a normal
distribution of point variations, whose parameters are estimated from example
shapes. Principal component analysis (PCA) is applied to obtain a
low-dimensional representation of the shape variation in terms of the leading
principal components. In this paper, we propose a generalization of SSMs,
called Gaussian Process Morphable Models (GPMMs). We model the shape variations
with a Gaussian process, which we represent using the leading components of its
Karhunen-Loève expansion. To compute the expansion, we make use of an
approximation scheme based on the Nyström method. The resulting model can be
seen as a continuous analogue of an SSM. However, while for SSMs the shape
variation is restricted to the span of the example data, with GPMMs we can
define the shape variation using any Gaussian process. For example, we can
build shape models that correspond to classical spline models, and thus do not
require any example data. Furthermore, Gaussian processes make it possible to
combine different models. For example, an SSM can be extended with a spline
model, to obtain a model that incorporates learned shape characteristics, but
is flexible enough to explain shapes that cannot be represented by the SSM. We
introduce a simple algorithm for fitting a GPMM to a surface or image. This
results in a non-rigid registration approach, whose regularization properties
are defined by a GPMM. We show how we can obtain different registration
schemes, including methods for multi-scale, spatially-varying or hybrid
registration, by constructing an appropriate GPMM. As our approach strictly
separates modelling from the fitting process, this is all achieved without
changes to the fitting algorithm. We show the applicability and versatility of
GPMMs on a clinical use case, where the goal is the model-based segmentation of
3D forearm images.
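To make the construction above concrete, here is a minimal numpy sketch of the low-rank step: a squared-exponential kernel is eigendecomposed on a small set of inducing points (the Nyström method), giving a feature map Phi with K ≈ Phi Phi^T, so that drawing standard-normal coefficients yields random deformations from the model. The toy reference shape, kernel parameters, and all names are illustrative assumptions, not the paper's implementation; a real GPMM would also model a vector-valued deformation over mesh vertices rather than a scalar one.

```python
import numpy as np

def se_kernel(X, Y, sigma=0.25, scale=0.01):
    """Squared-exponential kernel on point coordinates (an assumed choice)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return scale * np.exp(-d2 / sigma**2)

rng = np.random.default_rng(0)

# Reference shape: a toy 1D profile; a real GPMM would use mesh vertices.
X = np.stack([np.linspace(0.0, 1.0, 500), np.zeros(500)], axis=1)

# Nystroem step: eigendecompose the kernel on m inducing points and build
# the feature map Phi with K ~= Phi @ Phi.T.
m = 50
Xm = X[rng.choice(len(X), m, replace=False)]
lam, U = np.linalg.eigh(se_kernel(Xm, Xm))
lam, U = lam[::-1], U[:, ::-1]                 # sort eigenvalues descending
r = int((lam > 1e-8 * lam[0]).sum())           # keep the significant components
Phi = se_kernel(X, Xm) @ U[:, :r] / np.sqrt(lam[:r])

# Standard-normal model coefficients give a random deformation field whose
# covariance approximates the kernel (here: one scalar displacement per point).
alpha = rng.standard_normal(r)
warped_x = X[:, 0] + Phi @ alpha
print(r, warped_x[:3])
```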
Morphable Face Models - An Open Framework
In this paper, we present a novel open-source pipeline for face registration
based on Gaussian processes as well as an application to face image analysis.
Non-rigid registration of faces is significant for many applications in
computer vision, such as the construction of 3D Morphable face models (3DMMs).
Gaussian Process Morphable Models (GPMMs) unify a variety of non-rigid
deformation models, with B-splines and PCA models as examples. GPMMs separate
problem-specific requirements from the registration algorithm by incorporating
domain-specific adaptations as a prior model. The novelties of this paper are the
following: (i) We present a strategy and modeling technique for face
registration that considers symmetry, multi-scale and spatially-varying
details. The registration is applied to neutral faces and facial expressions.
(ii) We release an open-source software framework for registration and
model-building, demonstrated on the publicly available BU3D-FE database. The
released pipeline also contains an implementation of an Analysis-by-Synthesis
model adaptation of 2D face images, tested on the Multi-PIE and LFW databases.
This enables the community to reproduce, evaluate, and compare the individual
steps, from registration to model-building and 3D/2D model fitting. (iii) Along
with the framework release, we publish a new version of the Basel Face Model
(BFM-2017) with an improved age distribution and an additional facial
expression model.
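As an illustration of how the priors mentioned above can be composed, the following is a hedged numpy sketch: a multi-scale prior is a sum of squared-exponential kernels at different length scales, and a symmetry-encouraging prior adds a mirrored kernel term. The kernel choices, weights, and the mirror(...) helper are hypothetical stand-ins, not the pipeline's actual model.

```python
import numpy as np

def se_kernel(X, Y, sigma, scale):
    """Squared-exponential kernel at a single length scale."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return scale * np.exp(-d2 / sigma**2)

def multiscale_kernel(X, Y, scales=((0.5, 0.01), (0.1, 0.002))):
    """Sum of kernels: smooth global deformation plus fine local detail."""
    return sum(se_kernel(X, Y, s, a) for s, a in scales)

def mirror(X):
    """Reflect points across the x = 0 plane (the assumed symmetry plane)."""
    return X * np.array([-1.0, 1.0, 1.0])

def symmetric_kernel(X, Y):
    """Adding the mirrored term correlates a point with its mirror image,
    which encourages (but does not enforce) symmetric deformations."""
    return multiscale_kernel(X, Y) + multiscale_kernel(mirror(X), Y)

pts = np.random.default_rng(0).uniform(-1.0, 1.0, size=(100, 3))
K = symmetric_kernel(pts, pts)
print(K.shape)   # (100, 100) covariance of a scalar deformation component
```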
Error-Controlled Model Approximation for Gaussian Process Morphable Models
Gaussian Process Morphable Models (GPMMs) unify a variety of non-rigid deformation models for surface and image registration. Deformation models such as B-splines, radial basis functions, and PCA models are defined as a probability distribution using a Gaussian process. The method depends heavily on a low-rank approximation of the Gaussian process, which is required to obtain a parametric representation of the model. In this article, we propose the pivoted Cholesky decomposition for this task, which has the following advantages: (1) Compared to the current state of the art used in GPMMs, it provides a fully controllable approximation error. The algorithm greedily computes new basis functions until the user-defined approximation accuracy is reached. (2) Unlike the currently used approach, the method can be used in a black-box-like scenario, since it automatically chooses the number of basis functions for a given model and accuracy. (3) We propose the Newton basis as an alternative basis for GPMMs. The proposed basis does not require an SVD computation and can be refined iteratively. We show that the proposed basis functions achieve competitive registration results while providing the advantages mentioned above.
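A minimal sketch of the greedy, error-controlled idea described in the abstract (not the article's exact algorithm): pivoted Cholesky adds one basis vector per step, always pivoting on the largest remaining diagonal entry, and stops once the trace of the residual falls below a user-defined tolerance.

```python
import numpy as np

def pivoted_cholesky(K, tol=1e-8, max_rank=None):
    """Greedy low-rank factorization K ~= L @ L.T with controlled error.

    One column is added per step, pivoting on the largest remaining
    diagonal entry, until the trace of the residual drops below tol."""
    n = K.shape[0]
    max_rank = max_rank or n
    d = np.diag(K).astype(float).copy()            # residual diagonal
    L = np.zeros((n, max_rank))
    rank, pivots = 0, []
    while rank < max_rank and d.sum() > tol:
        i = int(np.argmax(d))
        pivots.append(i)
        L[:, rank] = (K[:, i] - L[:, :rank] @ L[i, :rank]) / np.sqrt(d[i])
        d = np.maximum(d - L[:, rank] ** 2, 0.0)   # guard against round-off
        rank += 1
    return L[:, :rank], pivots

# The rank needed for a given accuracy is found automatically, black-box style.
x = np.linspace(0.0, 1.0, 200)[:, None]
K = np.exp(-((x - x.T) ** 2) / 0.1**2)
L, piv = pivoted_cholesky(K, tol=1e-8)
print(L.shape[1], np.abs(K - L @ L.T).max())
```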
Fitting 3D Morphable Models using Local Features
In this paper, we propose a novel fitting method that uses local image
features to fit a 3D Morphable Model to 2D images. To overcome the obstacle of
optimising a cost function that contains a non-differentiable feature
extraction operator, we use a learning-based cascaded regression method that
learns the gradient direction from data. The method solves for shape and pose
parameters simultaneously. Our method is thoroughly evaluated on
Morphable Model generated data and first results on real data are presented.
Compared to traditional fitting methods, which use simple raw features like
pixel colour or edge maps, local features have been shown to be much more
robust against variations in imaging conditions. Our approach is unique in that
we are the first to use local features to fit a Morphable Model.
Because of its speed, our method is applicable to real-time applications. Our
cascaded regression framework is available as an open-source library
(https://github.com/patrikhuber).
Comment: Submitted to ICIP 2015; 4 pages, 4 figures
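The cascaded-regression idea from this abstract can be sketched on synthetic data: at each stage a ridge regressor is trained to map features extracted at the current parameter estimate to a parameter update, so the gradient direction is learned rather than computed analytically. The features(...) function below is a hypothetical stand-in for local descriptors such as SIFT or HOG sampled around the model's projected points.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(theta):
    """Hypothetical stand-in for local descriptors (e.g. SIFT/HOG) sampled
    around the model's projected points at parameters theta."""
    return np.concatenate([np.sin(3.0 * theta), np.cos(2.0 * theta)])

# Synthetic training set: ground-truth parameters and perturbed initializations.
theta_true = rng.standard_normal((200, 6))
theta_hat = theta_true + 0.5 * rng.standard_normal((200, 6))

cascade = []
for stage in range(3):                                # a short cascade
    F = np.stack([features(t) for t in theta_hat])
    F = np.hstack([F, np.ones((len(F), 1))])          # bias column
    Y = theta_true - theta_hat                        # desired parameter updates
    # Ridge regression learns the average descent direction from data,
    # sidestepping the non-differentiable feature extractor.
    R = np.linalg.solve(F.T @ F + 1e-3 * np.eye(F.shape[1]), F.T @ Y)
    cascade.append(R)
    theta_hat = theta_hat + F @ R                     # apply the learned update
    print(stage, np.abs(theta_true - theta_hat).mean())   # error shrinks
```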
Combining and Steganography of 3D Face Textures
One of the serious issues in communication between people is hiding
information from others, and the best way to do this is to deceive them. Since
face images are nowadays mostly used in three-dimensional format, in this
paper we apply steganography to 3D face images, making detection by curious
observers impossible. Because only the texture matters for detecting a face,
we separate the texture from the shape matrices; to eliminate half of the
extra information, steganography is applied only to the face texture, and any
other shape can be used to reconstruct the 3D face. Moreover, we show how two
3D faces can be combined by using two textures. To describe the complete
process, 2D faces are first used as input for building 3D faces, and the 3D
textures are then hidden within other images.
Comment: 6 pages, 10 figures, 16 equations, 5 sections
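The abstract does not spell out the embedding scheme; as a generic illustration of hiding one image (a texture) inside another, here is a least-significant-bit sketch in numpy. The payload size, cover image, and bit layout are all assumptions, not the paper's method.

```python
import numpy as np

def embed_lsb(cover, bits):
    """Hide a bit stream in the least-significant bits of a cover image."""
    flat = cover.flatten()
    assert bits.size <= flat.size, "cover image too small for the payload"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read back the first n_bits hidden bits."""
    return stego.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)   # carrier image
texture = rng.integers(0, 256, size=4096, dtype=np.uint8)       # toy face texture
bits = np.unpackbits(texture)

stego = embed_lsb(cover, bits)
recovered = np.packbits(extract_lsb(stego, bits.size))
assert np.array_equal(recovered, texture)   # texture survives the round-trip
print("stego image differs from the cover by at most 1 per pixel")
```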
Using 3D morphable models for face recognition in video
The 3D Morphable Face Model (3DMM) has been used for over a decade to create 3D models from single images of faces. This model is based on a PCA model of the 3D shape and texture, generated from a limited number of 3D scans. The goal of fitting a 3DMM to an image is to find the model coefficients, the lighting, and the other imaging variables from which we can remodel that image as accurately as possible. The model coefficients consist of texture and shape descriptors, and can be used without further processing in verification and recognition experiments. Until now, little research has been performed into the influence of the diverse parameters of the 3DMM on recognition performance. In this paper we introduce a Bayesian-based method for texture backmapping from multiple images. Using the information from multiple (non-frontal) views, we construct a frontal view which can be used as input to 2D face recognition software. We also show how the number of triangles used in the fitting process influences the recognition performance using the shape descriptors. The verification results of the 3DMM are compared to state-of-the-art 2D face recognition software on the MultiPIE dataset. The 2D FR software outperforms the Morphable Model, but the Morphable Model is useful as a preprocessor: it can synthesize a frontal view from a non-frontal view and combine images with multiple views into a single frontal view. We show results for this preprocessing technique using an average face shape and a fitted face shape, with a MM texture, with the original texture, and with a hybrid texture. The preprocessor improved the verification results significantly on the dataset.
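One plausible reading of the multi-view fusion step is a per-vertex precision-weighted average: each view contributes a texture observation with a confidence (e.g., how frontally the vertex is seen), and a Gaussian prior plays the role of the model's mean texture. The sketch below illustrates that idea only; the paper's actual backmapping formulation may differ, and all names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_vertices = 3, 5000

# Hypothetical per-view texture observations and confidences (a confidence
# might be derived from how frontally each vertex is seen in that view).
obs = rng.uniform(0.0, 1.0, size=(n_views, n_vertices))
conf = rng.uniform(0.1, 1.0, size=(n_views, n_vertices))

# Gaussian prior per vertex, playing the role of the model's mean texture.
prior_mean, prior_prec = 0.5, 1.0

# Treating confidences as observation precisions, the posterior mean is a
# precision-weighted average of the prior and the observations.
post_prec = prior_prec + conf.sum(axis=0)
fused = (prior_prec * prior_mean + (conf * obs).sum(axis=0)) / post_prec
print(fused.shape)   # one fused texture value per vertex
```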
Fitting a 3D Morphable Model to Edges: A Comparison Between Hard and Soft Correspondences
We propose a fully automatic method for fitting a 3D morphable model to
single face images in arbitrary pose and lighting. Our approach relies on
geometric features (edges and landmarks) and, inspired by the iterated closest
point algorithm, is based on computing hard correspondences between model
vertices and edge pixels. We demonstrate that this is superior to previous work
that uses soft correspondences to form an edge-derived cost surface that is
minimised by nonlinear optimisation.
Comment: To appear in ACCV 2016 Workshop on Facial Informatics
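A toy sketch of the hard-correspondence loop described above: each projected model contour point is assigned to its single nearest edge pixel, and pose is then re-estimated in closed form. A 2D similarity (Procrustes) update stands in for the full 3DMM shape-and-pose solve, so this is an illustration of the idea rather than the paper's method.

```python
import numpy as np

def hard_correspondences(model_pts, edge_pts):
    """ICP-style hard assignment: each model contour point is matched to
    its single nearest image edge pixel."""
    d2 = ((model_pts[:, None, :] - edge_pts[None, :, :]) ** 2).sum(-1)
    return edge_pts[d2.argmin(axis=1)]

def icp_similarity(src, dst, iters=20):
    """Alternate nearest-edge matching with a closed-form 2D similarity
    (Procrustes) update, standing in for the full shape-and-pose solve."""
    for _ in range(iters):
        tgt = hard_correspondences(src, dst)
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        S, T = src - mu_s, tgt - mu_t
        U, sig, Vt = np.linalg.svd(S.T @ T)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # keep a proper rotation
            Vt[-1] *= -1
            R = Vt.T @ U.T
        s = sig.sum() / (S ** 2).sum()         # least-squares scale
        src = s * S @ R.T + mu_t
    return src

rng = np.random.default_rng(0)
edges = rng.uniform(0.0, 1.0, size=(300, 2))       # detected edge pixels
theta = 0.3
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
contour = 0.8 * edges[:100] @ Rot.T + 0.2          # displaced model contour
aligned = icp_similarity(contour, edges)
print(np.abs(aligned - edges[:100]).mean())        # residual after fitting
```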
Generating 3D faces using Convolutional Mesh Autoencoders
Learned 3D representations of human faces are useful for computer vision
problems such as 3D face tracking and reconstruction from images, as well as
graphics applications such as character generation and animation. Traditional
models learn a latent representation of a face using linear subspaces or
higher-order tensor generalizations. Due to this linearity, they cannot
capture extreme deformations and non-linear expressions. To address this, we
introduce a versatile model that learns a non-linear representation of a face
using spectral convolutions on a mesh surface. We introduce mesh sampling
operations that enable a hierarchical mesh representation that captures
non-linear variations in shape and expression at multiple scales within the
model. In a variational setting, our model samples diverse realistic 3D faces
from a multivariate Gaussian distribution. Our training data consists of 20,466
meshes of extreme expressions captured over 12 different subjects. Despite
limited training data, our trained model outperforms state-of-the-art face
models with 50% lower reconstruction error, while using 75% fewer parameters.
We also show that replacing the expression space of an existing
state-of-the-art face model with our autoencoder achieves a lower
reconstruction error. Our data, model and code are available at
http://github.com/anuragranj/com
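The building block behind such models is commonly a Chebyshev spectral graph convolution, which applies polynomial filters of the graph Laplacian to per-vertex features. Below is a hedged numpy forward-pass sketch on a random toy graph, assuming the usual normalized-Laplacian rescaling; the released model additionally learns the filter weights and the hierarchical mesh sampling.

```python
import numpy as np

def scaled_laplacian(A):
    """Symmetrically normalized graph Laplacian, rescaled to 2L/lmax - I
    (using the bound lmax <= 2) as Chebyshev filters expect."""
    d = A.sum(1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - Dinv @ A @ Dinv
    return L - np.eye(len(A))                  # 2L/2 - I

def cheb_conv(X, L_hat, W):
    """Chebyshev spectral convolution: sum_k T_k(L_hat) @ X @ W[k]."""
    K = W.shape[0]
    Tx = [X, L_hat @ X]
    for _ in range(2, K):
        Tx.append(2.0 * L_hat @ Tx[-1] - Tx[-2])   # Chebyshev recurrence
    return sum(Tx[k] @ W[k] for k in range(K))

rng = np.random.default_rng(0)
n = 100                                        # toy "mesh" vertices
A = np.triu((rng.uniform(size=(n, n)) < 0.05).astype(float), 1)
A = A + A.T                                    # symmetric adjacency
X = rng.standard_normal((n, 3))                # vertex coordinates as features
W = 0.1 * rng.standard_normal((6, 3, 16))      # order-6 filters, 3 -> 16 channels
print(cheb_conv(X, scaled_laplacian(A), W).shape)   # (100, 16)
```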