Statistical shape analysis for bio-structures: local shape modelling, techniques and applications
A Statistical Shape Model (SSM) is a statistical representation of a shape, learned from data in order to study variation across shapes. Work on shape modelling is constrained by many unsolved problems, for instance the difficulty of modelling local versus global variation. SSMs have been successfully applied in medical imaging applications such as the analysis of brain anatomy. Since brain structure is complex and varies across subjects, methods that identify morphological variability can be useful for diagnosis and treatment.
The main objective of this research is to develop a statistical shape model for analysing local variation in shapes. Within this context, the work addresses the question of which local elements need to be identified for effective shape analysis. The proposed method is based on a Point Distribution Model and combines several well-known techniques: fractal analysis, Markov Chain Monte Carlo methods, and the Curvature Scale Space representation for the problem of contour localisation. In addition, Diffusion Maps are employed as a spectral shape-clustering tool to identify sets of local partitions useful for shape analysis. Finally, a novel Hierarchical Shape Analysis method based on the Gaussian and Laplacian pyramids is presented and used for comparison with the proposed Local Shape Model.
Experimental results on a number of real contours, such as animal, leaf and brain white-matter outlines, demonstrate the effectiveness of the proposed model. They show that local shape models are efficient at modelling the statistical variation in shape of biological structures. In particular, this model provides an approach to the analysis of brain images and brain morphometrics. It can also be adapted to the problem of content-based image retrieval, where both global and local shape similarity need to be measured.
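The Point Distribution Model at the core of the approach can be illustrated with a minimal sketch: PCA over aligned landmark configurations yields a mean shape and principal modes of variation. This is a generic PDM illustration, not the thesis's actual implementation; the function names and the assumption that shapes are already Procrustes-aligned are mine.

```python
import numpy as np

def fit_pdm(shapes, n_modes=2):
    """Fit a Point Distribution Model: PCA over aligned landmark shapes.

    shapes: (n_samples, n_points * 2) array of landmarks flattened as
    (x1, y1, x2, y2, ...). Alignment (e.g. Procrustes) is assumed done.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # Eigen-decomposition of the sample covariance gives the modes of variation.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]       # largest variance first
    modes = eigvecs[:, order[:n_modes]]     # principal modes P
    variances = eigvals[order[:n_modes]]
    return mean, modes, variances

def synthesize(mean, modes, b):
    """Generate a plausible shape x = mean + P @ b from mode weights b."""
    return mean + modes @ b
```

Constraining each weight in `b` to a few standard deviations of its mode keeps generated shapes within the learned class of plausible variation.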
2D-to-3D facial expression transfer
Automatically changing the expression and physical features of a face in an input image is a problem that has traditionally been tackled in the 2D domain. In this paper, we bring the problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, first computes its 3D shape and then transfers a new, potentially unobserved expression onto it. For this purpose, we parameterize the rest shape, obtained from standard factorization approaches applied to the input video, using a triangular mesh which is further clustered into larger macro-segments. The expression-transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between the 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that differ markedly from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops.
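The basic mechanics of transferring an expression through a vertex correspondence can be sketched as applying per-vertex deformation deltas from the source blend shapes to the target rest shape. This is a simplified illustration under my own assumptions; the paper resolves the mapping per macro-segment with geometric-consistency constraints, whereas here the correspondence `corr` is simply assumed given.

```python
import numpy as np

def transfer_expression(target_neutral, source_neutral, source_expr, corr):
    """Hypothetical sketch of expression transfer via per-vertex deltas.

    target_neutral: (Nt, 3) rest shape recovered from the input video
    source_neutral, source_expr: (Ns, 3) neutral and expressive blend shapes
    corr: (Nt,) index array mapping each target vertex to its semantically
          equivalent source vertex (assumed precomputed here).
    """
    delta = source_expr - source_neutral   # source deformation field
    return target_neutral + delta[corr]    # apply deltas through the mapping
```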
Linear and Nonlinear Generative Probabilistic Class Models for Shape Contours
We introduce a robust probabilistic approach to modeling shape contours based on a low-dimensional, nonlinear latent variable model. In contrast to existing techniques that use objective functions in data space without explicit noise models, we are able to extract complex shape variation from noisy data. Most approaches to learning shape models slide observed data points around fixed contours and hence require a correctly labeled ‘reference shape’ to prevent degenerate solutions. In our method, unobserved curves are reparameterized to explain the fixed data points, so this problem does not arise. The proposed algorithms are suitable for use with arbitrary basis functions and are applicable to both open and closed shapes; their effectiveness is demonstrated through illustrative examples, quantitative assessment on benchmark data sets and a visualization task.
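The generative latent variable formulation with an explicit noise model can be illustrated, in its simplest linear-Gaussian form, as probabilistic PCA: contours are sampled as the mean plus a low-dimensional latent term plus observation noise. This is a minimal linear stand-in for intuition only; the paper's model is nonlinear and additionally reparameterizes the curves, which this sketch does not attempt.

```python
import numpy as np

def sample_contours(mu, W, sigma, n_samples, rng=None):
    """Sample contours from a linear-Gaussian latent variable model
    (probabilistic PCA): x = mu + W z + eps, z ~ N(0, I), eps ~ N(0, sigma^2 I).

    mu: (D,) mean contour (flattened point coordinates)
    W:  (D, q) loading matrix spanning the shape subspace
    sigma: standard deviation of the explicit observation-noise model
    """
    rng = np.random.default_rng(rng)
    D, q = W.shape
    z = rng.standard_normal((n_samples, q))            # latent shape coefficients
    eps = sigma * rng.standard_normal((n_samples, D))  # explicit noise term
    return z @ W.T + mu + eps
```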
Facial feature localization using highly flexible yet sufficiently strict shape models
Accurate and efficient localization of facial features is a crucial first step in many face-related computer vision tasks. These tasks include, but are not limited to, identity recognition, expression recognition, and head-pose estimation. Most effort in the field has been directed towards developing better ways of modeling prior appearance knowledge and image observations. Modeling prior shape knowledge, on the other hand, has not been explored as much. In this dissertation I primarily focus on the limitations of existing methods in terms of modeling prior shape knowledge. I first introduce a new pose-constrained shape model, which I describe as "highly flexible yet sufficiently strict". Existing pose-constrained shape models are either too strict, with questionable generalization power, or too loose, with questionable localization accuracy. My model tries to find a good middle ground by learning which shape constraints are more "informative" and should be kept, and which are less important and may be omitted. I build my pose-constrained facial feature localization approach on this new shape model using a probabilistic graphical model framework. Within this framework, the observed and unobserved variables are defined as the local image observations and the feature locations, respectively. Feature localization, or "probabilistic inference", is then achieved by nonparametric belief propagation. I show that this approach outperforms other popular pose-constrained methods through qualitative and quantitative experiments. Next, I extend my pose-constrained localization approach to the unconstrained setting using a multi-model strategy. While doing so, I once again identify and address the two key limitations of existing multi-model methods: 1) semantically and manually defining the models or "guiding" their generation, and 2) the lack of efficient and effective model selection strategies.
First, I introduce an approach based on unsupervised clustering, in which the models are automatically learned from training data. Then, I complement this approach with an efficient and effective model selection strategy based on a multi-class naive Bayesian classifier. This way, my method can have many more models, each with a higher level of expressive power, and consequently provides a more effective partitioning of the face image space. This approach is validated through extensive experiments and comparisons with state-of-the-art methods on state-of-the-art datasets. In the last part of this dissertation I discuss a particular application of the previously introduced techniques: facial feature localization in unconstrained videos. I improve the frame-by-frame localization results by estimating the actual head movement from a sequence of noisy head-pose estimates, and then using this information to detect and fix localization failures.
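The model-selection step can be illustrated with a minimal Gaussian naive Bayes classifier: each class corresponds to one learned localization model, and the classifier picks which model to apply from a feature vector. This is a generic from-scratch sketch, not the dissertation's actual classifier; the assumption that a per-face descriptor is available as the input feature is mine.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes for selecting among K shape models.

    Each class k corresponds to one learned model; given a feature vector
    (e.g. a coarse pose/appearance descriptor, assumed available), predict
    returns the index of the model most likely to fit the face.
    """
    def fit(self, X, y):
        self.classes = np.unique(y)
        # Per-class, per-feature mean and variance (small floor avoids
        # division by zero for constant features).
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.logprior = np.log(np.array([(y == c).mean() for c in self.classes]))
        return self

    def predict(self, X):
        # Log-likelihood under independent per-feature Gaussians, plus log prior.
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]
```

Because selection reduces to one argmax over per-class log-scores, adding more models keeps inference cheap, which is the property the multi-model strategy relies on.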