Posterior shape models
We present a method to compute the conditional distribution of a statistical shape model given partial data. The result is a "posterior shape model", which is again a statistical shape model of the same form as the original model. This allows its direct use in the variety of algorithms that incorporate prior knowledge about the variability of a class of shapes through a statistical shape model. Posterior shape models thus provide a statistically sound yet easy way to integrate partial data into these algorithms. Usually, shape models represent a complete organ, for instance in our experiments the femur bone, modeled by a multivariate normal distribution. But because in many applications certain parts of the shape are known a priori, it is of great interest to model the posterior distribution of the whole shape given the known parts. These could be isolated landmark points or larger portions of the shape, such as the healthy part of a pathological or damaged organ. However, because for most shape models the dimensionality of the data is much higher than the number of examples, the normal distribution is singular and the conditional distribution is not readily available. In this paper, we present two main contributions: First, we show how the posterior model can be efficiently computed as a statistical shape model in standard form and used in any shape model algorithm; we complement this paper with a freely available implementation of our algorithms. Second, we show that the most common approaches put forth in the literature to overcome this singularity are equivalent to probabilistic principal component analysis (PPCA) and Gaussian process regression. To illustrate the use of posterior shape models, we apply them to two problems from medical image analysis: model-based image segmentation incorporating prior knowledge from landmarks, and the prediction of anatomically correct knee shapes for trochlear dysplasia patients, which constitutes a novel medical application.
Our experiments confirm that the use of conditional shape models for image segmentation improves the overall segmentation accuracy and robustness.
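The core computation behind such a posterior shape model — conditioning a low-rank Gaussian on a few observed points — can be sketched as follows. This is a toy illustration, not the authors' released implementation; the parameterization x = μ + Qa with a ~ N(0, I), the noise level σ², and all variable names are assumptions for the sketch.

```python
import numpy as np

def posterior_shape_model(mu, Q, obs_idx, y, sigma2=1e-4):
    """Condition a low-rank Gaussian shape model x = mu + Q a, a ~ N(0, I),
    on noisy observations y of the entries obs_idx (y = x[obs_idx] + noise).

    Returns the posterior mean and covariance of the coefficients a; the
    posterior shape model is then mu + Q a with a from this distribution."""
    Qo = Q[obs_idx]                       # basis rows at the observed points
    r = Q.shape[1]
    A = Qo.T @ Qo / sigma2 + np.eye(r)    # posterior precision of a
    cov_a = np.linalg.inv(A)
    mean_a = cov_a @ Qo.T @ (y - mu[obs_idx]) / sigma2
    return mean_a, cov_a

# Toy example: 1-D "shapes" with two modes of variation.
rng = np.random.default_rng(0)
n, r = 50, 2
mu = np.zeros(n)
Q = rng.standard_normal((n, r))
a_true = np.array([1.0, -0.5])
x_true = mu + Q @ a_true
obs_idx = np.arange(0, n, 5)              # every 5th point is "known"
mean_a, cov_a = posterior_shape_model(mu, Q, obs_idx, x_true[obs_idx])
x_post = mu + Q @ mean_a                  # posterior mean shape
```

Because the posterior over the coefficients is again Gaussian, the result is a shape model of the same form as the original, which is exactly what lets it be plugged back into any shape-model algorithm.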
Asymptotic Properties of Approximate Bayesian Computation
Approximate Bayesian computation allows for statistical analysis in models
with intractable likelihoods. In this paper we consider the asymptotic
behaviour of the posterior distribution obtained by this method. We give
general results on the rate at which the posterior distribution concentrates on
sets containing the true parameter, its limiting shape, and the asymptotic
distribution of the posterior mean. These results hold under given rates for
the tolerance used within the method, mild regularity conditions on the summary
statistics, and a condition linked to identification of the true parameters.
Implications for practitioners are discussed. Comment: this 31-page paper is a revised version, including supplementary material.
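A minimal rejection-ABC sampler illustrates the tolerance that the asymptotic results above are stated in terms of: parameter draws are kept when their simulated summary statistic falls within a tolerance of the observed one. This is a toy Gaussian-mean example; all names and the specific model are illustrative, not from the paper.

```python
import numpy as np

def abc_rejection(data, prior_sample, simulate, summary, eps, n_draws=10000):
    """Rejection ABC: keep parameter draws whose simulated summary
    statistic lies within tolerance eps of the observed summary."""
    s_obs = summary(data)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(summary(simulate(theta)) - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy model: data ~ N(theta, 1), summary = sample mean, prior theta ~ N(0, 10).
rng = np.random.default_rng(1)
true_theta = 2.0
data = rng.normal(true_theta, 1.0, size=100)
post = abc_rejection(
    data,
    prior_sample=lambda: rng.normal(0.0, 10.0),
    simulate=lambda th: rng.normal(th, 1.0, size=100),
    summary=np.mean,
    eps=0.1,
)
```

Shrinking `eps` (at a suitable rate as the sample size grows) is what drives the posterior concentration studied in the paper, at the cost of a lower acceptance rate.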
Statistical Model of Shape Moments with Active Contour Evolution for Shape Detection and Segmentation
This paper describes a novel method for shape representation and robust image segmentation. The proposed method combines two well-known methodologies, namely, statistical shape models and active contours implemented in a level set framework. The shape detection is achieved by maximizing a posterior function that consists of a prior shape probability model and an image likelihood function conditioned on shapes. The statistical shape model is built as a result of a learning process based on nonparametric probability estimation in a PCA-reduced feature space formed by the Legendre moments of training silhouette images. A greedy strategy is applied to optimize the proposed cost function by iteratively evolving an implicit active contour in the image space and subsequently performing constrained optimization of the evolved shape in the reduced shape feature space. Experimental results presented in the paper demonstrate that the proposed method, contrary to many other active contour segmentation methods, is highly resilient to severe random and structural noise that could be present in the data.
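The shape feature space described above — Legendre moments of silhouette images, reduced by PCA — can be sketched roughly as follows. This is a generic illustration under assumed conventions (pixel coordinates mapped to [-1, 1], random stand-in silhouettes, an assumed moment order and feature dimension), not the paper's implementation.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moments(img, order):
    """Legendre moments up to `order` of a 2-D silhouette image,
    with pixel coordinates mapped onto [-1, 1] in both axes."""
    h, w = img.shape
    ys = np.linspace(-1, 1, h)
    xs = np.linspace(-1, 1, w)
    # Rows m of Py/Px hold the Legendre polynomial P_m sampled on the grid.
    Py = np.stack([legval(ys, np.eye(order + 1)[m]) for m in range(order + 1)])
    Px = np.stack([legval(xs, np.eye(order + 1)[n]) for n in range(order + 1)])
    norm = np.outer(2 * np.arange(order + 1) + 1,
                    2 * np.arange(order + 1) + 1) / 4.0
    # lambda_{mn} ~ norm_{mn} * sum_ij P_m(y_i) f(i, j) P_n(x_j) * pixel area
    return norm * (Py @ img @ Px.T) * (2.0 / h) * (2.0 / w)

# Moment vectors of a training set of silhouettes, then PCA via the SVD.
rng = np.random.default_rng(2)
train = rng.integers(0, 2, size=(20, 32, 32)).astype(float)  # stand-in silhouettes
M = np.stack([legendre_moments(s, order=4).ravel() for s in train])
M_centered = M - M.mean(axis=0)
U, S, Vt = np.linalg.svd(M_centered, full_matrices=False)
features = M_centered @ Vt[:5].T      # 5-D reduced shape feature space
```

The constrained optimization step in the paper then searches in this low-dimensional `features` space rather than over raw contours.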
Natural (non-)informative priors for skew-symmetric distributions
In this paper, we present an innovative method for constructing proper priors
for the skewness (shape) parameter in the skew-symmetric family of
distributions. The proposed method is based on assigning a prior distribution
on the perturbation effect of the shape parameter, which is quantified in terms
of the Total Variation distance. We discuss strategies to translate prior
beliefs about the asymmetry of the data into an informative prior distribution
of this class. We show via a Monte Carlo simulation study that our
noninformative priors induce posterior distributions with good frequentist
properties, similar to those of the Jeffreys prior. Our informative priors
yield better results than their competitors from the literature. We also
propose a scale- and location-invariant prior structure for models with unknown
location and scale parameters and provide sufficient conditions for the
propriety of the corresponding posterior distribution. Illustrative examples
are presented using simulated and real data. Comment: 30 pages, 3 figures.
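Posterior inference for such a skewness parameter can be sketched with a random-walk Metropolis sampler on the skew-normal shape parameter. Note the hedge: this uses a generic weakly informative normal prior, not the paper's Total Variation-based prior construction, and the data-generating recipe and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def skew_normal_loglik(x, lam):
    """Log-likelihood of the skew-normal density f(x) = 2 phi(x) Phi(lam x)."""
    return np.sum(np.log(2) + norm.logpdf(x) + norm.logcdf(lam * x))

def metropolis_skewness(x, log_prior, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis for the shape (skewness) parameter lam."""
    rng = np.random.default_rng(seed)
    lam, chain = 0.0, []
    logp = skew_normal_loglik(x, lam) + log_prior(lam)
    for _ in range(n_iter):
        prop = lam + step * rng.standard_normal()
        logp_prop = skew_normal_loglik(x, prop) + log_prior(prop)
        if np.log(rng.uniform()) < logp_prop - logp:
            lam, logp = prop, logp_prop
        chain.append(lam)
    return np.array(chain)

# Data from a right-skewed skew-normal (lam = 3), via the standard
# representation X = delta*|Z| + sqrt(1 - delta^2)*U.
rng = np.random.default_rng(3)
z, u = rng.standard_normal(500), rng.standard_normal(500)
lam_true = 3.0
delta = lam_true / np.sqrt(1 + lam_true**2)
x = delta * np.abs(z) + np.sqrt(1 - delta**2) * u
chain = metropolis_skewness(x, log_prior=lambda l: norm.logpdf(l, 0.0, 5.0))
```

Swapping `log_prior` for a prior induced on the perturbation effect (as the paper proposes) leaves the sampler unchanged; only that one function changes.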
The Bayesian Decision Tree Technique with a Sweeping Strategy
The uncertainty of classification outcomes is of crucial importance for many
safety critical applications including, for example, medical diagnostics. In
such applications the uncertainty of classification can be reliably estimated
within a Bayesian model averaging technique that allows the use of prior
information. Decision Tree (DT) classification models used within such a
technique give experts additional information by making the classification
scheme observable. The use of the Markov Chain Monte Carlo (MCMC) methodology
of stochastic sampling makes the Bayesian DT technique feasible to perform.
However, in practice, the MCMC technique may become stuck in a particular DT
which is far away from a region with a maximal posterior. Sampling such DTs
causes bias in the posterior estimates, and as a result the evaluation of
classification uncertainty may be incorrect. In a particular case, the negative
effect of such sampling may be reduced by giving additional prior information
on the shape of DTs. In this paper we describe a new approach based on sweeping
the DTs without additional priors on the preferred shape of DTs. The
performance of Bayesian DT techniques with the standard and sweeping
strategies is compared on synthetic data as well as on real datasets.
Quantitatively evaluating the uncertainty in terms of the entropy of class
posterior probabilities, we found that the sweeping strategy is superior to the
standard strategy.
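The uncertainty measure used in the evaluation above — the entropy of model-averaged class posterior probabilities — can be sketched as follows. The ensemble probabilities here are hard-coded stand-ins for the class distributions that MCMC-sampled decision trees would produce.

```python
import numpy as np

def predictive_entropy(tree_probs):
    """Entropy (in bits) of the model-averaged class posterior.

    tree_probs: (n_trees, n_classes) array of class probabilities, one row
    per decision tree sampled by MCMC, all for the same test point."""
    p = tree_probs.mean(axis=0)        # Bayesian model average over trees
    p = np.clip(p, 1e-12, 1.0)         # guard against log(0)
    return -np.sum(p * np.log2(p))

# A confident ensemble vs. a conflicted one (3 classes).
confident = np.array([[0.90, 0.05, 0.05],
                      [0.85, 0.10, 0.05]])
conflicted = np.array([[0.90, 0.05, 0.05],
                       [0.05, 0.90, 0.05]])
h_low = predictive_entropy(confident)    # trees agree: low entropy
h_high = predictive_entropy(conflicted)  # trees disagree: high entropy
```

A biased sampler stuck in one region of tree space would average over too-similar trees and understate this entropy, which is the failure mode the sweeping strategy targets.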
Methods for inference in large multiple-equation Markov-switching models
The inference for hidden Markov chain models in which the structure is a multiple-equation macroeconomic model raises a number of difficulties that are not as likely to appear in smaller models. One is likely to want to allow for many states in the Markov chain without allowing the number of free parameters in the transition matrix to grow as the square of the number of states but also without losing a convenient form for the posterior distribution of the transition matrix. Calculation of marginal data densities for assessing model fit is often difficult in high-dimensional models and seems particularly difficult in these models. This paper gives a detailed explanation of methods we have found to work to overcome these difficulties. It also makes suggestions for maximizing posterior density and initiating Markov chain Monte Carlo simulations that provide some robustness against the complex shape of the likelihood in these models. These difficulties and remedies are likely to be useful generally for Bayesian inference in large time-series models. The paper includes some discussion of model specification issues that apply particularly to structural vector autoregressions with a Markov-switching structure.
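The "convenient form for the posterior distribution of the transition matrix" referenced above is, in the unrestricted case, the standard row-wise Dirichlet conjugate update, sketched below. This is background illustration under assumed notation, not the paper's restricted parameterization for large state spaces.

```python
import numpy as np

def transition_posterior(states, n_states, alpha=1.0):
    """Row-wise Dirichlet posterior for a Markov transition matrix.

    With independent Dirichlet(alpha) priors on each row, the posterior for
    row i given an observed state path is Dirichlet(alpha + counts[i]),
    where counts[i, j] is the number of observed i -> j transitions."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    post_alpha = alpha + counts
    post_mean = post_alpha / post_alpha.sum(axis=1, keepdims=True)
    return post_alpha, post_mean

# A short illustrative state path over 3 regimes.
states = [0, 0, 1, 2, 2, 2, 1, 0, 0, 1]
post_alpha, post_mean = transition_posterior(states, n_states=3)
```

The difficulty the paper addresses is keeping a form this convenient while restricting the transition matrix so that its free-parameter count does not grow as the square of the number of states.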
Informed MCMC with Bayesian Neural Networks for Facial Image Analysis
Computer vision tasks are difficult because of the large variability in the
data that is induced by changes in light, background, partial occlusion as well
as the varying pose, texture, and shape of objects. Generative approaches to
computer vision allow us to overcome this difficulty by explicitly modeling the
physical image formation process. Using generative object models, the analysis
of an observed image is performed via Bayesian inference of the posterior
distribution. This conceptually simple approach tends to fail in practice
because of several difficulties stemming from sampling the posterior
distribution: high-dimensionality and multi-modality of the posterior
distribution as well as expensive simulation of the rendering process. The main
difficulty of sampling approaches in a computer vision context is choosing the
proposal distribution accurately so that maxima of the posterior are explored
early and the algorithm quickly converges to a valid image interpretation. In
this work, we propose to use a Bayesian Neural Network for estimating an image
dependent proposal distribution. Compared to a standard Gaussian random walk
proposal, this accelerates the sampler in finding regions of the posterior with
high value. In this way, we can significantly reduce the number of samples
needed to perform facial image analysis. Comment: Accepted to the Bayesian Deep Learning Workshop at NeurIPS 201
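The sampler structure described above — Metropolis-Hastings with an image-dependent proposal instead of a plain random walk — can be sketched generically. Here the Bayesian Neural Network is replaced by a hard-coded stand-in point estimate, and the one-dimensional "pose" target is purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def metropolis_hastings(log_post, propose, prop_logpdf, x0, n_iter=3000, seed=0):
    """Metropolis-Hastings with an arbitrary (possibly informed)
    independence proposal; prop_logpdf must match propose."""
    rng = np.random.default_rng(seed)
    x, chain = x0, []
    for _ in range(n_iter):
        y = propose(rng)
        # Acceptance ratio corrects for the asymmetric proposal q(.)
        log_a = (log_post(y) - log_post(x)) + (prop_logpdf(x) - prop_logpdf(y))
        if np.log(rng.uniform()) < log_a:
            x = y
        chain.append(x)
    return np.array(chain)

# Target: posterior of a latent "pose" parameter, here N(2, 0.5^2).
log_post = lambda x: norm.logpdf(x, 2.0, 0.5)

# Stand-in for the image-dependent BNN proposal: a Gaussian centered near
# the network's (hypothetical) point estimate of 1.8.
informed = metropolis_hastings(
    log_post,
    propose=lambda rng: rng.normal(1.8, 1.0),
    prop_logpdf=lambda x: norm.logpdf(x, 1.8, 1.0),
    x0=0.0,
)
```

Because the informed proposal already places mass near the posterior mode, the chain reaches high-value regions in far fewer iterations than a random walk started at the same point, which is the speedup the paper reports for facial image analysis.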
Probabilistic Intra-Retinal Layer Segmentation in 3-D OCT Images Using Global Shape Regularization
With the introduction of spectral-domain optical coherence tomography (OCT),
resulting in a significant increase in acquisition speed, the fast and accurate
segmentation of 3-D OCT scans has become evermore important. This paper
presents a novel probabilistic approach that models the appearance of retinal
layers as well as the global shape variations of layer boundaries. Given an OCT
scan, the full posterior distribution over segmentations is approximately
inferred using a variational method enabling efficient probabilistic inference
in terms of computationally tractable model components: Segmenting a full 3-D
volume takes around a minute. Accurate segmentations demonstrate the benefit of
using global shape regularization: We segmented 35 fovea-centered 3-D volumes
with an average unsigned error of 2.46 ± 0.22 μm, as well as 80 normal
and 66 glaucomatous 2-D circular scans with errors of 2.92 ± 0.53 μm
and 4.09 ± 0.98 μm respectively. Furthermore, we utilized the inferred
posterior distribution to rate the quality of the segmentation, point out
potentially erroneous regions and discriminate normal from pathological scans.
No pre- or postprocessing was required and we used the same set of parameters
for all data sets, underlining the robustness and out-of-the-box nature of our
approach. Comment: Accepted for publication in Medical Image Analysis (MIA), Elsevier.