Accuracy of MAP segmentation with hidden Potts and Markov mesh prior models via Path Constrained Viterbi Training, Iterated Conditional Modes and Graph Cut based algorithms
In this paper, we study the statistical classification accuracy of two
different Markov field models for pixelwise image segmentation, considering the
labels of the image as hidden states and estimating those labels as the
solution of the MAP equation. The emission distribution is assumed to be the
same in all models, and the difference lies in the Markovian prior hypothesis
made over the labeling random field. The a priori labeling knowledge will be
modeled with a) a second order anisotropic Markov Mesh and b) a classical
isotropic Potts model. Under such models, we consider three different
segmentation procedures: 2D Path Constrained Viterbi training for the Hidden
Markov Mesh, a Graph Cut based segmentation for the first order isotropic Potts
model, and ICM (Iterated Conditional Modes) for the second order isotropic
Potts model.
We provide a unified view of all three methods and investigate their goodness
of fit for classification, studying the influence of parameter estimation,
computational gain, and extent of automation on the statistical measures
Overall Accuracy, Relative Improvement, and Kappa coefficient, allowing a
robust and accurate statistical analysis of synthetic and real-life
experimental data from the field of Dental Diagnostic Radiography. All
algorithms, using the learned parameters, generate good segmentations with
little interaction
the learned parameters, generate good segmentations with little interaction
when the images have a clear multimodal histogram. Suboptimal learning proves
to be frail in the case of non-distinctive modes, which limits the complexity
of usable models, and hence the achievable error rate as well.
All Matlab code written is provided in a toolbox available for download from
our website, following the Reproducible Research paradigm.
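The ICM procedure named above admits a compact sketch. The following Python fragment (the paper's toolbox is in Matlab) greedily minimises a Gaussian data term plus a first-order isotropic Potts penalty; the class means, noise level, and smoothing weight `beta` are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def icm_potts(obs, means, sigma, beta, n_iter=10):
    """Iterated Conditional Modes for MAP labeling under a first-order
    isotropic Potts prior with Gaussian emissions (illustrative sketch)."""
    # initialise with the maximum-likelihood (data-only) labels
    labels = np.argmin((obs[..., None] - np.asarray(means)) ** 2, axis=-1)
    H, W = obs.shape
    K = len(means)
    for _ in range(n_iter):
        changed = False
        for i in range(H):
            for j in range(W):
                best_k, best_e = labels[i, j], np.inf
                for k in range(K):
                    # data term: negative log Gaussian likelihood (up to consts)
                    e = (obs[i, j] - means[k]) ** 2 / (2 * sigma ** 2)
                    # prior term: Potts penalty for disagreeing neighbours
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                            e += beta
                    if e < best_e:
                        best_e, best_k = e, k
                if best_k != labels[i, j]:
                    labels[i, j] = best_k
                    changed = True
        if not changed:  # converged: no pixel changed in a full sweep
            break
    return labels
```

Larger `beta` trades data fidelity for spatial smoothness, which is the knob the learned parameters effectively set.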
Hidden Gibbs random fields model selection using Block Likelihood Information Criterion
Performing model selection between Gibbs random fields is a very challenging
task. Indeed, due to the Markovian dependence structure, the normalizing
constant of the fields cannot be computed using standard analytical or
numerical methods. Furthermore, such unobserved fields cannot be integrated out
and the likelihood evaluation is a doubly intractable problem. This forms a
central issue in picking the model that best fits the observed data. We introduce a
new approximate version of the Bayesian Information Criterion. We partition the
lattice into contiguous rectangular blocks and approximate the probability
measure of the hidden Gibbs field by a product of Gibbs distributions
over the blocks. On that basis, we estimate the likelihood and derive the Block
Likelihood Information Criterion (BLIC) that answers model choice questions
such as the selection of the dependency structure or the number of latent
states. We study the performance of BLIC on those questions. In addition, we
present a comparison with ABC algorithms to show that the novel criterion
offers a better trade-off between time efficiency and reliability of results.
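The block approximation at the heart of BLIC can be sketched as follows: for tiny blocks the normalising constant is computable by brute-force enumeration, which is exactly what partitioning into contiguous rectangular blocks buys. The two-state field, block sizes, and BIC-style penalty below are assumptions of this illustration, and the field is treated as observed for simplicity (the paper handles the hidden case):

```python
import numpy as np
from itertools import product

def block_log_partition(h, w, beta):
    """Exact log normalising constant of a tiny 2-state Potts block,
    by enumeration (feasible only because the block is small)."""
    log_terms = []
    for config in product([0, 1], repeat=h * w):
        x = np.array(config).reshape(h, w)
        # count agreeing nearest-neighbour pairs inside the block
        agree = np.sum(x[:-1, :] == x[1:, :]) + np.sum(x[:, :-1] == x[:, 1:])
        log_terms.append(beta * agree)
    m = max(log_terms)  # log-sum-exp for numerical stability
    return m + np.log(np.sum(np.exp(np.array(log_terms) - m)))

def blic_score(labels, beta, bh, bw, n_params=1):
    """BLIC-style score: sum of exact per-block log-likelihoods over
    contiguous rectangular blocks, minus a BIC-style penalty.
    Assumed interface for illustration, not the authors' code."""
    H, W = labels.shape
    ll = 0.0
    for i in range(0, H, bh):
        for j in range(0, W, bw):
            blk = labels[i:i + bh, j:j + bw]
            agree = (np.sum(blk[:-1, :] == blk[1:, :])
                     + np.sum(blk[:, :-1] == blk[:, 1:]))
            ll += beta * agree - block_log_partition(*blk.shape, beta)
    return ll - 0.5 * n_params * np.log(labels.size)
```

Under a positive interaction `beta`, a smooth field scores higher than a checkerboard, which is the kind of comparison that drives model choice.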
Estimating the granularity coefficient of a Potts-Markov random field within an MCMC algorithm
This paper addresses the problem of estimating the Potts parameter B jointly
with the unknown parameters of a Bayesian model within a Markov chain Monte
Carlo (MCMC) algorithm. Standard MCMC methods cannot be applied to this problem
because performing inference on B requires computing the intractable
normalizing constant of the Potts model. In the proposed MCMC method the
estimation of B is conducted using a likelihood-free Metropolis-Hastings
algorithm. Experimental results obtained for synthetic data show that
estimating B jointly with the other unknown parameters leads to estimation
results that are as good as those obtained with the actual value of B. On the
other hand, assuming that the value of B is known can degrade estimation
performance significantly if this value is incorrect. To illustrate the
practical interest of this method, the proposed algorithm is successfully
applied to real two-dimensional SAR and three-dimensional ultrasound images.
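A likelihood-free Metropolis-Hastings update for B can be sketched with an exchange-algorithm-style acceptance ratio, in which the intractable normalising constants cancel against an auxiliary Potts field simulated at the proposed value. The uniform prior on [0, 2], the Gibbs auxiliary sampler, and the step size are assumptions of this sketch, not the paper's exact scheme:

```python
import numpy as np

def n_agreements(x):
    """Sufficient statistic of the Potts model: agreeing neighbour pairs."""
    return int(np.sum(x[:-1, :] == x[1:, :]) + np.sum(x[:, :-1] == x[:, 1:]))

def gibbs_potts(H, W, beta, K=2, sweeps=15, rng=None):
    """Approximate auxiliary draw from the Potts model by Gibbs sweeps."""
    rng = np.random.default_rng(rng)
    x = rng.integers(K, size=(H, W))
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                e = np.zeros(K)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        e[x[ni, nj]] += beta
                p = np.exp(e - e.max())
                x[i, j] = rng.choice(K, p=p / p.sum())
    return x

def sample_beta(x_obs, n_iter=100, step=0.3, rng=0):
    """Likelihood-free Metropolis-Hastings chain for the Potts parameter B:
    the intractable constants cancel in the exchange ratio. Illustrative
    sketch, not the paper's sampler."""
    rng = np.random.default_rng(rng)
    s_obs = n_agreements(x_obs)
    beta, out = 1.0, []
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal()
        if 0.0 <= prop <= 2.0:  # uniform prior support (assumed)
            aux = gibbs_potts(*x_obs.shape, prop, rng=rng)
            log_r = (prop - beta) * (s_obs - n_agreements(aux))
            if np.log(rng.uniform()) < log_r:
                beta = prop
        out.append(beta)
    return np.array(out)
```

No likelihood evaluation of the Potts model is ever needed, only the ability to simulate from it.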
A Hierarchical Bayesian Model for Frame Representation
In many signal processing problems, it may be fruitful to represent the
signal under study in a frame. If a probabilistic approach is adopted, it then
becomes necessary to estimate the hyper-parameters characterizing the
probability distribution of the frame coefficients. This problem is difficult
since in general the frame synthesis operator is not bijective. Consequently,
the frame coefficients are not directly observable. This paper introduces a
hierarchical Bayesian model for frame representation. The posterior
distribution of the frame coefficients and model hyper-parameters is derived.
Hybrid Markov Chain Monte Carlo algorithms are subsequently proposed to sample
from this posterior distribution. The generated samples are then exploited to
estimate the hyper-parameters and the frame coefficients of the target signal.
Validation experiments show that the proposed algorithms provide an accurate
estimation of the frame coefficients and hyper-parameters. Applications to
practical image denoising problems show the impact of the resulting Bayesian
estimation on the recovered signal quality.
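A minimal Gibbs sampler illustrates the hierarchical idea: with a Gaussian prior on the frame coefficients and an inverse-gamma hyperprior on their variance, both conditionals are available in closed form even though the synthesis operator F is not bijective. The conjugate priors and the known noise variance are simplifying assumptions of this sketch, not the paper's model:

```python
import numpy as np

def gibbs_frame(y, F, sigma2, n_iter=400, a0=1.0, b0=1.0, rng=0):
    """Hierarchical Gibbs sketch for y = F a + noise with an overcomplete
    frame F (n x m, m > n): a_i ~ N(0, tau), tau ~ inverse-gamma(a0, b0)."""
    rng = np.random.default_rng(rng)
    n, m = F.shape
    tau, keep = 1.0, []
    for t in range(n_iter):
        # a | y, tau : Gaussian with precision F'F/sigma2 + I/tau
        Q = F.T @ F / sigma2 + np.eye(m) / tau
        L = np.linalg.cholesky(Q)
        mu = np.linalg.solve(Q, F.T @ y / sigma2)
        a = mu + np.linalg.solve(L.T, rng.standard_normal(m))
        # tau | a : inverse-gamma(a0 + m/2, b0 + ||a||^2 / 2)
        tau = 1.0 / rng.gamma(a0 + m / 2, 1.0 / (b0 + a @ a / 2))
        if t >= n_iter // 2:        # discard burn-in
            keep.append(a)
    return np.mean(keep, axis=0), tau
```

The posterior mean of the coefficients and the sampled hyper-parameter come out of the same chain, mirroring the joint estimation the abstract describes.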
Variational Bayes for estimating the parameters of a hidden Potts model
Hidden Markov random field models provide an appealing representation of images and other spatial problems. The drawback is that inference is not straightforward for these models as the normalisation constant for the likelihood is generally intractable except for very small observation sets. Variational methods are an emerging tool for Bayesian inference and they have already been successfully applied in other contexts. Focusing on the particular case of a hidden Potts model with Gaussian noise, we show how variational Bayesian methods can be applied to hidden Markov random field inference. To tackle the obstacle of the intractable normalising constant for the likelihood, we explore alternative estimation approaches for incorporation into the variational Bayes algorithm. We consider a pseudo-likelihood approach as well as the more recent reduced dependence approximation of the normalisation constant. To illustrate the effectiveness of these approaches, we present empirical results from the analysis of simulated datasets. We also analyse a real dataset and compare results with those of previous analyses as well as those obtained from the recently developed auxiliary variable MCMC method and the recursive MCMC method. Our results show that the variational Bayesian analyses can be carried out much faster than the MCMC analyses and produce good estimates of model parameters. We also found that the reduced dependence approximation of the normalisation constant outperformed the pseudo-likelihood approximation in our analysis of real and synthetic datasets.
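The pseudo-likelihood approach mentioned above replaces the intractable joint likelihood with a product of full conditionals, which needs no global normalising constant. A grid-search sketch for a K-state Potts field (the grid range and the observed-field setting are assumptions of this illustration):

```python
import numpy as np

def neg_pseudo_loglik(labels, beta, K):
    """Negative log pseudo-likelihood of a Potts field: the product of
    per-site full conditionals, each normalised over only K states."""
    H, W = labels.shape
    nll = 0.0
    for i in range(H):
        for j in range(W):
            counts = np.zeros(K)  # neighbour label counts at site (i, j)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    counts[labels[ni, nj]] += 1
            e = beta * counts
            nll -= e[labels[i, j]] - np.log(np.sum(np.exp(e)))
    return nll

def fit_beta_pl(labels, K=2, grid=np.linspace(0.0, 2.0, 41)):
    """Pseudo-likelihood point estimate of beta by grid search: a sketch of
    the estimator the abstract plugs into the variational Bayes loop."""
    return min(grid, key=lambda b: neg_pseudo_loglik(labels, b, K))
```

A perfectly smooth field drives the estimate to the top of the grid, while a checkerboard drives it to zero, matching the usual behaviour of this estimator.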
Semi-supervised linear spectral unmixing using a hierarchical Bayesian model for hyperspectral imagery
This paper proposes a hierarchical Bayesian model that can be used for semi-supervised hyperspectral image unmixing. The model assumes that the pixel reflectances result from linear combinations of pure component spectra contaminated by additive Gaussian noise. The abundance parameters appearing in this model satisfy positivity and additivity constraints. These constraints are naturally expressed in a Bayesian context by using appropriate abundance prior distributions. The posterior distributions of the unknown model parameters are then derived. A Gibbs sampler allows one to draw samples distributed according to the posteriors of interest and to estimate the unknown abundances. An extension of the algorithm is finally studied for mixtures with unknown numbers of spectral components belonging to a known library. The performance of the different unmixing strategies is evaluated via simulations conducted on synthetic and real data.
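The linear mixing model and its positivity and additivity constraints can be made concrete with a small generative sketch: a Dirichlet draw satisfies both constraints by construction. The Dirichlet parameters, endmember matrix, and noise level here are illustrative assumptions, not the paper's priors:

```python
import numpy as np

def simulate_pixels(endmembers, n_pixels, sigma, rng=0):
    """Generative sketch of the linear mixing model: each pixel reflectance
    is a convex combination of endmember spectra plus Gaussian noise."""
    rng = np.random.default_rng(rng)
    E = np.asarray(endmembers)          # R endmembers x L spectral bands
    R, L = E.shape
    # Dirichlet abundances: nonnegative and summing to one by construction
    A = rng.dirichlet(np.ones(R), size=n_pixels)
    Y = A @ E + sigma * rng.standard_normal((n_pixels, L))
    return Y, A
```

The Gibbs sampler in the paper inverts this generative process, drawing abundances from posteriors that respect the same constraints.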