
    Unsupervised Bayesian convex deconvolution based on a field with an explicit partition function

    This paper proposes a non-Gaussian Markov field with a special feature: an explicit partition function. To the best of our knowledge, this is an original contribution. Moreover, the explicit expression of the partition function enables the development of an unsupervised edge-preserving convex deconvolution method. The method is fully Bayesian and produces an estimate in the sense of the posterior mean, numerically calculated by means of a Markov chain Monte Carlo (MCMC) technique. The approach is particularly effective, and the computational practicability of the method is demonstrated on a simple simulated example.
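    The posterior-mean-by-MCMC pipeline the abstract describes can be illustrated on a toy problem. A minimal sketch, assuming a 1-D moving-average blur, Gaussian noise, and a generic Huber-type convex prior standing in for the paper's explicit-partition-function field (all hyperparameters below are illustrative, not the authors' settings):

```python
import numpy as np

# Illustrative sketch only: posterior-mean deconvolution of a 1-D signal via
# random-walk Metropolis. The kernel, Huber-type convex prior, and all
# hyperparameters are assumptions, not the paper's field model.

rng = np.random.default_rng(0)

def huber(t, s=1.0):
    """Convex edge-preserving penalty: quadratic near 0, linear in the tails."""
    a = np.abs(t)
    return np.where(a <= s, 0.5 * t**2, s * a - 0.5 * s**2)

def log_post(x, y, h, noise_var=0.01, lam=5.0):
    """Unnormalised log-posterior: Gaussian likelihood + convex Markov prior."""
    r = y - np.convolve(x, h, mode="same")
    prior = -lam * huber(np.diff(x)).sum()      # penalise jumps between neighbours
    return -0.5 * r @ r / noise_var + prior

# Simulated example: blur a piecewise-constant signal and add noise.
n = 64
x_true = np.zeros(n); x_true[20:40] = 1.0
h = np.ones(5) / 5.0                            # simple moving-average blur
y = np.convolve(x_true, h, mode="same") + 0.1 * rng.standard_normal(n)

# Random-walk Metropolis; the posterior-mean estimate averages the samples.
x, lp = y.copy(), log_post(y, y, h)
samples = []
for it in range(20000):
    prop = x + 0.05 * rng.standard_normal(n)
    lp_prop = log_post(prop, y, h)
    if np.log(rng.random()) < lp_prop - lp:
        x, lp = prop, lp_prop
    if it >= 5000:                              # discard burn-in
        samples.append(x.copy())

x_pmean = np.mean(samples, axis=0)              # posterior-mean estimate
print("RMSE:", np.sqrt(np.mean((x_pmean - x_true) ** 2)))
```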

    Estimation of Viterbi path in Bayesian hidden Markov models

    The article studies different methods for estimating the Viterbi path in the Bayesian framework. The Viterbi path is an estimate of the underlying state path in hidden Markov models (HMMs), namely the path with maximal joint posterior probability; hence it is also called the maximum a posteriori (MAP) path. For an HMM with given parameters, the Viterbi path can easily be found with the Viterbi algorithm. In the Bayesian framework the Viterbi algorithm is not applicable, and several iterative methods can be used instead. We introduce a new EM-type algorithm for finding the MAP path and compare it with various other methods, including the variational Bayes approach and MCMC methods. Examples with simulated data are used to compare the performance of the methods. The main focus is on non-stochastic iterative methods, and our results show that the best of those methods work as well as or better than the best MCMC methods. Our results demonstrate that when the primary goal is segmentation, it is more reasonable to perform segmentation directly, treating the transition and emission parameters as nuisance parameters. Peer reviewed.
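    For reference, the non-Bayesian baseline the abstract contrasts with: when the HMM parameters are known, the MAP path is found by dynamic programming. A minimal log-space Viterbi sketch; the two-state transition and emission matrices are illustrative toy values, not taken from the article:

```python
import numpy as np

# Log-space Viterbi for an HMM with *known* parameters -- the easy case
# the abstract contrasts with the Bayesian setting.

def viterbi(obs, log_pi, log_A, log_B):
    """Return the state path maximising the joint probability P(path, obs)."""
    T, K = len(obs), len(log_pi)
    delta = np.empty((T, K))            # best log-score ending in each state
    psi = np.empty((T, K), dtype=int)   # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # (prev, next) transition scores
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):      # backtrack along the pointers
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy 2-state, 2-symbol example (illustrative values).
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
obs = np.array([0, 0, 1, 1, 1, 0])
print(viterbi(obs, np.log(pi), np.log(A), np.log(B)))
```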

    Approximate maximum likelihood estimation using data-cloning ABC

    A maximum likelihood methodology for a general class of models is presented, using an approximate Bayesian computation (ABC) approach. The typical targets of ABC methods are models with intractable likelihoods, and we combine an ABC-MCMC sampler with so-called "data cloning" for maximum likelihood estimation. The accuracy of ABC methods relies on using a small threshold value when comparing simulations from the model with observed data. The proposed methodology shows how to use large threshold values while the number of data clones is increased to ease convergence towards an approximate maximum likelihood estimate. We show how to exploit the methodology to reduce the number of iterations of a standard ABC-MCMC algorithm, and therefore the computational effort, while obtaining reasonable point estimates. Simulation studies show the good performance of our approach on models with intractable likelihoods such as g-and-k distributions, stochastic differential equations and state-space models.
    Comment: 25 pages. Minor revision; includes a parametric bootstrap for the exact MLE in the first example, and mean bias and RMSE calculations for the third example. Forthcoming in Computational Statistics and Data Analysis.
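    The data-cloning idea can be sketched on a model where the exact MLE is known, so the effect of the clone count K is checkable. A minimal sketch, assuming a Normal-mean toy model, a flat prior, a symmetric random-walk proposal, and an "all K pseudo-datasets must hit within the threshold" acceptance rule; the threshold eps, clone counts, and proposal scale are illustrative, not the paper's settings:

```python
import numpy as np

# Illustrative data-cloning ABC-MCMC sketch. Requiring all K independent
# pseudo-datasets to match within eps raises the (uniform-kernel) ABC
# likelihood to the K-th power, so larger K concentrates the chain near
# the approximate MLE. Toy model: Normal mean, where the MLE is y.mean().

rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 1.0, size=50)
s_obs = y_obs.mean()                      # summary statistic; also the exact MLE

def abc_mcmc_cloned(K, eps=0.3, n_iter=20000, step=0.3):
    theta = s_obs                         # initialise at a pilot estimate
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        # K independent pseudo-datasets simulated at the proposed parameter.
        sims = rng.normal(prop, 1.0, size=(K, y_obs.size)).mean(axis=1)
        # Flat prior + symmetric proposal: accept iff every clone hits.
        if np.all(np.abs(sims - s_obs) < eps):
            theta = prop
        chain.append(theta)
    return np.array(chain[n_iter // 2:])  # discard burn-in

for K in (1, 5, 20):
    print(K, abc_mcmc_cloned(K).mean(), "  target MLE:", s_obs)
```

    Running the loop for increasing K shows the chain's mean tightening around s_obs, which is the behaviour the data-cloning argument predicts.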