    Estimating the granularity coefficient of a Potts-Markov random field within an MCMC algorithm

    This paper addresses the problem of estimating the Potts parameter B jointly with the unknown parameters of a Bayesian model within a Markov chain Monte Carlo (MCMC) algorithm. Standard MCMC methods cannot be applied to this problem because performing inference on B requires computing the intractable normalizing constant of the Potts model. In the proposed MCMC method, the estimation of B is conducted using a likelihood-free Metropolis-Hastings algorithm. Experimental results obtained for synthetic data show that estimating B jointly with the other unknown parameters leads to estimation results that are as good as those obtained with the actual value of B. Conversely, fixing B at an assumed value can degrade estimation performance significantly if this value is incorrect. To illustrate the interest of this method, the proposed algorithm is successfully applied to real two-dimensional SAR and three-dimensional ultrasound images.
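    One widely used likelihood-free Metropolis-Hastings scheme for this kind of problem is the exchange algorithm, in which an auxiliary field simulated at the proposed value of B makes the intractable normalising constants cancel in the acceptance ratio. The sketch below illustrates that idea for a K-label Potts model on a 4-neighbour lattice; the function names, the flat prior on B >= 0, and the short Gibbs runs used for the auxiliary simulation are illustrative assumptions, not necessarily the exact algorithm of the paper.

```python
import numpy as np

def potts_stat(x):
    """Sufficient statistic: number of equal-label 4-neighbour pairs."""
    return int(np.sum(x[:, :-1] == x[:, 1:]) + np.sum(x[:-1, :] == x[1:, :]))

def gibbs_potts(B, K, shape, n_sweeps=50, rng=None):
    """Approximate draw from a K-label Potts(B) field by Gibbs sweeps."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.integers(K, size=shape)
    H, W = shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                counts = np.zeros(K)          # neighbours per label
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        counts[x[ni, nj]] += 1
                p = np.exp(B * counts)
                x[i, j] = rng.choice(K, p=p / p.sum())
    return x

def exchange_step(B, x_obs, K, step=0.1, rng=None):
    """One exchange-algorithm update of B (flat prior on B >= 0).

    The auxiliary field z ~ Potts(B') makes the normalising constants
    cancel: log acceptance ratio = (B' - B) * (s(x_obs) - s(z))."""
    rng = np.random.default_rng() if rng is None else rng
    B_prop = B + step * rng.standard_normal()
    if B_prop < 0:                            # outside prior support
        return B
    z = gibbs_potts(B_prop, K, x_obs.shape, rng=rng)
    log_ratio = (B_prop - B) * (potts_stat(x_obs) - potts_stat(z))
    return B_prop if np.log(rng.random()) < log_ratio else B
```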

    Computing the Cramer-Rao bound of Markov random field parameters: Application to the Ising and the Potts models

    This report considers the problem of computing the Cramer-Rao bound for the parameters of a Markov random field. Computing the exact bound is not feasible for most fields of interest because their likelihoods, and the derivatives of those likelihoods, are intractable. We show how the computation of the bound can be formulated as a statistical inference problem that can be solved approximately, but with arbitrarily high accuracy, using a Monte Carlo method. The proposed methodology is successfully applied to the Ising and Potts models, where it is used to assess the performance of three state-of-the-art estimators of the parameter of these Markov random fields.
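    Because the Ising model is an exponential family with unnormalised likelihood exp(theta * s(x)), the Fisher information of theta equals Var[s(X)], so the Cramer-Rao bound 1/Var[s(X)] can be estimated to arbitrary accuracy from Monte Carlo samples of the sufficient statistic. A minimal sketch of that reformulation (our own simplification under these assumptions, not the report's exact procedure):

```python
import numpy as np

def ising_stat(x):
    """Sufficient statistic: sum of spin products over 4-neighbour pairs."""
    return float(np.sum(x[:, :-1] * x[:, 1:]) + np.sum(x[:-1, :] * x[1:, :]))

def gibbs_ising(theta, shape, n_sweeps, rng):
    """Approximate draw from an Ising(theta) field on a 2-D lattice."""
    x = rng.choice([-1, 1], size=shape)
    H, W = shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                s = 0
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        s += x[ni, nj]
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * theta * s))
                x[i, j] = 1 if rng.random() < p_plus else -1
    return x

def mc_cramer_rao(theta, shape=(16, 16), n_samples=200, rng=None):
    """Monte Carlo estimate of the CRB for theta: 1 / Var[s(X)]."""
    rng = np.random.default_rng() if rng is None else rng
    stats = [ising_stat(gibbs_ising(theta, shape, 30, rng))
             for _ in range(n_samples)]
    return 1.0 / np.var(stats)
```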

    Simulation in Statistics

    Simulation has become a standard tool in statistics because it may be the only tool available for analysing some classes of probabilistic models. We review in this paper simulation tools that have been specifically derived to address statistical challenges and, in particular, recent advances in the areas of adaptive Markov chain Monte Carlo (MCMC) algorithms and approximate Bayesian computation (ABC) algorithms.
    Comment: Draft of an advanced tutorial paper for the Proceedings of the 2011 Winter Simulation Conference.
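    As a concrete instance of the ABC family reviewed here, a basic rejection sampler keeps prior draws whose simulated summary statistics land within a tolerance eps of the observed summary. The sketch below is a generic textbook version with illustrative names, not the tutorial's own code; the toy usage infers a Gaussian mean with known variance.

```python
import numpy as np

def abc_rejection(y_obs, prior_sample, simulate, summary, eps, n_draws, rng=None):
    """Basic ABC rejection sampler: accept theta when the simulated
    summary statistic falls within eps of the observed one."""
    rng = np.random.default_rng() if rng is None else rng
    s_obs = summary(y_obs)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if abs(summary(simulate(theta, rng)) - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy usage: infer the mean of a Gaussian with known unit variance.
rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=100)
post = abc_rejection(
    y,
    prior_sample=lambda r: r.normal(0.0, 5.0),          # N(0, 25) prior
    simulate=lambda th, r: r.normal(th, 1.0, size=100),
    summary=np.mean,
    eps=0.05,
    n_draws=20000,
)
```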

    Improving Simulation Efficiency of MCMC for Inverse Modeling of Hydrologic Systems with a Kalman-Inspired Proposal Distribution

    Bayesian analysis is widely used in science and engineering for real-time forecasting, decision making, and to help unravel the processes that explain the observed data. These data are deterministic and/or stochastic transformations of the underlying parameters. A key task is then to summarize the posterior distribution of these parameters. When models become too difficult to analyze analytically, Monte Carlo methods can be used to approximate the target distribution. Of these, Markov chain Monte Carlo (MCMC) methods are particularly powerful. Such methods generate a random walk through the parameter space and, under strict conditions of reversibility and ergodicity, successively visit solutions with frequency proportional to the underlying target density. This requires a proposal distribution that generates candidate solutions starting from an arbitrary initial state. The speed at which the sampled chains converge to the target distribution, however, deteriorates rapidly with increasing parameter dimensionality. In this paper, we introduce a new proposal distribution that significantly enhances the efficiency of MCMC simulation for highly parameterized models. This proposal distribution exploits the cross-covariance of model parameters, measurements, and model outputs, and generates candidate states much like the analysis step of the Kalman filter. We embed the Kalman-inspired proposal distribution in the DREAM algorithm during burn-in and present several numerical experiments with complex, high-dimensional, or multi-modal target distributions. Results demonstrate that this new proposal distribution can greatly improve the simulation efficiency of MCMC. Specifically, we observe a speed-up on the order of 10-30 times for groundwater models with more than one hundred parameters.
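    A hedged sketch of the core idea as the abstract describes it: use the empirical cross-covariance between past parameter samples and their model outputs to build a Kalman-analysis-like candidate. The names and interfaces below are assumptions for illustration; in the paper the proposal is embedded in DREAM during burn-in, and a Metropolis acceptance step (not shown) is still required for correctness.

```python
import numpy as np

def kalman_proposal(x, y_obs, forward, chain_X, chain_Y, obs_cov, rng):
    """Kalman-inspired candidate generation (proposal step only).

    chain_X: past parameter samples, shape (n, d).
    chain_Y: their simulated model outputs, shape (n, m).
    The candidate moves the current state x toward the data along the
    empirical cross-covariance, like an ensemble Kalman analysis step."""
    Xc = chain_X - chain_X.mean(axis=0)
    Yc = chain_Y - chain_Y.mean(axis=0)
    n = chain_X.shape[0]
    C_xy = Xc.T @ Yc / (n - 1)                # parameter/output cross-cov
    C_yy = Yc.T @ Yc / (n - 1) + obs_cov      # output cov plus obs noise
    K = C_xy @ np.linalg.inv(C_yy)            # "Kalman gain"
    y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), obs_cov)
    return x + K @ (y_pert - forward(x))      # candidate state
```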

    Efficient Recursions for General Factorisable Models

    Let n S-valued categorical variables be jointly distributed according to a distribution known only up to an unknown normalising constant. For an unnormalised joint likelihood expressible as a product of factors, we give an algebraic recursion which can be used for computing the normalising constant and other summations. A saving in computation is achieved when each factor contains a lagged subset of the components combining in the joint distribution, with maximum computational efficiency as the subsets attain their minimum size. If each subset contains at most r+1 of the n components in the joint distribution, we term this a lag-r model, whose normalising constant can be computed using a forward recursion in O(S^(r+1)) computations, as opposed to O(S^n) for the direct computation. We show how a lag-r model represents a Markov random field and allows a neighbourhood structure to be related to the unnormalised joint likelihood. We illustrate the method by showing how the normalising constant of the Ising or autologistic model can be computed.
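    For the simplest case, a lag-1 Ising chain with unnormalised likelihood exp(theta * sum_i x_i x_{i+1}), the forward recursion f_{k+1}(x_{k+1}) = sum_{x_k} f_k(x_k) exp(theta * x_k * x_{k+1}) yields the normalising constant in O(n * S^2) operations instead of O(S^n). A minimal sketch with a brute-force sanity check (our own illustration of the recursion, not the paper's code):

```python
import numpy as np
from itertools import product

def chain_log_Z(theta, n, states=(-1, 1)):
    """Forward recursion for log Z of a lag-1 Ising chain.

    Unnormalised likelihood: exp(theta * sum_i x_i x_{i+1}).
    Cost O(n * S^2) instead of O(S^n) for direct summation (r = 1)."""
    S = len(states)
    f = np.ones(S)                            # f_1(x_1) = 1
    for _ in range(n - 1):
        g = np.zeros(S)
        for j, xj in enumerate(states):       # next component
            for i, xi in enumerate(states):   # summed-out component
                g[j] += f[i] * np.exp(theta * xi * xj)
        f = g
    return np.log(f.sum())

# Sanity check against brute-force enumeration on a short chain.
theta, n = 0.4, 8
Z_brute = sum(np.exp(theta * sum(x[i] * x[i + 1] for i in range(n - 1)))
              for x in product((-1, 1), repeat=n))
assert np.isclose(chain_log_Z(theta, n), np.log(Z_brute))
```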

    Patterns of Scalable Bayesian Inference

    Datasets are growing not just in size but in complexity, creating a demand for rich models and quantification of uncertainty. Bayesian methods are an excellent fit for this demand, but scaling Bayesian inference is a challenge. In response to this challenge, there has been considerable recent work based on varying assumptions about model structure, underlying computational resources, and the importance of asymptotic correctness. As a result, there is a zoo of ideas with few clear overarching principles. In this paper, we seek to identify unifying principles, patterns, and intuitions for scaling Bayesian inference. We review existing work on utilizing modern computing resources with both MCMC and variational approximation techniques. From this taxonomy of ideas, we characterize the general principles that have proven successful for designing scalable inference procedures and comment on the path forward.
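    One recurring pattern in this literature is embarrassingly parallel MCMC: split the data into shards, sample each subposterior independently, and recombine the draws. The sketch below shows a consensus-Monte-Carlo-style combination with inverse-covariance weights, which is exact when every subposterior is Gaussian; it is one illustrative pattern among those surveyed, not the paper's own method.

```python
import numpy as np

def consensus_combine(shard_draws):
    """Combine subposterior MCMC draws by inverse-covariance weighting.

    shard_draws: list of (n_draws, d) arrays, one per data shard.
    Exact when every subposterior is Gaussian; an approximation otherwise."""
    weights = [np.linalg.inv(np.atleast_2d(np.cov(d, rowvar=False)))
               for d in shard_draws]
    W = np.linalg.inv(sum(weights))           # normaliser of the weights
    n = min(d.shape[0] for d in shard_draws)
    combined = np.empty((n, shard_draws[0].shape[1]))
    for t in range(n):
        combined[t] = W @ sum(w @ d[t] for w, d in zip(weights, shard_draws))
    return combined
```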

    Collaborative sparse regression using spatially correlated supports - Application to hyperspectral unmixing

    This paper presents a new Bayesian collaborative sparse regression method for linear unmixing of hyperspectral images. Our contribution is twofold: first, we propose a new Bayesian model for structured sparse regression in which the supports of the sparse abundance vectors are a priori spatially correlated across pixels (i.e., materials are spatially organised rather than randomly distributed at a pixel level). This prior information is encoded in the model through a truncated multivariate Ising Markov random field, which also takes into account the facts that pixels cannot be empty (i.e., there is at least one material present in each pixel) and that different materials may exhibit different degrees of spatial regularity. Second, we propose an advanced Markov chain Monte Carlo algorithm to estimate the posterior probabilities that materials are present or absent in each pixel and, conditionally on the maximum marginal a posteriori configuration of the support, to compute the MMSE estimates of the abundance vectors. A remarkable property of this algorithm is that it self-adjusts the values of the parameters of the Markov random field, thus relieving practitioners from setting regularisation parameters by cross-validation. The performance of the proposed methodology is finally demonstrated through a series of experiments with synthetic and real data and comparisons with other algorithms from the literature.
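    The support-update step of such a sampler can be pictured as a single-site Gibbs sweep over binary indicators, whose conditional log-odds add a pixelwise likelihood ratio to the Ising prior term. The sketch below is a deliberately simplified, hypothetical version: it handles a single material, omits the truncation that forbids empty pixels, and holds the field parameter fixed rather than self-adjusting it as described in the abstract.

```python
import numpy as np

def gibbs_support_sweep(Z, beta, log_lik_ratio, rng):
    """One Gibbs sweep over a binary support field Z (H x W, 0/1 entries).

    beta: Ising granularity parameter (held fixed here for simplicity).
    log_lik_ratio[i, j] = log p(y_ij | z_ij = 1) - log p(y_ij | z_ij = 0)."""
    H, W = Z.shape
    for i in range(H):
        for j in range(W):
            s = 0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    s += 2 * Z[ni, nj] - 1    # neighbours recoded as +/-1
            # conditional log-odds: data term plus Ising prior term
            log_odds = log_lik_ratio[i, j] + 2.0 * beta * s
            Z[i, j] = int(rng.random() < 1.0 / (1.0 + np.exp(-log_odds)))
    return Z
```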