
    On computational tools for Bayesian data analysis

    While Robert and Rousseau (2010) addressed the foundational aspects of Bayesian analysis, the current chapter details its practical aspects through a review of the computational methods available for approximating Bayesian procedures. Recent innovations such as Markov chain Monte Carlo, sequential Monte Carlo methods and, more recently, Approximate Bayesian Computation techniques have considerably increased the potential for Bayesian applications, and they have also opened new avenues for Bayesian inference, first and foremost Bayesian model choice. Comment: This is a chapter for the book "Bayesian Methods and Expert Elicitation" edited by Klaus Bocker, 23 pages, 9 figures.
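
    The abstract does not spell out any of these techniques; as a rough illustration of the simplest one it mentions, the following is a minimal Approximate Bayesian Computation rejection sampler for a toy Gaussian-mean model. The data-generating model, prior, summary statistic and tolerance are assumptions made for this sketch, not choices taken from the chapter.

        import numpy as np

        # Minimal ABC rejection sampler: data assumed to come from N(theta, 1)
        # with a N(0, 10^2) prior on theta. All choices here are illustrative.
        rng = np.random.default_rng(0)
        observed = rng.normal(2.0, 1.0, size=50)   # stand-in "observed" data
        obs_mean = observed.mean()                 # summary statistic

        def abc_rejection(n_samples, tol=0.1):
            accepted = []
            while len(accepted) < n_samples:
                theta = rng.normal(0.0, 10.0)      # draw a candidate from the prior
                simulated = rng.normal(theta, 1.0, size=50)
                # keep theta when the simulated summary is close to the observed one
                if abs(simulated.mean() - obs_mean) < tol:
                    accepted.append(theta)
            return np.array(accepted)

        posterior_draws = abc_rejection(500)
        print(posterior_draws.mean(), posterior_draws.std())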

    Metropolis Methods for Quantum Monte Carlo Simulations

    Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e., diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, cluster methods for lattice models, the penalty method for coupled electron-ionic systems, and the Bayesian analysis of imaginary time correlation functions. Comment: Proceedings of "Monte Carlo Methods in the Physical Sciences", Celebrating the 50th Anniversary of the Metropolis Algorithm.
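
    As a hedged sketch of the basic Metropolis building block shared by these generalizations, the code below samples |psi(x)|^2 for a single particle in a 1D harmonic well with a Gaussian trial wavefunction, in the spirit of variational Monte Carlo. The trial function, step size and local-energy expression are textbook assumptions for illustration, not details taken from the paper.

        import numpy as np

        # Metropolis random walk sampling |psi(x)|^2 with psi(x) = exp(-alpha * x^2)
        # for the 1D harmonic oscillator; alpha = 0.5 makes the trial function exact.
        rng = np.random.default_rng(1)
        alpha = 0.5

        def log_prob(x):
            return -2.0 * alpha * x * x            # log |psi(x)|^2

        def local_energy(x):
            # E_L(x) = alpha + x^2 * (1/2 - 2 * alpha^2) for this trial function
            return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

        x, step, n_steps = 0.0, 1.0, 20000
        energies = []
        for _ in range(n_steps):
            x_new = x + rng.uniform(-step, step)
            # accept with probability min(1, |psi(x_new)|^2 / |psi(x)|^2)
            if np.log(rng.uniform()) < log_prob(x_new) - log_prob(x):
                x = x_new
            energies.append(local_energy(x))

        print("estimated ground-state energy:", np.mean(energies[2000:]))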

    Monte Carlo Bayesian Reinforcement Learning

    Bayesian reinforcement learning (BRL) encodes prior knowledge of the world in a model and represents uncertainty in model parameters by maintaining a probability distribution over them. This paper presents Monte Carlo BRL (MC-BRL), a simple and general approach to BRL. MC-BRL samples a priori a finite set of hypotheses for the model parameter values and forms a discrete partially observable Markov decision process (POMDP) whose state space is a cross product of the state space for the reinforcement learning task and the sampled model parameter space. The POMDP does not require conjugate distributions for belief representation, as earlier works do, and can be solved relatively easily with point-based approximation algorithms. MC-BRL naturally handles both fully and partially observable worlds. Theoretical and experimental results show that the discrete POMDP approximates the underlying BRL task well with guaranteed performance. Comment: Appears in Proceedings of the 29th International Conference on Machine Learning (ICML 2012).
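
    The core construction, sampling a finite set of parameter hypotheses up front and maintaining a discrete belief over them, can be sketched on a toy problem as follows. The two-armed Bernoulli bandit, the uniform prior and the greedy action rule are assumptions for illustration; they stand in for the POMDP formulation and point-based solver described in the paper.

        import numpy as np

        # Monte Carlo BRL flavour on a 2-armed Bernoulli bandit: sample K hypotheses
        # for the unknown success probabilities, then keep a discrete belief over them.
        rng = np.random.default_rng(2)
        K = 50
        hypotheses = rng.uniform(0.0, 1.0, size=(K, 2))   # sampled parameter values
        belief = np.full(K, 1.0 / K)                      # uniform initial belief
        true_p = np.array([0.3, 0.7])                     # hidden ground truth

        for t in range(200):
            # act greedily with respect to the belief-averaged success probabilities
            expected_p = belief @ hypotheses
            action = int(np.argmax(expected_p))
            reward = int(rng.uniform() < true_p[action])
            # Bayes update of the discrete belief given the observed reward
            likelihood = hypotheses[:, action] if reward else 1.0 - hypotheses[:, action]
            belief = belief * likelihood
            belief /= belief.sum()

        print("belief-averaged estimates:", belief @ hypotheses)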

    Scalable Importance Tempering and Bayesian Variable Selection

    We propose a Monte Carlo algorithm to sample from high-dimensional probability distributions that combines Markov chain Monte Carlo and importance sampling. We provide a careful theoretical analysis, including guarantees on robustness to high dimensionality, explicit comparison with standard Markov chain Monte Carlo methods, and illustrations of the potential improvements in efficiency. Simple and concrete intuition is provided for when the novel scheme is expected to outperform standard schemes. When applied to Bayesian variable-selection problems, the novel algorithm is orders of magnitude more efficient than available alternative sampling schemes and enables fast and reliable fully Bayesian inferences with tens of thousands of regressors. Comment: Online supplement not included.
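
    The abstract does not give the algorithm itself; as a hedged, generic sketch of the importance-tempering idea it builds on, the code below runs a Metropolis chain on a flattened target pi(x)^beta and reweights the draws to estimate expectations under pi(x). The bimodal target, beta and step size are illustrative assumptions, not the paper's scheme.

        import numpy as np

        # Generic importance tempering: explore the tempered density pi(x)^beta with
        # Metropolis, then reweight by pi(x)^(1 - beta) to recover expectations under pi.
        rng = np.random.default_rng(3)

        def log_pi(x):
            # well-separated Gaussian mixture, hard for a plain random-walk chain
            return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

        beta, step, n = 0.2, 2.0, 50000
        x, xs = 0.0, []
        for _ in range(n):
            x_new = x + rng.normal(0.0, step)
            # Metropolis step targeting the tempered density
            if np.log(rng.uniform()) < beta * (log_pi(x_new) - log_pi(x)):
                x = x_new
            xs.append(x)

        xs = np.array(xs)
        log_w = (1.0 - beta) * log_pi(xs)          # self-normalised importance weights
        w = np.exp(log_w - log_w.max())
        print("weighted estimate of E[x]:", np.sum(w * xs) / np.sum(w))   # close to 0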