Bayesian Core: The Complete Solution Manual
This solution manual contains the unabridged and original solutions to all the exercises proposed in Bayesian Core, along with R programs when necessary.
Comment: 118+vii pages, 21 figures, 152 solutions
On computational tools for Bayesian data analysis
While Robert and Rousseau (2010) addressed the foundational aspects of Bayesian analysis, the current chapter details its practical aspects through a review of the computational methods available for approximating Bayesian procedures. Recent innovations such as Markov chain Monte Carlo, sequential Monte Carlo and, more recently, Approximate Bayesian Computation techniques have considerably increased the potential for Bayesian applications and have also opened new avenues for Bayesian inference, first and foremost Bayesian model choice.
Comment: This is a chapter for the book "Bayesian Methods and Expert Elicitation" edited by Klaus Bocker, 23 pages, 9 figures
Importance sampling methods for Bayesian discrimination between embedded models
This paper surveys some well-established approaches to the approximation of Bayes factors used in Bayesian model choice, mostly as covered in Chen et al. (2000). Our focus here is on methods based on importance sampling strategies rather than variable dimension techniques like reversible jump MCMC, including crude Monte Carlo, maximum likelihood based importance sampling, bridge and harmonic mean sampling, as well as Chib's method based on the exploitation of a functional equality. We demonstrate in this survey how these different methods can be efficiently implemented for testing the significance of a predictive variable in a probit model. Finally, we compare their performance on a real dataset.
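To make the crude importance-sampling estimator concrete, the following Python sketch approximates a marginal likelihood for a toy Gaussian model; the data, the prior, and the importance density g are assumptions for illustration, not the probit setting of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(0.5, 1.0, size=50)          # hypothetical data

# Model: x_i ~ N(theta, 1) with prior theta ~ N(0, 1).  The importance
# density g is chosen close to the posterior (an assumption that keeps
# the weights well behaved).
g = stats.norm(x.mean(), 1.0 / np.sqrt(len(x)))

def log_lik(theta):
    # Log-likelihood of the sample for each value in the array theta.
    return stats.norm(theta, 1.0).logpdf(x[:, None]).sum(axis=0)

N = 100_000
theta = g.rvs(N, random_state=rng)
log_w = log_lik(theta) + stats.norm(0, 1).logpdf(theta) - g.logpdf(theta)
# m(x) = E_g[f(x | theta) pi(theta) / g(theta)], estimated by the average.
log_m = np.logaddexp.reduce(log_w) - np.log(N)
print("log marginal likelihood estimate:", log_m)
```

A Bayes factor is then estimated as the ratio of two such marginal likelihood estimates; the harmonic mean variant replaces g by the posterior itself, at the price of a possibly infinite variance.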
ABC likelihood-free methods for model choice in Gibbs random fields
Gibbs random fields (GRF) are polymorphous statistical models that can be
used to analyse different types of dependence, in particular for spatially
correlated data. However, when those models are faced with the challenge of
selecting a dependence structure from many, the use of standard model choice
methods is hampered by the unavailability of the normalising constant in the
Gibbs likelihood. In particular, from a Bayesian perspective, the computation
of the posterior probabilities of the models under competition requires special
likelihood-free simulation techniques like the Approximate Bayesian Computation
(ABC) algorithm that is intensively used in population genetics. We show in
this paper how to implement an ABC algorithm geared towards model choice in the
general setting of Gibbs random fields, demonstrating in particular that there
exists a sufficient statistic across models. The accuracy of the approximation
to the posterior probabilities can be further improved by importance sampling
on the distribution of the models. The practical aspects of the method are
detailed through two applications, the test of an iid Bernoulli model versus a
first-order Markov chain, and the choice of a folding structure for two
proteins.
Comment: 19 pages, 5 figures, to appear in Bayesian Analysis
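A minimal version of this ABC model-choice scheme on the paper's first example (iid Bernoulli versus first-order Markov chain) might look as follows in Python; the priors, sequence length, and tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(model, n=100):
    """Binary sequence from an iid Bernoulli model (0) or a first-order
    Markov chain (1); parameters drawn from uniform priors (an
    illustrative choice)."""
    if model == 0:
        return rng.random(n) < rng.uniform()
    q = rng.uniform()                       # persistence P(x_t = x_{t-1})
    x = np.empty(n, dtype=bool)
    x[0] = rng.random() < 0.5
    for t in range(1, n):
        x[t] = x[t - 1] if rng.random() < q else not x[t - 1]
    return x

def summary(x):
    # Sufficient statistics of both models, concatenated: number of
    # ones and number of identical consecutive pairs.
    return np.array([x.sum(), (x[1:] == x[:-1]).sum()])

s_obs = summary(simulate(1))                # pretend this is the data

N, eps, kept = 50_000, 4, []
for _ in range(N):
    m = int(rng.integers(2))                # uniform prior over models
    if np.abs(summary(simulate(m)) - s_obs).sum() <= eps:
        kept.append(m)

print("P(Markov | data) approx", np.mean(kept), "from", len(kept), "draws")
```

The summary concatenates the sufficient statistics of the two models, which is the property that makes the ABC approximation to the posterior model probabilities consistent as the tolerance goes to zero.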
Efficient learning in ABC algorithms
Approximate Bayesian Computation has been successfully used in population
genetics to bypass the calculation of the likelihood. These methods provide
accurate estimates of the posterior distribution by comparing the observed
dataset to a sample of datasets simulated from the model. Although
parallelization is easily achieved, computation times for ensuring a suitable
approximation quality of the posterior distribution are still high. To
alleviate the computational burden, we propose an adaptive, sequential
algorithm that runs faster than other ABC algorithms but maintains accuracy of
the approximation. This proposal relies on the sequential Monte Carlo sampler
of Del Moral et al. (2012) but is calibrated to reduce the number of
simulations from the model. The paper concludes with numerical experiments on a toy example and on a population genetic study of Apis mellifera, where our algorithm is shown to be faster than traditional ABC schemes.
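In the same spirit, here is a population Monte Carlo flavoured ABC-SMC sketch with an adaptively shrinking tolerance; it follows a generic adaptive recipe rather than the exact Del Moral et al. (2012) calibration used in the paper, and the toy Gaussian model, prior, and tolerance schedule are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x_obs = rng.normal(1.0, 1.0, 100)           # hypothetical observed data
s_obs = x_obs.mean()
prior = stats.norm(0.0, 10.0)               # assumed prior on theta

def dist(theta):
    # Distance between simulated and observed summary statistic.
    return abs(rng.normal(theta, 1.0, 100).mean() - s_obs)

n = 500
theta = prior.rvs(n, random_state=rng)      # initial population: the prior
w = np.full(n, 1.0 / n)
d = np.array([dist(t) for t in theta])

for _ in range(4):                          # number of rounds is arbitrary
    eps = np.quantile(d, 0.5)               # shrink tolerance adaptively
    tau = np.sqrt(2 * np.cov(theta, aweights=w))
    new_theta, new_d = np.empty(n), np.empty(n)
    for i in range(n):
        while True:                         # move particles at tolerance eps
            cand = rng.choice(theta, p=w) + tau * rng.standard_normal()
            e = dist(cand)
            if e <= eps:
                break
        new_theta[i], new_d[i] = cand, e
    # Importance weight: prior over the kernel mixture that proposed cand.
    kern = stats.norm.pdf((new_theta[:, None] - theta[None, :]) / tau)
    w = prior.pdf(new_theta) / (kern * w).sum(axis=1)
    w /= w.sum()
    theta, d = new_theta, new_d

print("ABC posterior mean approx", np.average(theta, weights=w))
```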
Bayesian Modelling and Inference on Mixtures of Distributions.
Approximate Bayesian Computational methods
Also known as likelihood-free methods, approximate Bayesian computational (ABC) methods have appeared over the past ten years as the most satisfactory approach to intractable likelihood problems, first in genetics and then in a broader spectrum of applications. However, these methods suffer to some degree from calibration difficulties that make their implementation rather volatile and leave them regarded with suspicion by users of more traditional Monte Carlo methods. In this survey, we study the various improvements and extensions made to the original ABC algorithm in recent years.
Comment: 7 figures
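For reference, the original ABC rejection algorithm that these improvements build on can be sketched in a few lines of Python; the toy model, uniform prior, summary statistics, and tolerance below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x_obs = rng.normal(2.0, 1.0, 50)            # hypothetical data
s_obs = np.array([x_obs.mean(), x_obs.std()])

N, eps = 100_000, 0.2                       # tolerance is an assumption
accepted = []
for _ in range(N):
    theta = rng.uniform(-5, 5)              # draw from the (assumed) prior
    x = rng.normal(theta, 1.0, 50)          # simulate a dataset
    s = np.array([x.mean(), x.std()])
    if np.linalg.norm(s - s_obs) <= eps:    # keep theta if summaries match
        accepted.append(theta)

print(len(accepted), "accepted; posterior mean approx", np.mean(accepted))
```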