Connecting the Dots: Towards Continuous Time Hamiltonian Monte Carlo
Continuous time Hamiltonian Monte Carlo is introduced as a powerful
alternative to Markov chain Monte Carlo methods for continuous target
distributions. The method is constructed in two steps: first, Hamiltonian
dynamics are chosen as the deterministic dynamics in a continuous time
piecewise deterministic Markov process. Under very mild restrictions, such a
process will have the desired target distribution as an invariant distribution.
Second, the numerical implementation of such processes, based on adaptive
numerical integration of second-order ordinary differential equations, is
considered. The numerical implementation yields an approximate, yet highly
robust algorithm that, unlike conventional Hamiltonian Monte Carlo, enables the
exploitation of the complete Hamiltonian trajectories (hence the title). The
proposed algorithm may yield large speedups and improvements in stability
relative to relevant benchmarks, while incurring numerical errors that are
negligible relative to the overall Monte Carlo errors.
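For contrast with the continuous-time method described above, here is a minimal sketch of conventional Hamiltonian Monte Carlo, which accepts or rejects only the trajectory endpoint. The target (a standard normal), the leapfrog settings, and all names are illustrative choices, not taken from the paper.

```python
import math
import random

def grad_neg_log_p(q):
    # For a standard normal target: -d/dq log p(q) = q.
    return q

def leapfrog(q, p, eps, n_steps):
    """Standard leapfrog integration of Hamiltonian dynamics."""
    p -= 0.5 * eps * grad_neg_log_p(q)
    for _ in range(n_steps - 1):
        q += eps * p
        p -= eps * grad_neg_log_p(q)
    q += eps * p
    p -= 0.5 * eps * grad_neg_log_p(q)
    return q, p

def hmc(n_samples, eps=0.2, n_steps=10, seed=0):
    rng = random.Random(seed)
    q, samples = 0.0, []
    for _ in range(n_samples):
        p0 = rng.gauss(0.0, 1.0)            # resample momentum
        q_new, p_new = leapfrog(q, p0, eps, n_steps)
        h_old = 0.5 * q * q + 0.5 * p0 * p0           # H = U(q) + K(p)
        h_new = 0.5 * q_new * q_new + 0.5 * p_new * p_new
        if math.log(rng.random()) < h_old - h_new:    # Metropolis correction
            q = q_new                        # only the endpoint is used
        samples.append(q)
    return samples

samples = hmc(5000)
```

Note how the intermediate leapfrog states are discarded; the paper's point is that a continuous-time formulation lets the whole trajectory contribute.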
A Coverage Study of the CMSSM Based on ATLAS Sensitivity Using Fast Neural Networks Techniques
We assess the coverage properties of confidence and credible intervals on the
CMSSM parameter space inferred from a Bayesian posterior and the profile
likelihood based on an ATLAS sensitivity study. In order to make those
calculations feasible, we introduce a new method based on neural networks to
approximate the mapping between CMSSM parameters and weak-scale particle
masses. Our method reduces the computational effort needed to sample the CMSSM
parameter space by a factor of ~ 10^4 with respect to conventional techniques.
We find that both the Bayesian posterior and the profile likelihood intervals
can significantly over-cover, and we trace the origin of this effect to
physical boundaries in the parameter space. Finally, we point out that the effects
intrinsic to the statistical procedure are conflated with simplifications to
the likelihood functions from the experiments themselves.
Comment: Further checks about accuracy of neural network approximation, fixed
typos, added refs. Main results unchanged. Matches version accepted by JHEP.
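The surrogate idea above, replacing an expensive forward map with a trained network, can be sketched in miniature. Everything here is a toy stand-in: a one-hidden-layer tanh network fit by per-sample gradient descent to sin(x) in place of the real CMSSM-to-mass mapping; none of the architecture or training details come from the paper.

```python
import math
import random

rng = random.Random(1)
H = 8                                        # hidden units (toy choice)
w1 = [rng.uniform(-1.0, 1.0) for _ in range(H)]
b1 = [0.0] * H
w2 = [rng.uniform(-1.0, 1.0) for _ in range(H)]
b2 = 0.0

def forward(x):
    """Evaluate the surrogate network; returns (output, hidden activations)."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

# "Expensive" forward-map evaluations used as training data.
xs = [3.0 * i / 40 for i in range(41)]
ys = [math.sin(x) for x in xs]

lr = 0.02
for _ in range(2000):                        # per-sample gradient descent
    for x, y in zip(xs, ys):
        yhat, h = forward(x)
        d = yhat - y                         # dLoss/dyhat for 0.5*(yhat-y)^2
        for j in range(H):
            dh = d * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * d * h[j]
            w1[j] -= lr * dh * x
            b1[j] -= lr * dh
        b2 -= lr * d

# Once trained, forward() is orders of magnitude cheaper than the true map.
max_err = max(abs(forward(x)[0] - math.sin(x)) for x in xs)
```

The claimed ~10^4 speedup in the paper comes from exactly this substitution: after training, each surrogate call costs a handful of arithmetic operations instead of a full spectrum calculation.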
Bayesian coherent analysis of in-spiral gravitational wave signals with a detector network
The present operation of the ground-based network of gravitational-wave laser
interferometers in "enhanced" configuration brings the search for gravitational
waves into a regime where detection is highly plausible. The development of
techniques that allow us to discriminate a signal of astrophysical origin from
instrumental artefacts in the interferometer data and to extract the full range
of information is among the primary goals of the current work. Here we
report the details of a Bayesian approach to the problem of inference for
gravitational wave observations using a network of instruments, for the
computation of the Bayes factor between two hypotheses and the evaluation of
the marginalised posterior density functions of the unknown model parameters.
The numerical algorithm to tackle the notoriously difficult problem of the
evaluation of large multi-dimensional integrals is based on a technique known
as Nested Sampling, which provides an attractive alternative to more
traditional Markov-chain Monte Carlo (MCMC) methods. We discuss the details of
the implementation of this algorithm and its performance against a Gaussian
model of the background noise, considering the specific case of the signal
produced by the in-spiral of binary systems of black holes and/or neutron
stars, although the method is completely general and can be applied to other
classes of sources. We also demonstrate the utility of this approach by
introducing a new coherence test to distinguish between the presence of a
coherent signal of astrophysical origin in the data of multiple instruments and
the presence of incoherent accidental artefacts, and the effects on the
estimation of the source parameters as a function of the number of instruments
in the network.
Comment: 22 pages.
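Nested Sampling, the technique this analysis builds on, can be illustrated on a toy evidence integral: a uniform prior on [-5, 5] with a unit-normal likelihood, whose analytic evidence is close to 0.1. This is a bare-bones sketch with naive rejection sampling for the constrained prior draws (workable only in low dimension), not the authors' implementation.

```python
import math
import random

rng = random.Random(2)

def loglike(theta):
    # Unit-normal log-likelihood (the toy "signal model").
    return -0.5 * theta ** 2 - 0.5 * math.log(2 * math.pi)

N = 200                                      # live points
live = [rng.uniform(-5.0, 5.0) for _ in range(N)]
logZ, logX = -math.inf, 0.0                  # log evidence, log prior volume
for _ in range(1500):
    i = min(range(N), key=lambda j: loglike(live[j]))
    logL = loglike(live[i])                  # current likelihood floor
    # Weight of the discarded point: L_i * (X_{i-1} - X_i).
    logw = logL + logX + math.log(1.0 - math.exp(-1.0 / N))
    hi, lo = max(logZ, logw), min(logZ, logw)
    logZ = hi + math.log(1.0 + math.exp(lo - hi))    # logaddexp
    # Replace the worst point with a prior draw above the floor.
    while True:
        cand = rng.uniform(-5.0, 5.0)
        if loglike(cand) > logL:
            live[i] = cand
            break
    logX -= 1.0 / N                          # deterministic shrinkage

Z = math.exp(logZ)                           # should be close to 0.1
```

The key contrast with MCMC, and the reason the abstract calls it attractive, is that the evidence (Bayes factor numerator) falls out directly from the accumulated weights.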
Bayesian Analysis of ODE's: solver optimal accuracy and Bayes factors
In most relevant cases in the Bayesian analysis of ODE inverse problems, a
numerical solver needs to be used. Therefore, we cannot work with the exact
theoretical posterior distribution but only with an approximate posterior
that inherits the error of the numerical solver. To compare the numerical and the
theoretical posterior distributions we propose to use Bayes Factors (BF),
considering both of them as models for the data at hand. We prove that the
BF of the theoretical versus a numerical posterior tends to 1 at the same
order, in the solver step size, as the numerical forward-map solver converges.
For higher-order solvers (e.g. Runge-Kutta) the Bayes factor is already nearly
1 at step sizes that require far less computational effort. Considerable CPU time may be
saved by using coarser solvers that nevertheless produce practically error free
posteriors. Two examples are presented where nearly 90% CPU time is saved while
all inference results are identical to using a solver with a much finer time
step.
Comment: 28 pages, 6 figures.
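The computational point above can be seen on a toy ODE, dy/dt = -y with y(0) = 1 and exact solution exp(-1) at t = 1: a fourth-order Runge-Kutta step at h = 0.1 (40 right-hand-side calls) beats explicit Euler at h = 0.001 (1000 calls). The example and step sizes are ours, not the paper's.

```python
import math

def euler(f, y, t, h, n):
    # First-order explicit Euler: global error O(h).
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y, t, h, n):
    # Classical fourth-order Runge-Kutta: global error O(h^4).
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

f = lambda t, y: -y
exact = math.exp(-1.0)
err_euler = abs(euler(f, 1.0, 0.0, 1e-3, 1000) - exact)  # 1000 RHS calls
err_rk4 = abs(rk4(f, 1.0, 0.0, 0.1, 10) - exact)         # 40 RHS calls
```

The coarse RK4 run is both cheaper and far more accurate, which is exactly the regime where the paper's Bayes factor is already indistinguishable from 1.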
A Bayesian periodogram finds evidence for three planets in HD 11964
A Bayesian multi-planet Kepler periodogram has been developed for the
analysis of precision radial velocity data (Gregory 2005b and 2007). The
periodogram employs a parallel tempering Markov chain Monte Carlo algorithm.
The HD 11964 data (Butler et al. 2006) has been re-analyzed using 1, 2, 3 and 4
planet models. Assuming that all the models are equally probable a priori, the
three planet model is found to be >= 600 times more probable than the next most
probable model which is a two planet model. The most probable model exhibits
three periods of 38.02+0.06-0.05, 360+-4 and 1924+44-43 d, and eccentricities
of 0.22+0.11-0.22, 0.63+0.34-0.17 and 0.05+0.03-0.05, respectively. Assuming
the three signals (each one consistent with a Keplerian orbit) are caused by
planets, the corresponding limits on planetary mass (M sin i) and semi-major
axis are 0.090+0.15-0.14 M_J, 0.253+-0.009 au, 0.21+0.06-0.07 M_J, 1.13+-0.04
au, 0.77+-0.08 M_J, 3.46+-0.13 au, respectively. The small difference (1.3
sigma) between the 360-day period and one year suggests that it might be worth
investigating the barycentric correction for the HD 11964 data.
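The parallel tempering ingredient of the periodogram can be sketched on a toy two-mode target that a single cold chain would rarely cross. The ladder of temperatures, the target, and all tuning constants below are illustrative choices, not the authors' settings.

```python
import math
import random

rng = random.Random(3)

def logp(x):
    # Mixture of two well-separated Gaussians at +-3 with sd 0.5.
    a = -0.5 * ((x + 3.0) / 0.5) ** 2
    b = -0.5 * ((x - 3.0) / 0.5) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

betas = [1.0, 0.5, 0.2, 0.05]     # inverse-temperature ladder
x = [0.0] * len(betas)            # one state per tempered chain
cold = []
for _ in range(20000):
    for k, beta in enumerate(betas):
        # Random-walk Metropolis step on the tempered target p(x)^beta.
        prop = x[k] + rng.gauss(0.0, 1.0)
        if math.log(rng.random()) < beta * (logp(prop) - logp(x[k])):
            x[k] = prop
    # Propose swapping one adjacent pair of chains.
    k = rng.randrange(len(betas) - 1)
    dlog = (betas[k] - betas[k + 1]) * (logp(x[k + 1]) - logp(x[k]))
    if math.log(rng.random()) < dlog:
        x[k], x[k + 1] = x[k + 1], x[k]
    cold.append(x[0])             # keep only the beta = 1 chain

frac_right = sum(1 for v in cold if v > 0) / len(cold)
```

The hot chains flatten the barrier between modes and swaps pass those crossings down to the cold chain, which is what lets the periodogram explore widely separated period solutions.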
An Efficient Interpolation Technique for Jump Proposals in Reversible-Jump Markov Chain Monte Carlo Calculations
Selection among alternative theoretical models given an observed data set is
an important challenge in many areas of physics and astronomy. Reversible-jump
Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for
performing Bayesian model selection, but it suffers from a fundamental
difficulty: it requires jumps between model parameter spaces, but cannot
efficiently explore both parameter spaces at once. Thus, a naive jump between
parameter spaces is unlikely to be accepted in the MCMC algorithm and
convergence is correspondingly slow. Here we demonstrate an interpolation
technique that uses samples from single-model MCMCs to propose inter-model
jumps from an approximation to the single-model posterior of the target
parameter space. The interpolation technique, based on a kD-tree data
structure, is adaptive and efficient in modest dimensionality. We show that our
technique leads to improved convergence over naive jumps in an RJMCMC, and
compare it to other proposals in the literature to improve the convergence of
RJMCMCs. We also demonstrate the use of the same interpolation technique as a
way to construct efficient "global" proposal distributions for single-model
MCMCs without prior knowledge of the structure of the posterior distribution,
and discuss improvements that permit the method to be used in
higher-dimensional spaces efficiently.
Comment: Minor revision to match published version.
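The core proposal idea, drawing inter-model jumps from an approximation to the target model's posterior built out of stored single-model samples, can be shown in miniature. The paper accelerates the lookup with a kD-tree; this stand-in uses a brute-force kernel-density draw, and the stored samples and bandwidth are invented for illustration.

```python
import random

rng = random.Random(4)

# Pretend these are samples from an earlier single-model MCMC run
# whose posterior is roughly N(2, 1).
stored = [rng.gauss(2.0, 1.0) for _ in range(5000)]
bandwidth = 0.1

def propose_jump():
    """Draw from a KDE built on the stored samples: pick one at random
    and jitter it, so proposals land where the target posterior has mass."""
    return rng.choice(stored) + rng.gauss(0.0, bandwidth)

draws = [propose_jump() for _ in range(5000)]
mean = sum(draws) / len(draws)
```

Because proposals concentrate where the single-model posterior does, the Metropolis-Hastings acceptance rate for inter-model jumps is far higher than for a naive wide proposal; the kD-tree in the paper serves to make the density of this proposal cheap to evaluate for the Hastings ratio.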
BAMBI: blind accelerated multimodal Bayesian inference
In this paper we present an algorithm for rapid Bayesian analysis that
combines the benefits of nested sampling and artificial neural networks. The
blind accelerated multimodal Bayesian inference (BAMBI) algorithm implements
the MultiNest package for nested sampling as well as the training of an
artificial neural network (NN) to learn the likelihood function. In the case of
computationally expensive likelihoods, this allows the substitution of a much
more rapid approximation, significantly increasing the speed of the
analysis. We begin by demonstrating, with a few toy examples, the ability of a
NN to learn complicated likelihood surfaces. BAMBI's ability to decrease
running time for Bayesian inference is then demonstrated in the context of
estimating cosmological parameters from Wilkinson Microwave Anisotropy Probe
and other observations. We show that valuable speed increases are achieved in
addition to obtaining NNs trained on the likelihood functions for the different
model and data combinations. These NNs can then be used for an even faster
follow-up analysis using the same likelihood and different priors. This is a
fully general algorithm that can be applied, without any pre-processing, to
other problems with computationally expensive likelihood functions.
Comment: 12 pages, 8 tables, 17 figures; accepted by MNRAS; v2 to reflect
minor changes in published version.
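BAMBI's control flow, calling the expensive likelihood while training an emulator on the side and switching over once the emulator is accurate enough, can be paraphrased in a few lines. This is our own sketch, not the released code: the "network" is replaced by a nearest-neighbour lookup, and the likelihood, tolerance, and loop are all toy stand-ins.

```python
import random

rng = random.Random(5)

def expensive_loglike(x):
    # Stand-in for a costly likelihood evaluation.
    return -0.5 * x * x

history = []                      # (x, loglike) pairs evaluated so far

def surrogate(x):
    """Toy emulator: predict from the nearest previously evaluated point.
    BAMBI instead trains a neural network on `history`."""
    nearest = min(history, key=lambda p: abs(p[0] - x))
    return nearest[1]

use_surrogate, tol = False, 0.1
calls = 0
for _ in range(3000):             # stand-in for the sampler's main loop
    x = rng.uniform(-3.0, 3.0)
    if use_surrogate:
        ll = surrogate(x)         # cheap emulator call
    else:
        ll = expensive_loglike(x)
        calls += 1
        history.append((x, ll))
        if len(history) > 500:    # periodically test emulator accuracy
            pts = [rng.uniform(-3.0, 3.0) for _ in range(20)]
            err = max(abs(surrogate(p) - expensive_loglike(p)) for p in pts)
            use_surrogate = err < tol

speedup_fraction = 1 - calls / 3000
```

The fraction of avoided expensive calls is where the speed-up in the abstract comes from, and the trained emulator persists for the follow-up analyses with different priors that the authors describe.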