Optimization Monte Carlo: Efficient and Embarrassingly Parallel Likelihood-Free Inference
We describe an embarrassingly parallel, anytime Monte Carlo method for
likelihood-free models. The algorithm starts with the view that the
stochasticity of the pseudo-samples generated by the simulator can be
controlled externally by a vector of random numbers u, in such a way that the
outcome, knowing u, is deterministic. For each instantiation of u we run an
optimization procedure to minimize the distance between summary statistics of
the simulator and the data. After reweighting these samples using the prior and
the Jacobian (accounting for the change of volume in transforming from the
space of summary statistics to the space of parameters) we show that this
weighted ensemble represents a Monte Carlo estimate of the posterior
distribution. The procedure can be run embarrassingly parallel (each node
handling one sample) and anytime (by allocating resources to the worst
performing sample). The procedure is validated on six experiments.
Comment: NIPS 2015 camera-ready
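The abstract's recipe (freeze the simulator's randomness u, optimize the parameter so the simulated summary statistic matches the data, then weight each solution by the prior and a Jacobian volume correction) can be sketched on a toy 1D model. Everything below is an illustrative assumption, not the paper's code: the simulator, the choice of a grid search as the per-sample optimizer, and the trivial Jacobian all come from the made-up model stat(theta, u) = mean(theta + u).

```python
import numpy as np

def simulator_stat(theta, u):
    # Deterministic given (theta, u): summary statistic of the pseudo-samples.
    return np.mean(theta + u)

rng = np.random.default_rng(0)
y_obs = 2.0                            # observed summary statistic
log_prior = lambda th: -0.5 * th**2    # standard normal prior (log, unnormalised)

grid = np.linspace(-5.0, 5.0, 2001)    # crude stand-in for a real optimiser
samples, weights = [], []
for _ in range(200):
    u = rng.normal(size=50)            # freeze the simulator's randomness
    # Optimise theta so the simulated statistic matches the data.
    dist = (np.array([simulator_stat(t, u) for t in grid]) - y_obs) ** 2
    theta = grid[np.argmin(dist)]
    # Jacobian d(stat)/d(theta) = 1 for this toy model, so the
    # change-of-volume correction is trivial here.
    jac = 1.0
    samples.append(theta)
    weights.append(np.exp(log_prior(theta)) / abs(jac))

weights = np.array(weights) / np.sum(weights)
posterior_mean = float(np.dot(weights, samples))
```

Each loop iteration is independent of the others, which is what makes the method embarrassingly parallel: every node can own one instantiation of u and run its optimization to completion on its own.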
The scheduling of sparse matrix-vector multiplication on a massively parallel dap computer
An efficient data structure is presented which supports general unstructured sparse matrix-vector multiplications on a Distributed Array of Processors (DAP). This approach seeks to reduce the inter-processor data movements and organises the operations in batches of massively parallel steps by a heuristic scheduling procedure performed on the host computer.
The resulting data structure is of particular relevance to iterative schemes for solving linear systems. Performance results for matrices taken from well-known Linear Programming (LP) test problems are presented and analysed.
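The paper's DAP-specific data structure and scheduling heuristic are not reproduced here, but the underlying operation, an unstructured sparse matrix-vector product, can be illustrated with the standard compressed sparse row (CSR) layout. This is a generic sketch for context, not the authors' scheme:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Sparse matrix-vector product y = A @ x with A stored in CSR form.

    Each row is an independent dot product over that row's nonzeros,
    which is why SpMV maps naturally onto parallel processor arrays.
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = indptr[i], indptr[i + 1]   # this row's nonzero slice
        y[i] = np.dot(data[start:end], x[indices[start:end]])
    return y

# 3x3 example:  [[1, 0, 2],
#                [0, 3, 0],
#                [4, 0, 5]]
data    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # nonzero values, row by row
indices = np.array([0, 2, 1, 0, 2])            # column index of each value
indptr  = np.array([0, 2, 3, 5])               # row i spans data[indptr[i]:indptr[i+1]]
x = np.array([1.0, 1.0, 1.0])
y = csr_matvec(data, indices, indptr, x)       # array([3., 3., 9.])
```

The scheduling problem the abstract describes arises because, on a processor array, the gather `x[indices[start:end]]` forces inter-processor data movement; batching the nonzeros so those gathers proceed in lock-step is what the host-side heuristic optimizes.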
Pre-processing for approximate Bayesian computation in image analysis
Most of the existing algorithms for approximate Bayesian computation (ABC)
assume that it is feasible to simulate pseudo-data from the model at each
iteration. However, the computational cost of these simulations can be
prohibitive for high dimensional data. An important example is the Potts model,
which is commonly used in image analysis. Images encountered in real world
applications can have millions of pixels, therefore scalability is a major
concern. We apply ABC with a synthetic likelihood to the hidden Potts model
with additive Gaussian noise. Using a pre-processing step, we fit a binding
function to model the relationship between the model parameters and the
synthetic likelihood parameters. Our numerical experiments demonstrate that the
precomputed binding function dramatically improves the scalability of ABC,
reducing the average runtime required for model fitting from 71 hours to only 7
minutes. We also illustrate the method by estimating the smoothing parameter
for remotely sensed satellite imagery. Without precomputation, Bayesian
inference is impractical for datasets of that scale.
Comment: 5th IMS-ISBA joint meeting (MCMSki IV)
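The pre-processing idea (run the expensive simulator only during an offline training phase, fit a binding function from model parameters to the synthetic-likelihood parameters, then evaluate the likelihood at inference time with no new simulations) can be sketched with a toy model. The simulator, the quadratic polynomial binding function, and the fixed variance below are all illustrative assumptions, not the paper's Potts-model setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_stat(theta, n=100):
    # Hypothetical simulator: stands in for an expensive model whose summary
    # statistic happens to be distributed as Normal(theta**2, 1).
    return rng.normal(theta**2, 1.0, size=n)

# Pre-processing: fit a binding function mapping theta to the
# synthetic-likelihood mean, using simulations at training points only.
train_thetas = np.linspace(0.0, 3.0, 31)
means = np.array([simulate_stat(t).mean() for t in train_thetas])
coeffs = np.polyfit(train_thetas, means, deg=2)   # polynomial binding function

def binding_mean(theta):
    return np.polyval(coeffs, theta)

def synthetic_loglik(theta, y_obs, var=1.0):
    # Inference time: Gaussian synthetic likelihood, NO new simulations.
    mu = binding_mean(theta)
    return -0.5 * (y_obs - mu) ** 2 / var

y_obs = 4.0   # observed statistic; the true theta in this toy model is 2
grid = np.linspace(0.0, 3.0, 301)
theta_hat = grid[np.argmax([synthetic_loglik(t, y_obs) for t in grid])]
```

The runtime gain reported in the abstract comes from exactly this split: the simulation budget is spent once, up front and in parallel over training points, while each posterior evaluation afterwards is a cheap function lookup.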