Generalizing Informed Sampling for Asymptotically Optimal Sampling-based Kinodynamic Planning via Markov Chain Monte Carlo
Asymptotically-optimal motion planners such as RRT* have been shown to
incrementally approximate the shortest path between start and goal states. Once
an initial solution is found, their performance can be dramatically improved by
restricting subsequent samples to regions of the state space that can
potentially improve the current solution. When the motion planning problem lies
in a Euclidean space, this region, called the informed set, can be
sampled directly. However, when planning with differential constraints in
non-Euclidean state spaces, no analytic solution exists for sampling it
directly.
State-of-the-art approaches to sampling in such domains, such as
Hierarchical Rejection Sampling (HRS), may still be slow in high-dimensional
state spaces. This may cause the planning algorithm to spend most of its time
trying to produce samples in the informed set rather than exploring it. In this paper,
we suggest an alternative approach to produce samples in the informed set
for a wide range of settings. Our main insight is to recast this
problem as one of sampling uniformly within the sub-level-set of an implicit
non-convex function. This recasting enables us to apply Monte Carlo sampling
methods, used very effectively in the Machine Learning and Optimization
communities, to solve our problem. We show for a wide range of scenarios that
using our sampler can accelerate the convergence rate to high-quality solutions
in high-dimensional problems.
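The abstract's key recasting, sampling uniformly within the sub-level set of an implicit non-convex function, can be sketched with a simple Markov chain whose target is uniform on that set. This is a hedged illustration only: `cost`, `threshold`, and the step size stand in for the problem-specific cost-to-go and current best solution cost, not the paper's actual sampler.

```python
import numpy as np

def sample_sublevel_set(cost, threshold, x0, n_samples, step=0.1, rng=None):
    """Random-walk Markov chain targeting the uniform distribution on
    the sub-level set {x : cost(x) <= threshold}. Illustrative sketch;
    `cost` and `threshold` are placeholders for a problem-specific
    cost function and the current best solution cost."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    assert cost(x) <= threshold, "chain must start inside the set"
    samples = []
    for _ in range(n_samples):
        proposal = x + step * rng.standard_normal(x.shape)
        # Uniform target on the set: accept iff the proposal stays inside.
        if cost(proposal) <= threshold:
            x = proposal
        samples.append(x.copy())
    return np.array(samples)

# Toy example: sample the sub-level set {x : ||x||^2 <= 1} in 2-D.
chain = sample_sublevel_set(lambda x: x @ x, 1.0, np.zeros(2), 2000)
```

Because rejected proposals repeat the current state, every retained sample lies inside the set by construction, which is what makes the recast problem amenable to standard Monte Carlo machinery.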
A Shuffled Complex Evolution Metropolis algorithm for optimization and uncertainty assessment of hydrologic model parameters
Markov Chain Monte Carlo (MCMC) methods have become increasingly popular for estimating the posterior probability distribution of parameters in hydrologic models. However, MCMC methods require the a priori definition of a proposal or sampling distribution, which determines the explorative capabilities and efficiency of the sampler and therefore the statistical properties of the Markov Chain and its rate of convergence. In this paper we present an MCMC sampler entitled the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), which is well suited to infer the posterior distribution of hydrologic model parameters. The SCEM-UA algorithm is a modified version of the original SCE-UA global optimization algorithm developed by Duan et al. [1992]. The SCEM-UA algorithm operates by merging the strengths of the Metropolis algorithm, controlled random search, competitive evolution, and complex shuffling in order to continuously update the proposal distribution and evolve the sampler to the posterior target distribution. Three case studies demonstrate that the adaptive capability of the SCEM-UA algorithm significantly reduces the number of model simulations needed to infer the posterior distribution of the parameters when compared with the traditional Metropolis-Hastings samplers.
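The abstract's point about the proposal distribution determining sampler efficiency is easiest to see against the baseline it improves upon: plain Metropolis with a fixed Gaussian proposal. The sketch below is that baseline, not SCEM-UA itself; the function names and the toy target are illustrative.

```python
import numpy as np

def metropolis(log_post, x0, n_iter, prop_std=0.5, rng=None):
    """Plain Metropolis sampler with a fixed Gaussian proposal -- the
    baseline whose proposal SCEM-UA adapts on the fly. Sketch only;
    names and defaults are illustrative, not the paper's code."""
    rng = np.random.default_rng(rng)
    x = float(x0)
    lp = log_post(x)
    chain, accepted = [], 0
    for _ in range(n_iter):
        y = x + prop_std * rng.standard_normal()
        lp_y = log_post(y)
        # Metropolis accept/reject step (symmetric proposal).
        if np.log(rng.random()) < lp_y - lp:
            x, lp = y, lp_y
            accepted += 1
        chain.append(x)
    return np.array(chain), accepted / n_iter

# Standard-normal target: efficiency depends strongly on prop_std,
# which is exactly the tuning burden adaptive schemes aim to remove.
chain, rate = metropolis(lambda x: -0.5 * x * x, 0.0, 5000, prop_std=2.5)
```

A poorly chosen `prop_std` yields either near-zero acceptance (steps too large) or a slowly diffusing chain (steps too small); SCEM-UA's evolving proposal sidesteps this manual tuning.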
Patterns of Scalable Bayesian Inference
Datasets are growing not just in size but in complexity, creating a demand
for rich models and quantification of uncertainty. Bayesian methods are an
excellent fit for this demand, but scaling Bayesian inference is a challenge.
In response to this challenge, there has been considerable recent work based on
varying assumptions about model structure, underlying computational resources,
and the importance of asymptotic correctness. As a result, there is a zoo of
ideas with few clear overarching principles.
In this paper, we seek to identify unifying principles, patterns, and
intuitions for scaling Bayesian inference. We review existing work on utilizing
modern computing resources with both MCMC and variational approximation
techniques. From this taxonomy of ideas, we characterize the general principles
that have proven successful for designing scalable inference procedures and
comment on the path forward.
Global parameter identification of stochastic reaction networks from single trajectories
We consider the problem of inferring the unknown parameters of a stochastic
biochemical network model from a single measured time-course of the
concentration of some of the involved species. Such measurements are available,
e.g., from live-cell fluorescence microscopy in image-based systems biology. In
addition, fluctuation time-courses from, e.g., fluorescence correlation
spectroscopy provide additional information about the system dynamics that can
be used to more robustly infer parameters than when considering only mean
concentrations. Estimating model parameters from a single experimental
trajectory enables single-cell measurements and quantification of cell--cell
variability. We propose a novel combination of an adaptive Monte Carlo sampler,
called Gaussian Adaptation, and efficient exact stochastic simulation
algorithms that allows parameter identification from single stochastic
trajectories. We benchmark the proposed method on a linear and a non-linear
reaction network at steady state and during transient phases. In addition, we
demonstrate that the present method also provides an ellipsoidal volume
estimate of the viable part of parameter space and is able to estimate the
physical volume of the compartment in which the observed reactions take place.
Comment: Article in print as a book chapter in Springer's "Advances in Systems Biology".
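The "efficient exact stochastic simulation algorithms" the abstract pairs with Gaussian Adaptation are typically variants of Gillespie's SSA. Below is a minimal SSA for a birth-death network (∅ → X at rate `k_birth`; X → ∅ at rate `k_death·x`); the concrete network and parameter names are illustrative stand-ins, not the networks benchmarked in the paper.

```python
import numpy as np

def gillespie_birth_death(k_birth, k_death, x0, t_end, rng=None):
    """Gillespie's exact stochastic simulation algorithm (SSA) for a
    birth-death reaction network. Illustrative sketch of the class of
    exact simulators the abstract refers to."""
    rng = np.random.default_rng(rng)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a_birth = k_birth          # propensity of 0 -> X
        a_death = k_death * x      # propensity of X -> 0
        a_total = a_birth + a_death
        if a_total == 0.0:
            break
        # Time to next reaction is exponential with rate a_total.
        t += rng.exponential(1.0 / a_total)
        # Choose which reaction fires, proportional to its propensity.
        if rng.random() < a_birth / a_total:
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_birth_death(10.0, 1.0, 0, 50.0)
# At steady state the copy number is Poisson with mean k_birth / k_death.
```

Each run of such a simulator produces one stochastic trajectory, which is exactly the object the inference scheme compares against the single measured time-course.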
Bayesian Optimization for Adaptive MCMC
This paper proposes a new randomized strategy for adaptive MCMC using
Bayesian optimization. This approach applies to non-differentiable objective
functions and trades off exploration and exploitation to reduce the number of
potentially costly objective function evaluations. We demonstrate the strategy
in the complex setting of sampling from constrained, discrete and densely
connected probabilistic graphical models where, for each variation of the
problem, one needs to adjust the parameters of the proposal mechanism
automatically to ensure efficient mixing of the Markov chains.
Comment: This paper contains 12 pages and 6 figures. A similar version of this paper has been submitted to AISTATS 2012 and is currently under review.
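The tuning problem the abstract targets, adjusting proposal parameters automatically for good mixing, can be illustrated with a much simpler stand-in than Bayesian optimization: a stochastic-approximation rule that drives the step size toward a target acceptance rate. This sketch shows the tuning problem, not the paper's Bayesian-optimization solution; the target rate and update rule are illustrative.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter, target_accept=0.44, rng=None):
    """Metropolis with stochastic-approximation step-size adaptation
    toward a target acceptance rate -- a simple stand-in for the
    Bayesian-optimization tuner the paper proposes (sketch only)."""
    rng = np.random.default_rng(rng)
    x = float(x0)
    lp = log_post(x)
    log_step = 0.0
    chain = []
    for i in range(1, n_iter + 1):
        y = x + np.exp(log_step) * rng.standard_normal()
        lp_y = log_post(y)
        accept = np.log(rng.random()) < lp_y - lp
        if accept:
            x, lp = y, lp_y
        # Grow the step after accepts, shrink it after rejects, with a
        # decaying gain so the adaptation settles down.
        log_step += (float(accept) - target_accept) / np.sqrt(i)
        chain.append(x)
    return np.array(chain), float(np.exp(log_step))

chain, step = adaptive_metropolis(lambda x: -0.5 * x * x, 0.0, 5000)
```

Bayesian optimization replaces this local update with a global, sample-efficient search over proposal parameters, which matters when each objective evaluation (a pilot MCMC run) is expensive.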
Informed Proposal Monte Carlo
Any search or sampling algorithm for solution of inverse problems needs
guidance to be efficient. Many algorithms collect and apply information about
the problem on the fly, and much improvement has been made in this way.
However, as a consequence of the No-Free-Lunch Theorem, the only way we can
ensure a significantly better performance of search and sampling algorithms is
to build in as much information about the problem as possible. In the special
case of Markov Chain Monte Carlo sampling (MCMC) we review how this is done
through the choice of proposal distribution, and we show how this way of adding
more information about the problem can be made particularly efficient when
based on an approximate physics model of the problem. A highly nonlinear
inverse scattering problem with a high-dimensional model space serves as an
illustration of the gain of efficiency through this approach.
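One standard way to build problem information into an MCMC proposal, in the spirit this abstract describes, is an independence Metropolis-Hastings sampler whose proposal comes from an approximate model. The sketch below uses a generic approximate density in place of the paper's physics-based proposal; all names and the toy target are illustrative.

```python
import numpy as np

def independence_metropolis(log_target, log_prop, sample_prop, n_iter, rng=None):
    """Metropolis-Hastings with an independent 'informed' proposal.
    Sketch only: log_prop/sample_prop stand in for an approximate
    model of the problem, not the paper's physics-based proposal."""
    rng = np.random.default_rng(rng)
    x = sample_prop(rng)
    # Log importance weight of the current state, log[p(x)/q(x)].
    log_w = log_target(x) - log_prop(x)
    chain = []
    for _ in range(n_iter):
        y = sample_prop(rng)
        log_w_y = log_target(y) - log_prop(y)
        # Accept with probability min(1, w(y)/w(x)).
        if np.log(rng.random()) < log_w_y - log_w:
            x, log_w = y, log_w_y
        chain.append(x)
    return np.array(chain)

# Toy target N(1, 1) with an approximate proposal N(0, 2^2): the closer
# the proposal matches the target, the higher the acceptance rate.
lt = lambda x: -0.5 * (x - 1.0) ** 2
lq = lambda x: -0.5 * (x / 2.0) ** 2
chain = independence_metropolis(lt, lq, lambda r: 2.0 * r.standard_normal(), 5000)
```

The acceptance ratio reduces to a ratio of importance weights, so the sampler mixes quickly exactly when the approximate model captures the true posterior well, which is the efficiency gain the abstract reports.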