Implicit Langevin Algorithms for Sampling From Log-concave Densities
For sampling from a log-concave density, we study implicit integrators
resulting from θ-method discretization of the overdamped Langevin
diffusion stochastic differential equation. Theoretical and algorithmic
properties of the resulting sampling methods for a range of θ values and a
range of step sizes are established. Our results generalize and extend prior
works in several directions. In particular, for θ ≥ 1/2, we prove
geometric ergodicity and stability of the resulting methods for all step sizes.
We show that obtaining subsequent samples amounts to solving a strongly convex
optimization problem, which is readily achievable using one of numerous
existing methods. Numerical examples supporting our theoretical analysis are
also presented.
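As a concrete illustration, a θ-method step and its inner strongly convex solve can be sketched as follows. This is a hypothetical 1-D sketch, not the paper's implementation: the Gaussian target, the fixed-point solver, and the parameter values are illustrative assumptions.

```python
import numpy as np

def theta_step(x, grad_f, h, theta, rng, iters=40):
    # theta-method discretization of dX = -grad f(X) dt + sqrt(2) dW:
    #   x' = x - h*(theta*grad_f(x') + (1-theta)*grad_f(x)) + sqrt(2h)*xi
    # b collects the explicit (known) terms of the update.
    b = x - h * (1.0 - theta) * grad_f(x) + np.sqrt(2.0 * h) * rng.standard_normal()
    # The implicit part solves x' + theta*h*grad_f(x') = b, i.e. x' minimizes
    # the strongly convex g(y) = theta*h*f(y) + 0.5*(y - b)^2.  A plain
    # fixed-point iteration suffices here because theta*h*L < 1 in this demo.
    y = b
    for _ in range(iters):
        y = b - theta * h * grad_f(y)
    return y

# Demo: standard Gaussian target, f(x) = x^2/2, so grad_f(x) = x.
rng = np.random.default_rng(0)
grad_f = lambda x: x
h, theta = 0.5, 0.5
x, chain = 0.0, []
for k in range(60000):
    x = theta_step(x, grad_f, h, theta, rng)
    if k >= 5000:  # discard burn-in
        chain.append(x)
var_est = float(np.var(chain))
```

For this quadratic potential the θ = 1/2 (trapezoidal) scheme preserves the unit stationary variance exactly, so the long-run variance estimate should land close to 1.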
Pathwise Accuracy and Ergodicity of Metropolized Integrators for SDEs
Metropolized integrators for ergodic stochastic differential equations (SDE)
are proposed which (i) are ergodic with respect to the (known) equilibrium
distribution of the SDE and (ii) approximate pathwise the solutions of the SDE
on finite time intervals. Both these properties are demonstrated in the paper
and precise strong error estimates are obtained. It is also shown that the
Metropolized integrator retains these properties even in situations where the
drift in the SDE is non-globally Lipschitz, and vanilla explicit integrators for
SDEs typically become unstable and fail to be ergodic.
Kinetic energy choice in Hamiltonian/hybrid Monte Carlo
We consider how different choices of kinetic energy in Hamiltonian Monte
Carlo affect algorithm performance. To this end, we introduce two quantities
which can be easily evaluated, the composite gradient and the implicit noise.
Results are established on integrator stability and geometric convergence, and
we show that choices of kinetic energy that result in heavy-tailed momentum
distributions can exhibit an undesirable negligible moves property, which we
define. A general efficiency-robustness trade-off is outlined, and
implementations which rely on approximate gradients are also discussed. Two
numerical studies illustrate our theoretical findings, showing that the
standard choice which results in a Gaussian momentum distribution is not always
optimal in terms of either robustness or efficiency.
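The role of the kinetic energy can be made concrete with a leapfrog HMC sketch in which the kinetic energy K, its gradient, and the momentum sampler are pluggable. This is a hypothetical sketch: the paper's composite gradient and implicit noise quantities are not computed here, and the Gaussian-kinetic demo is only the standard baseline choice the abstract discusses.

```python
import numpy as np

def hmc_step(q, grad_U, U, K, grad_K, sample_p, eps, L, rng):
    """One HMC transition with a user-chosen kinetic energy K(p).
    The leapfrog position update uses grad_K, so a non-Gaussian momentum
    distribution plugs in by swapping K, grad_K, and sample_p together."""
    p = sample_p(rng)            # fresh momentum from exp(-K)
    q_new, p_new = q, p
    p_new = p_new - 0.5 * eps * grad_U(q_new)   # initial half step
    for i in range(L):
        q_new = q_new + eps * grad_K(p_new)     # position full step
        if i < L - 1:
            p_new = p_new - eps * grad_U(q_new) # momentum full step
    p_new = p_new - 0.5 * eps * grad_U(q_new)   # final half step
    # Metropolis correction on the total energy H = U + K
    log_a = (U(q) + K(p)) - (U(q_new) + K(p_new))
    return q_new if np.log(rng.random()) < log_a else q

# Demo: standard normal target with the standard Gaussian kinetic energy.
rng = np.random.default_rng(2)
q, chain = 0.0, []
for _ in range(20000):
    q = hmc_step(q, grad_U=lambda x: x, U=lambda x: 0.5 * x * x,
                 K=lambda p: 0.5 * p * p, grad_K=lambda p: p,
                 sample_p=lambda r: r.standard_normal(),
                 eps=0.8, L=10, rng=rng)
    chain.append(q)
mean_est = float(np.mean(chain))
var_est = float(np.var(chain))
```

A heavier-tailed momentum would replace K, grad_K, and sample_p consistently (all three must come from the same exp(-K)); mismatching them breaks detailed balance.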
Generalizing Informed Sampling for Asymptotically Optimal Sampling-based Kinodynamic Planning via Markov Chain Monte Carlo
Asymptotically-optimal motion planners such as RRT* have been shown to
incrementally approximate the shortest path between start and goal states. Once
an initial solution is found, their performance can be dramatically improved by
restricting subsequent samples to regions of the state space that can
potentially improve the current solution. When the motion planning problem lies
in a Euclidean space, this region, called the informed set, can be
sampled directly. However, when planning with differential constraints in
non-Euclidean state spaces, no analytic solution exists for sampling from it
directly.
State-of-the-art approaches to sampling in such domains, such as
Hierarchical Rejection Sampling (HRS), may still be slow in high-dimensional
state spaces. This may cause the planning algorithm to spend most of its time
trying to produce samples in the informed set rather than exploring it. In this
paper, we suggest an alternative approach to producing samples in the informed
set for a wide range of settings. Our main insight is to recast this
problem as one of sampling uniformly within the sub-level set of an implicit
non-convex function. This recasting enables us to apply Monte Carlo sampling
methods, used very effectively in the Machine Learning and Optimization
communities, to solve our problem. We show, for a wide range of scenarios, that
using our sampler can accelerate the convergence rate to high-quality solutions
in high-dimensional problems.
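The recasting described above, uniform sampling within the sub-level set of a non-convex function, can be sketched with a random-walk Metropolis chain whose target is the indicator of the set: with a symmetric proposal, accepting exactly the moves that stay inside the set leaves the uniform distribution on the set invariant. The annulus-shaped set and the tuning constants below are illustrative assumptions, not the paper's planning cost function.

```python
import numpy as np

def sublevel_sampler(c, c_star, x0, step, n, rng):
    """Random-walk Metropolis targeting Uniform({x : c(x) <= c_star}).
    Symmetric Gaussian proposal + indicator target means the acceptance
    rule reduces to a simple membership test."""
    x = np.asarray(x0, dtype=float)
    assert c(x) <= c_star, "chain must start inside the set"
    out = np.empty((n,) + x.shape)
    for i in range(n):
        y = x + step * rng.standard_normal(x.shape)
        if c(y) <= c_star:   # accept; otherwise stay put (rejection)
            x = y
        out[i] = x
    return out

# Demo: a non-convex sub-level set, the annulus {x : (||x||^2 - 1)^2 <= 0.09}.
rng = np.random.default_rng(3)
c = lambda x: (np.sum(x * x) - 1.0) ** 2
samples = sublevel_sampler(c, c_star=0.09, x0=[1.0, 0.0],
                           step=0.4, n=40000, rng=rng)
inside = all(c(s) <= 0.09 for s in samples)
```

Every retained state satisfies the constraint by construction, and the chain works its way around the full ring even though the set is non-convex, which is the property that makes such samplers attractive for informed sets without an analytic parameterization.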