Analysis of Langevin Monte Carlo via convex optimization
In this paper, we provide new insights into the Unadjusted Langevin Algorithm.
We show that this method can be formulated as a first-order optimization
algorithm for an objective functional defined on the Wasserstein space of order
2. Using this interpretation and techniques borrowed from convex
optimization, we give a non-asymptotic analysis of this method to sample from
a logconcave smooth target distribution on ℝ^d. Based on this
interpretation, we propose two new methods for sampling from a non-smooth
target distribution, which we analyze as well. Moreover, these new algorithms
are natural extensions of the Stochastic Gradient Langevin Dynamics (SGLD)
algorithm, which is a popular extension of the Unadjusted Langevin Algorithm.
Similar to SGLD, they only rely on approximations of the gradient of the target
log density and can be used for large-scale Bayesian inference.
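As a rough illustration of the kind of update this abstract builds on, here is a minimal Python sketch of the plain Unadjusted Langevin Algorithm; the function name and interface are our own, and the paper's new non-smooth variants and SGLD-style extensions are not reproduced here. The target is π(x) ∝ exp(-U(x)) and the update is x_{k+1} = x_k − γ∇U(x_k) + √(2γ) ξ_k with ξ_k ~ N(0, I).

```python
import numpy as np

def ula(grad_U, x0, step, n_iters, rng=None):
    """Unadjusted Langevin Algorithm targeting pi(x) ~ exp(-U(x)):
    x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * N(0, I).
    An SGLD-style variant would pass a minibatch estimate of grad_U instead."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    draws = np.empty((n_iters, x.size))
    for k in range(n_iters):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        draws[k] = x
    return draws

# Sanity check on a standard Gaussian target: U(x) = ||x||^2 / 2, grad_U(x) = x.
draws = ula(lambda x: x, x0=np.zeros(2), step=0.05, n_iters=20000)
print(draws.mean(axis=0), draws.var(axis=0))  # mean ~ 0; variance ~ 1/(1 - step/2)
```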
Implicit Langevin Algorithms for Sampling From Log-concave Densities
For sampling from a log-concave density, we study implicit integrators
resulting from the θ-method discretization of the overdamped Langevin
diffusion stochastic differential equation. Theoretical and algorithmic
properties of the resulting sampling methods for θ ∈ [0, 1] and a
range of step sizes are established. Our results generalize and extend prior
works in several directions. In particular, for θ ≥ 1/2, we prove
geometric ergodicity and stability of the resulting methods for all step sizes.
We show that obtaining subsequent samples amounts to solving a strongly-convex
optimization problem, which is readily achievable using one of numerous
existing methods. Numerical examples supporting our theoretical analysis are
also presented.
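To make the implicit step concrete, here is a hedged Python sketch of one θ-method update for the generic overdamped Langevin SDE dX = −∇U(X) dt + √2 dW; the helper theta_langevin_step and its interface are hypothetical, not the paper's code. The implicit part is handled exactly as the abstract describes: the next sample is the minimizer of the strongly convex F(z) = ½‖z − v‖² + hθ U(z), where v collects the explicit drift and the Gaussian noise.

```python
import numpy as np
from scipy.optimize import minimize

def theta_langevin_step(U, grad_U, x, h, theta, rng):
    """One theta-method step for dX = -grad_U(X) dt + sqrt(2) dW:
    x_new = x - h*((1 - theta)*grad_U(x) + theta*grad_U(x_new)) + sqrt(2h)*xi.
    For theta > 0, x_new is the minimizer of the strongly convex
    F(z) = 0.5*||z - v||^2 + h*theta*U(z)."""
    v = x - h * (1.0 - theta) * grad_U(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.size)
    if theta == 0.0:
        return v  # explicit Euler-Maruyama, i.e. the Unadjusted Langevin Algorithm
    F = lambda z: 0.5 * np.sum((z - v) ** 2) + h * theta * U(z)
    dF = lambda z: (z - v) + h * theta * grad_U(z)
    return minimize(F, v, jac=dF, method="L-BFGS-B").x

# Fully implicit (theta = 1) chain on a standard Gaussian target U(x) = ||x||^2 / 2.
rng = np.random.default_rng(0)
x, draws = np.zeros(2), np.empty((20000, 2))
for k in range(20000):
    x = theta_langevin_step(lambda z: 0.5 * z @ z, lambda z: z, x, h=0.5, theta=1.0, rng=rng)
    draws[k] = x
print(draws.mean(axis=0), draws.var(axis=0))  # mean ~ 0; variance ~ 1/(1 + h/2) here
```

Note that the inner problem stays well conditioned for any step size h when U is convex, since the quadratic term makes F at least 1-strongly convex, which is what allows large steps in the implicit regime.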