Analysis of Langevin Monte Carlo via convex optimization
In this paper, we provide new insights on the Unadjusted Langevin Algorithm.
We show that this method can be formulated as a first-order optimization
algorithm of an objective functional defined on the Wasserstein space of order
$2$. Using this interpretation and techniques borrowed from convex
optimization, we give a non-asymptotic analysis of this method to sample from
a log-concave smooth target distribution on $\mathbb{R}^d$. Based on this
interpretation, we propose two new methods for sampling from a non-smooth
target distribution, which we analyze as well. Moreover, these new algorithms
are natural extensions of the Stochastic Gradient Langevin Dynamics (SGLD)
algorithm, a popular extension of the Unadjusted Langevin Algorithm.
Like SGLD, they rely only on approximations of the gradient of the target
log density and can be used for large-scale Bayesian inference.
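To make the sampling scheme concrete, here is a minimal sketch of the standard Unadjusted Langevin Algorithm iteration $x_{k+1} = x_k - \gamma \nabla U(x_k) + \sqrt{2\gamma}\,\xi_k$ that the abstract refers to; the function names and the Gaussian test target are illustrative assumptions, not the paper's code:

```python
import numpy as np

def ula(grad_U, x0, step, n_iters, rng=None):
    """Unadjusted Langevin Algorithm.

    Iterates x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2*step) * xi_k,
    where xi_k is standard Gaussian noise. grad_U is the gradient of the
    negative log density of the target; SGLD would replace it with a
    stochastic (mini-batch) estimate.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_U(x) + np.sqrt(2 * step) * noise
        samples.append(x.copy())
    return np.array(samples)

# Toy target: standard Gaussian, U(x) = ||x||^2 / 2, so grad_U(x) = x.
samples = ula(lambda x: x, x0=np.zeros(2), step=0.05, n_iters=5000)
```

For a fixed step size the chain is biased (no Metropolis correction), which is exactly the non-asymptotic error the paper's convex-optimization analysis quantifies.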