Adai: Separating the Effects of Adaptive Learning Rate and Momentum Inertia
Adaptive Momentum Estimation (Adam), which combines Adaptive Learning Rate
and Momentum, is the most popular stochastic optimizer for accelerating the
training of deep neural networks. However, empirically Adam often generalizes
worse than Stochastic Gradient Descent (SGD). We explain this behavior within a
diffusion-theoretic framework. Specifically, we
disentangle the effects of Adaptive Learning Rate and Momentum of the Adam
dynamics on saddle-point escaping and minima selection. We prove that Adaptive
Learning Rate can escape saddle points efficiently, but cannot select flat
minima as SGD does. In contrast, Momentum provides a drift effect to help the
training process pass through saddle points, but has almost no effect on flat
minima selection. This theoretically explains why SGD (with Momentum)
generalizes better, while Adam generalizes worse but converges faster.
Furthermore, motivated by the analysis, we design a novel adaptive optimization
framework named Adaptive Inertia, which uses parameter-wise adaptive inertia to
accelerate training and provably favors flat minima, just as SGD does. Our
extensive experiments demonstrate that the proposed adaptive inertia method can
generalize significantly better than SGD and conventional adaptive gradient
methods.

Comment: 28 pages, 11 figures, Adam, Adaptive Inertia
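For context, the two ingredients this abstract disentangles are easiest to see side by side in the standard update rules. Below is a minimal NumPy sketch contrasting Adam's per-parameter Adaptive Learning Rate and Momentum with plain SGD momentum; the hyperparameter defaults are the conventional ones, not values from the paper, and the sketch does not implement the proposed Adaptive Inertia (Adai) optimizer itself.

```python
import numpy as np

# Illustrative standard update rules; not the Adai optimizer proposed in the paper.

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step, written to expose its two ingredients:
    Momentum (first moment m) and the Adaptive Learning Rate (second moment v)."""
    m = beta1 * m + (1 - beta1) * grad            # momentum / inertia on the gradient
    v = beta2 * v + (1 - beta2) * grad ** 2       # running average of squared gradients
    m_hat = m / (1 - beta1 ** t)                  # bias corrections (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter adaptive step size
    return theta, m, v

def sgd_momentum_step(theta, grad, m, lr=1e-2, beta=0.9):
    """One SGD-with-momentum step: the same inertia idea, but a single global learning rate."""
    m = beta * m + grad
    theta = theta - lr * m
    return theta, m
```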
Analysis of Langevin Monte Carlo via convex optimization
In this paper, we provide new insights into the Unadjusted Langevin Algorithm.
We show that this method can be formulated as a first-order optimization
algorithm of an objective functional defined on the Wasserstein space of order
2. Using this interpretation and techniques borrowed from convex
optimization, we give a non-asymptotic analysis of this method to sample from
a log-concave smooth target distribution on $\mathbb{R}^d$. Based on this
interpretation, we propose two new methods for sampling from a non-smooth
target distribution, which we analyze as well. Moreover, these new algorithms
are natural extensions of the Stochastic Gradient Langevin Dynamics (SGLD)
algorithm, which is a popular extension of the Unadjusted Langevin Algorithm.
Similar to SGLD, they rely only on approximations of the gradient of the target
log density and can be used for large-scale Bayesian inference.
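To make the connection to large-scale inference concrete, here is a minimal sketch of an SGLD-style update in the spirit described above: a minibatch estimate of the gradient of the target log density plus injected Gaussian noise. The function name `grad_log_density`, its `scale` argument, and the step-size and batch-size defaults are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sgld(grad_log_density, theta0, data, step=1e-4, n_iter=1000, batch_size=64, seed=0):
    """Stochastic Gradient Langevin Dynamics: an Euler discretization of the Langevin
    diffusion whose drift is estimated from minibatches (assumed interface)."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    n = len(data)
    samples = []
    for _ in range(n_iter):
        idx = rng.choice(n, size=min(batch_size, n), replace=False)
        # Unbiased minibatch estimate of grad log pi(theta), rescaled to the full data set
        # (the scale keyword is an assumed convention of the user-supplied gradient).
        g = grad_log_density(theta, data[idx], scale=n / len(idx))
        # Gradient step plus Gaussian noise with variance 2 * step (Langevin discretization).
        theta = theta + step * g + np.sqrt(2.0 * step) * rng.standard_normal(theta.shape)
        samples.append(theta.copy())
    return np.asarray(samples)
```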