Convergence Analysis of Accelerated Stochastic Gradient Descent under the Growth Condition
We study the convergence of accelerated stochastic gradient descent for
strongly convex objectives under the growth condition, which states that the
variance of the stochastic gradient is bounded by the sum of a multiplicative
part that grows with the full gradient and a constant additive part. Through
the lens of the
growth condition, we investigate four widely used accelerated methods:
Nesterov's accelerated method (NAM), robust momentum method (RMM), accelerated
dual averaging method (ADAM), and implicit ADAM (iADAM). While these methods
are known to improve the convergence rate of SGD under the condition that the
stochastic gradient has bounded variance, it is not well understood how their
convergence rates are affected by the multiplicative noise. In this paper, we
show that these methods all converge to a neighborhood of the optimum with
accelerated convergence rates (compared to SGD) even under the growth
condition. In particular, NAM, RMM, and iADAM enjoy acceleration only under
mild multiplicative noise, while ADAM enjoys acceleration even under large
multiplicative noise. Furthermore, we propose a generic tail-averaged scheme
that allows the accelerated rates of ADAM and iADAM to nearly attain the
theoretical lower bound (up to a logarithmic factor in the variance term).
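For reference, the growth condition described above is commonly formalized as
the bound below; this is a standard rendering with assumed symbol names (rho
for the multiplicative constant, sigma^2 for the additive one), not notation
taken from the paper itself.

```latex
% Growth condition on the stochastic gradient g(x, \xi) of an objective f:
% the variance is bounded by a multiplicative term in the full gradient plus
% a constant additive term (\rho and \sigma^2 are assumed symbol names).
\mathbb{E}_{\xi}\!\left[ \| g(x,\xi) - \nabla f(x) \|^2 \right]
  \;\le\; \rho \,\| \nabla f(x) \|^2 + \sigma^2
```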
From Optimization to Control: Quasi Policy Iteration
Recent control algorithms for Markov decision processes (MDPs) have been
designed using an implicit analogy with well-established optimization
algorithms. In this paper, we make this analogy explicit across four problem
classes with a unified solution characterization. This novel framework, in
turn, allows for a systematic transformation of algorithms from one domain to
the other. In particular, we identify equivalences between optimization and
control algorithms that have already been pointed out in the existing
literature, albeit in a scattered way. With this unifying framework in mind,
we then
exploit two linear structural constraints specific to MDPs for approximating
the Hessian in a second-order-type algorithm from optimization, namely,
Anderson mixing. This leads to a novel first-order control algorithm that
modifies the standard value iteration (VI) algorithm by incorporating two new
directions and adaptive step sizes. While the proposed algorithm, coined
quasi-policy iteration, has the same computational complexity as VI, it
interestingly exhibits empirical convergence behavior similar to policy
iteration, with very low sensitivity to the discount factor.
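To make the construction concrete, the sketch below applies a depth-1
Anderson-mixing step on top of standard value iteration. It is a minimal
illustration of the general idea, assuming a tabular MDP; the function name,
the depth-1 mixing rule, and the toy example are assumptions, not the paper's
exact quasi-policy iteration.

```python
import numpy as np

def anderson_value_iteration(P, R, gamma, iters=500, tol=1e-8):
    """Value iteration with a depth-1 Anderson-mixing extrapolation.

    Illustrative sketch only, not the paper's quasi-policy iteration.
    P: transition probabilities, shape (A, S, S); R: rewards, shape (A, S).
    """
    _, S, _ = P.shape
    v = np.zeros(S)
    v_prev = Tv_prev = None
    for _ in range(iters):
        # Bellman optimality operator applied to the current value estimate.
        Tv = np.max(R + gamma * (P @ v), axis=0)
        if v_prev is None:
            v_new = Tv
        else:
            # Residuals of the fixed-point map at the last two iterates.
            r, r_prev = Tv - v, Tv_prev - v_prev
            dr = r - r_prev
            # Adaptive mixing weight from a least-squares fit of residuals;
            # this plays the role of an adaptive step size.
            alpha = float(dr @ r) / float(dr @ dr) if dr @ dr > 1e-12 else 0.0
            v_new = (1.0 - alpha) * Tv + alpha * Tv_prev
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v_prev, Tv_prev = v, Tv
        v = v_new
    return v

# Tiny 2-state, 2-action example with made-up numbers:
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.3, 0.7]]])
R = np.array([[1.0, 0.0],
              [0.5, 0.8]])
v_star = anderson_value_iteration(P, R, gamma=0.95)
```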
- …