A qualitative difference between gradient flows of convex functions in finite- and infinite-dimensional Hilbert spaces
We consider gradient flow/gradient descent and heavy ball/accelerated
gradient descent optimization for convex objective functions. In the gradient
flow case, we prove the following:
1. If $f$ does not have a minimizer, the convergence $f(x_t) \to \inf f$ can
be arbitrarily slow.
2. If $f$ does have a minimizer, the excess energy $f(x_t) - \inf f$ is
integrable/summable in time. In particular, $f(x_t) - \inf f = o(1/t)$ as
$t \to \infty$ (see the sketch after this list).
3. In Hilbert spaces, this is optimal: $f(x_t) - \inf f$ can decay to $0$ as
slowly as any given function which is monotone decreasing and integrable at
$\infty$, even for a fixed quadratic objective.
4. In finite dimension (or more generally, for all gradient flow curves of
finite length), this is not optimal: We prove that there are convex monotone
decreasing integrable functions which decrease to zero slower than
$f(x_t) - \inf f$ for the gradient flow of any convex function on $\mathbb{R}^d$.
For instance, we show that any gradient flow of a convex function in
finite dimension satisfies .
This improves on the commonly reported $O(1/t)$ rate and provides a sharp
characterization of the energy decay law. We also note that it is impossible to
establish a rate for any function which satisfies
, even asymptotically.
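As a purely illustrative numerical companion to the $o(1/t)$ statement, one can run (discrete-time) gradient descent on a fixed convex quadratic and monitor $k \cdot (f(x_k) - \inf f)$; the objective, dimension, and step size below are arbitrary choices for this sketch and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's construction):
# gradient descent on a random convex quadratic f(x) = 0.5 * x^T A x,
# so inf f = 0 is attained at x = 0.
rng = np.random.default_rng(0)
d = 50
M = rng.standard_normal((d, d))
A = M.T @ M / d                      # symmetric positive semi-definite
L = np.linalg.eigvalsh(A).max()      # smoothness constant of f
step = 1.0 / L                       # standard step size for an L-smooth objective


def f(x):
    return 0.5 * x @ A @ x


x = rng.standard_normal(d)
for k in range(1, 100_001):
    x = x - step * (A @ x)           # gradient step: grad f(x) = A x
    if k % 20_000 == 0:
        # if f(x_k) - inf f = o(1/k), then k * f(x_k) should tend to 0
        print(f"k={k:6d}  f(x_k)={f(x):.3e}  k*f(x_k)={k * f(x):.3e}")
```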
Similar results are obtained in related settings for (1) discrete-time
gradient descent, (2) stochastic gradient descent with multiplicative noise, and
(3) the heavy ball ODE. In the case of stochastic gradient descent, the
summability of $\mathbb{E}[f(x_n) - \inf f]$ is used to prove that
$f(x_n) \to \inf f$ almost surely, an improvement on the almost sure convergence
along a subsequence which follows from the decay estimate for
$\mathbb{E}[f(x_n) - \inf f]$.
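For the stochastic setting, the following is a minimal sketch of one common reading of multiplicative noise (the noise model, objective, and constant step size are assumptions made for illustration and need not match the paper's precise setting): the stochastic gradient is the true gradient scaled by a bounded, mean-one random factor, so the noise vanishes at the minimizer and $f(x_n)$ can still converge almost surely.

```python
import numpy as np

# Illustrative sketch of SGD with multiplicative noise (assumed model):
# the stochastic gradient is g_n = (1 + xi_n) * grad f(x_n) with xi_n i.i.d.,
# mean zero and bounded, so the noise scales with the gradient and vanishes
# at the minimizer of the convex quadratic f(x) = 0.5 * x^T A x.
rng = np.random.default_rng(1)
d = 20
M = rng.standard_normal((d, d))
A = M.T @ M / d
L = np.linalg.eigvalsh(A).max()
step = 0.5 / L                       # conservative constant step size


def f(x):
    return 0.5 * x @ A @ x           # inf f = 0


x = rng.standard_normal(d)
for n in range(1, 50_001):
    xi = rng.uniform(-0.5, 0.5)      # bounded, mean-zero perturbation
    g = (1.0 + xi) * (A @ x)         # unbiased stochastic gradient
    x = x - step * g
    if n % 10_000 == 0:
        print(f"n={n:6d}  f(x_n)={f(x):.3e}")
```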