Continuous-time Lower Bounds for Gradient-based Algorithms
This article derives lower bounds on the convergence rate of continuous-time
gradient-based optimization algorithms. The algorithms are subject to a
time-normalization constraint that rules out reparametrizations of time, which
makes the discussion of continuous-time convergence rates meaningful. We
reduce the multi-dimensional problem to a single dimension, recover well-known
lower bounds from the discrete-time setting, and provide insight into why these
lower bounds occur. We present algorithms that achieve the proposed lower
bounds, even when the function class under consideration includes certain
nonconvex functions.

Comment: 13 pages