First-order methods with momentum, such as Nesterov's fast gradient method, are
very useful for convex optimization problems, but they can exhibit undesirable
oscillations that yield slow convergence for some applications.
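To fix ideas, one standard form of the fast gradient method is sketched below in Python/NumPy; the names grad_f, L, and x0 are illustrative inputs, not notation from this paper.

```python
import numpy as np

def fgm(grad_f, L, x0, n_iter=100):
    """One standard form of Nesterov's fast gradient method
    for an L-smooth convex cost (illustrative sketch).

    grad_f : callable returning the gradient of the cost
    L      : Lipschitz constant of the gradient (assumed known)
    x0     : starting point (NumPy array)
    """
    x, y, theta = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        y_next = x - grad_f(x) / L                        # gradient step
        theta_next = (1 + np.sqrt(1 + 4 * theta**2)) / 2  # momentum parameter
        x = y_next + (theta - 1) / theta_next * (y_next - y)  # extrapolation
        y, theta = y_next, theta_next
    return y
```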
An adaptive
restarting scheme can improve the convergence rate of the fast gradient method
when the strong convexity parameter of the cost function is unknown, or when the
iterates of the algorithm enter a locally strongly convex region.
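A minimal sketch of one such scheme, a function-value restart in the style of O'Donoghue and Candès, is given below under the same illustrative assumptions: the momentum parameter is reset whenever the cost increases.

```python
import numpy as np

def fgm_restart(f, grad_f, L, x0, n_iter=100):
    """Fast gradient method with function-value adaptive restart
    (illustrative sketch; f, grad_f, L, x0 are assumed inputs).

    Resets the momentum parameter theta to 1 whenever the cost
    f(y) increases, which empirically suppresses oscillations.
    """
    x, y, theta = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        y_next = x - grad_f(x) / L                 # gradient step
        if f(y_next) > f(y):                       # restart test on cost
            theta = 1.0                            # reset momentum
        theta_next = (1 + np.sqrt(1 + 4 * theta**2)) / 2
        x = y_next + (theta - 1) / theta_next * (y_next - y)
        y, theta = y_next, theta_next
    return y
```

With theta reset to 1, the momentum coefficient (theta - 1)/theta_next vanishes, so the next iterate reduces to a plain gradient step from the current point.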
Recently, we
introduced the optimized gradient method, a first-order algorithm that has an
inexpensive per-iteration computational cost similar to that of the fast
gradient method, yet has a worst-case cost-function bound that is a factor of
two better than that of the fast gradient method and that is optimal for
large-dimensional smooth convex problems.
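The optimized gradient method keeps the structure of the fast gradient method and adds one extra momentum term; a minimal sketch under the same illustrative assumptions:

```python
import numpy as np

def ogm(grad_f, L, x0, n_iter=100):
    """Optimized gradient method for an L-smooth convex cost
    (illustrative sketch; grad_f, L, x0 are assumed inputs).

    Same per-iteration cost as the fast gradient method, plus an
    additional momentum term (theta / theta_next) * (y_next - x).
    """
    x, y, theta = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        y_next = x - grad_f(x) / L
        # (the worst-case-optimal variant uses a modified theta update
        # at the final iteration, omitted here for brevity)
        theta_next = (1 + np.sqrt(1 + 4 * theta**2)) / 2
        x = (y_next
             + (theta - 1) / theta_next * (y_next - y)   # FGM momentum
             + theta / theta_next * (y_next - x))        # extra OGM momentum
        y, theta = y_next, theta_next
    return y
```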
Building upon the success of
accelerating the fast gradient method with adaptive restart, this paper
investigates a similar heuristic acceleration of the optimized gradient method.
We first derive a new first-order method that resembles the optimized gradient
method for strongly convex quadratic problems with known function parameters,
yielding a linear convergence rate that is faster than that of the analogous
version of the fast gradient method.
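For reference, the analogous strongly convex version of the fast gradient method uses a constant momentum coefficient built from the known parameters mu and L; the sketch below shows only that baseline, since the coefficients of the new method are derived in the body of the paper.

```python
import numpy as np

def fgm_strongly_convex(grad_f, L, mu, x0, n_iter=100):
    """Fast gradient method with constant momentum for a mu-strongly
    convex, L-smooth cost with known parameters (illustrative sketch
    of the baseline; the paper's new OGM-like method keeps this
    template but uses different coefficients for a faster rate)."""
    q = mu / L                                     # inverse condition number
    beta = (1 - np.sqrt(q)) / (1 + np.sqrt(q))     # constant momentum factor
    x, y = x0.copy(), x0.copy()
    for _ in range(n_iter):
        y_next = x - grad_f(x) / L                 # gradient step
        x = y_next + beta * (y_next - y)           # constant momentum step
        y = y_next
    return y
```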
We then provide a heuristic analysis and numerical experiments
illustrating that adaptive restart can accelerate the convergence of the
optimized gradient method. Numerical results also illustrate that adaptive
restart is helpful for a proximal version of the optimized gradient method for
nonsmooth composite convex functions.
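For the composite setting min_x f(x) + g(x), with f smooth and g having an inexpensive proximal operator, the sketch below combines a FISTA-style proximal gradient step with the same function-value restart; it is a simplified stand-in for the proximal optimized gradient method, whose exact iteration is given in the paper.

```python
import numpy as np

def prox_fgm_restart(F, grad_f, prox_g, L, x0, n_iter=100):
    """FISTA-style proximal gradient method with function-value
    adaptive restart for min F(x) = f(x) + g(x) (illustrative
    stand-in for the proximal optimized gradient method).

    F      : callable for the full composite cost (restart test)
    grad_f : gradient of the L-smooth part f
    prox_g : prox_g(v, t) = argmin_u g(u) + ||u - v||**2 / (2 * t)
    """
    x, y, theta = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        y_next = prox_g(x - grad_f(x) / L, 1 / L)  # forward-backward step
        if F(y_next) > F(y):                       # restart on cost increase
            theta = 1.0                            # reset momentum
        theta_next = (1 + np.sqrt(1 + 4 * theta**2)) / 2
        x = y_next + (theta - 1) / theta_next * (y_next - y)
        y, theta = y_next, theta_next
    return y
```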