Adaptive Gradient Methods for Constrained Convex Optimization and Variational Inequalities
We provide new adaptive first-order methods for constrained convex
optimization. Our main algorithms, AdaACSA and AdaAGD+, are accelerated methods
that are universal in the sense that they achieve nearly optimal convergence
rates for both smooth and non-smooth functions, even when they only have access
to stochastic gradients. In addition, they do not require any prior knowledge
of how the objective function is parametrized, since they automatically adjust
their per-coordinate learning rates. These can be seen as truly accelerated
Adagrad methods for constrained optimization.
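
To illustrate the per-coordinate adaptivity described above, the following is a minimal Python sketch of a generic projected AdaGrad-style update. It is not the paper's AdaACSA, AdaAGD+, or AdaGrad+ methods; the objective, the step-size parameter eta, and the box constraint in the example are purely illustrative assumptions.

import numpy as np

def projected_adagrad(grad, project, x0, steps=1000, eta=0.1, eps=1e-8):
    # Generic AdaGrad-style sketch (NOT the paper's AdaACSA/AdaAGD+/AdaGrad+):
    # per-coordinate step sizes built from accumulated squared gradients,
    # followed by a Euclidean projection back onto the feasible set.
    x = x0.astype(float).copy()
    g_sq = np.zeros_like(x)                    # running sum of squared gradients
    for _ in range(steps):
        g = grad(x)                            # (possibly stochastic) gradient
        g_sq += g * g                          # per-coordinate accumulation
        x -= eta * g / (np.sqrt(g_sq) + eps)   # per-coordinate learning rates
        x = project(x)                         # stay inside the constraint set
    return x

# Illustrative use: minimize ||x - c||^2 over the box [0, 1]^3.
c = np.array([1.5, -0.3, 0.7])
x_star = projected_adagrad(
    grad=lambda x: 2.0 * (x - c),
    project=lambda x: np.clip(x, 0.0, 1.0),
    x0=np.zeros(3),
)
print(x_star)  # approximately [1.0, 0.0, 0.7]
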
We complement them with a simpler algorithm, AdaGrad+, which enjoys the same
features and achieves the standard non-accelerated convergence rate. We also
present a set of new results involving adaptive methods for unconstrained
optimization and monotone operators.

Comment: Full version of AAAI-21 paper. The current version adds an
experimental evaluation and revises the exposition.