Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
Due to the high communication cost in distributed and federated learning
problems, methods relying on compression of communicated messages are becoming
increasingly popular. While in other contexts the best performing gradient-type
methods invariably rely on some form of acceleration/momentum to reduce the
number of iterations, there are no methods which combine the benefits of both
gradient compression and acceleration. In this paper, we remedy this situation
and propose the first accelerated compressed gradient descent (ACGD) methods.
In the single machine regime, we prove that ACGD enjoys the rate
$O\big((1+\omega)\sqrt{\frac{L}{\mu}}\log\frac{1}{\epsilon}\big)$ for
$\mu$-strongly convex problems and $O\big((1+\omega)\sqrt{\frac{L}{\epsilon}}\big)$
for convex problems, respectively, where $L$ is the smoothness constant and
$\omega$ is the compression parameter. Our results improve
upon the existing non-accelerated rates $O\big((1+\omega)\frac{L}{\mu}\log\frac{1}{\epsilon}\big)$
and $O\big((1+\omega)\frac{L}{\epsilon}\big)$,
respectively, and recover the optimal rates of accelerated gradient descent as
a special case when no compression ($\omega=0$) is applied. We further propose
a distributed variant of ACGD (called ADIANA) and prove the convergence rate
$\widetilde{O}\big(\omega+\sqrt{\frac{L}{\mu}}+\sqrt{\big(\frac{\omega}{n}+\sqrt{\frac{\omega}{n}}\big)\frac{\omega L}{\mu}}\big)$,
where $n$ is the number of devices/workers and $\widetilde{O}$
hides the logarithmic factor $\log\frac{1}{\epsilon}$. This improves upon the
previous best result $\widetilde{O}\big(\omega+\frac{L}{\mu}+\frac{\omega L}{n\mu}\big)$
achieved by the DIANA method of Mishchenko et al. (2019).
Finally, we conduct several experiments on real-world datasets which
corroborate our theoretical results and confirm the practical superiority of
our accelerated methods.

Comment: ICML 2020
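The compression setup the abstract refers to can be illustrated with a standard unbiased random-sparsification compressor, whose variance parameter plays the role of the compression parameter ω (for random-k sparsification, ω = d/k − 1). The sketch below is illustrative only: the helper names and the toy quadratic are assumptions, and it implements plain (non-accelerated) compressed gradient descent, not the paper's ACGD, which adds Nesterov-style momentum on top of such compressed steps.

```python
import numpy as np

def rand_k_compress(x, k, rng):
    """Unbiased random-k sparsification (hypothetical helper, not from the paper).

    Keeps k of the d coordinates uniformly at random and rescales them by d/k,
    so that E[C(x)] = x and E||C(x) - x||^2 <= omega * ||x||^2 with omega = d/k - 1.
    """
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

def compressed_gd(grad, x0, lr, k, steps, seed=0):
    """Plain gradient descent where each gradient is compressed before use,
    mimicking a worker sending a compressed message to the server."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(steps):
        x -= lr * rand_k_compress(grad(x), k, rng)
    return x

# Example: minimize f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x_final = compressed_gd(lambda x: x, np.ones(10), lr=0.1, k=5, steps=500)
```

Because the compressor is unbiased, the compressed step still converges in expectation; the variance it injects is exactly what inflates the non-accelerated rate by the (1 + ω) factor that ACGD improves to appear under a square root.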