Optimal Accelerated Variance Reduced EXTRA and DIGing for Strongly Convex and Smooth Decentralized Optimization

Abstract

We study stochastic decentralized optimization for the problem of training machine learning models with large-scale distributed data. We extend the well-known EXTRA and DIGing methods with accelerated variance reduction (VR), and propose two methods that require $O((\sqrt{n\kappa_s}+n)\log\frac{1}{\epsilon})$ stochastic gradient evaluations and $O(\sqrt{\kappa_b\kappa_c}\log\frac{1}{\epsilon})$ communication rounds to reach precision $\epsilon$, where $\kappa_s$ and $\kappa_b$ are the stochastic and batch condition numbers for strongly convex and smooth problems, $\kappa_c$ is the condition number of the communication network, and $n$ is the sample size on each distributed node. Our stochastic gradient computation complexity matches that of single-machine accelerated VR methods, such as Katyusha, and our communication complexity matches that of accelerated full-batch decentralized methods, such as MSDA; both are optimal. We also propose non-accelerated VR-based EXTRA and DIGing, and provide explicit complexities, for example, the $O((\kappa_s+n)\log\frac{1}{\epsilon})$ stochastic gradient computation complexity and the $O((\kappa_b+\kappa_c)\log\frac{1}{\epsilon})$ communication complexity of the VR-based EXTRA. These two complexities likewise match those of single-machine VR methods, such as SAG, SAGA, and SVRG, and of non-accelerated full-batch decentralized methods, such as EXTRA, respectively.
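To give a concrete sense of how gradient tracking (as in DIGing) can be combined with variance-reduced stochastic gradients, the sketch below pairs a DIGing-style update with an SVRG-type estimator on a toy decentralized least-squares problem over a ring graph. This is only an illustrative sketch under simplifying assumptions, not the accelerated algorithm proposed in the paper; the problem sizes, step size `alpha`, snapshot period, and mixing matrix are hypothetical choices made for the demonstration.

```python
# Illustrative sketch: DIGing-style gradient tracking + SVRG-type variance
# reduction on a toy decentralized least-squares problem (ring network).
# NOT the paper's accelerated method; all constants are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 5, 50, 10          # m nodes, n local samples per node, dimension d

# Node i holds (A[i], b[i]); its local objective is f_i(x) = (1/2n)||A[i]x - b[i]||^2.
A = rng.standard_normal((m, n, d))
b = rng.standard_normal((m, n))

def full_grad(i, x):
    """Exact local gradient of f_i at x (averaged over the n local samples)."""
    return A[i].T @ (A[i] @ x - b[i]) / n

def sample_grad(i, j, x):
    """Gradient of the j-th local sample loss at x."""
    return A[i][j] * (A[i][j] @ x - b[i][j])

# Symmetric doubly stochastic mixing matrix for a ring graph.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 1 / 3
    W[i, (i - 1) % m] = W[i, (i + 1) % m] = 1 / 3

alpha = 0.02                                  # step size (hypothetical)
x = np.zeros((m, d))                          # local iterates, one row per node
snap = x.copy()                               # SVRG snapshot points
mu = np.array([full_grad(i, snap[i]) for i in range(m)])  # snapshot full gradients
g_old = mu.copy()
y = g_old.copy()                              # gradient trackers

for k in range(2000):
    # DIGing-style step: mix with neighbors, then descend along the tracked direction.
    x_new = W @ x - alpha * y
    if k % n == 0:                            # refresh snapshots periodically
        snap = x_new.copy()
        mu = np.array([full_grad(i, snap[i]) for i in range(m)])
    # SVRG variance-reduced local stochastic gradients.
    g_new = np.empty_like(g_old)
    for i in range(m):
        j = rng.integers(n)
        g_new[i] = sample_grad(i, j, x_new[i]) - sample_grad(i, j, snap[i]) + mu[i]
    # Gradient tracking: mix the trackers and add the gradient increment.
    y = W @ y + g_new - g_old
    x, g_old = x_new, g_new

# Compare against the centralized least-squares solution.
x_star = np.linalg.lstsq(A.reshape(m * n, d), b.reshape(-1), rcond=None)[0]
print("consensus error:", np.linalg.norm(x - x.mean(0)))
print("distance to optimum:", np.linalg.norm(x.mean(0) - x_star))
```

Each node here touches only one sample per iteration (plus a periodic full local gradient), which is the source of the stochastic-gradient savings discussed in the abstract, while the mixing steps with `W` account for the communication rounds.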
