Distributed training of massive machine learning models, in particular deep
neural networks, via Stochastic Gradient Descent (SGD) is becoming commonplace.
Several families of communication-reduction methods, such as quantization,
large-batch methods, and gradient sparsification, have been proposed. To date,
gradient sparsification methods, in which each node sorts its gradients by
magnitude and communicates only a subset of the components while accumulating
the rest locally, are known to yield some of the largest practical gains. Such methods can
reduce the amount of communication per step by up to three orders of magnitude,
while preserving model accuracy. Yet, this family of methods currently has no
theoretical justification.
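
To make the mechanism concrete, the following is a minimal NumPy sketch of per-node top-K selection with local error accumulation, in the spirit of the methods described above; the function name, its arguments, and the residual buffer are illustrative assumptions, not the paper's reference implementation.

    import numpy as np

    def topk_with_error_feedback(gradient, residual, k):
        # Fold the locally accumulated error back into this step's gradient.
        corrected = gradient + residual
        # Keep only the k components of largest magnitude for communication.
        idx = np.argpartition(np.abs(corrected), -k)[-k:]
        sparse = np.zeros_like(corrected)
        sparse[idx] = corrected[idx]
        # Components that were not sent remain in the local residual.
        return sparse, corrected - sparse

    # Example: communicate only 1% of a 10,000-dimensional gradient.
    g = np.random.randn(10_000)
    r = np.zeros_like(g)
    sent, r = topk_with_error_feedback(g, r, k=100)

In data-parallel SGD, each node would transmit only the nonzero entries of the sparse vector (e.g., as index-value pairs), which is the source of the communication savings discussed above.
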
This is the gap we address in this paper. We prove that, under analytic
assumptions, sparsifying gradients by magnitude with local error correction
provides convergence guarantees, for both convex and non-convex smooth
objectives, for data-parallel SGD. The main insight is that sparsification
methods implicitly maintain bounds on the maximum impact of stale updates,
thanks to selection by magnitude. Our analysis and empirical validation also
reveal that these methods do require analytical conditions to converge well,
justifying existing heuristics.

Comment: NIPS 2018 (Advances in Neural Information Processing Systems); authors in alphabetical order.