Sparse-SignSGD with Majority Vote for Communication-Efficient Distributed Learning
The training efficiency of complex deep learning models can be significantly
improved through the use of distributed optimization. However, this process is
often hindered by the high cost of communication between workers and a
parameter server at each iteration. To address this bottleneck, in this paper,
we present a new communication-efficient algorithm that offers the synergistic
benefits of both sparsification and sign quantization, called GD-MV.
The workers in GD-MV select the top-K magnitude components of
their local gradient vector and only send the signs of these components to the
server. The server then aggregates the signs and returns the results via a
majority vote rule. Our analysis shows that, under certain mild conditions,
GD-MV can converge at the same rate as signSGD while significantly
reducing communication costs, if the sparsification parameter is properly
chosen based on the number of workers and the size of the deep learning model.
Experimental results using both independent and identically distributed (IID)
and non-IID datasets demonstrate that GD-MV attains higher
accuracy than signSGD while significantly reducing communication costs. These
findings highlight the potential of GD-MV as a promising solution
for communication-efficient distributed optimization in deep learning.
Comment: 13 pages, 7 figures
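The per-round exchange described above can be sketched as follows. This is a minimal illustration under the abstract's description, with hypothetical function names; it is not the authors' implementation, and it omits the learning-rate step and the encoding of transmitted indices:

```python
import numpy as np

def worker_message(grad, k):
    """Each worker keeps only the top-k magnitude entries of its local
    gradient and transmits just their indices and signs (1 bit each)."""
    idx = np.argsort(np.abs(grad))[-k:]          # top-k by magnitude
    signs = np.sign(grad[idx]).astype(np.int8)
    return idx, signs

def majority_vote(messages, dim):
    """The server tallies the sign votes per coordinate and returns the
    sign of each tally (0 where no worker voted)."""
    tally = np.zeros(dim)
    for idx, signs in messages:
        tally[idx] += signs
    return np.sign(tally)

rng = np.random.default_rng(0)
dim, workers, k = 10, 5, 3
grads = [rng.normal(size=dim) for _ in range(workers)]
msgs = [worker_message(g, k) for g in grads]
update = majority_vote(msgs, dim)   # entries in {-1, 0, +1}
```

Each worker thus sends O(k) bits rather than a dense float vector, and the server's reply is a single sign vector.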
Natural Compression for Distributed Deep Learning
Modern deep learning models are often trained in parallel over a collection
of distributed machines to reduce training time. In such settings,
communication of model updates among machines becomes a significant performance
bottleneck and various lossy update compression techniques have been proposed
to alleviate this problem. In this work, we introduce a new, simple yet
theoretically and practically effective compression technique: natural
compression (NC). Our technique is applied individually to all entries of the
to-be-compressed update vector and works by randomized rounding to the nearest
(negative or positive) power of two, which can be computed in a "natural" way
by ignoring the mantissa. We show that compared to no compression, NC increases
the second moment of the compressed vector by not more than the tiny factor
9/8, which means that the effect of NC on the convergence speed
of popular training algorithms, such as distributed SGD, is negligible.
However, the communication savings enabled by NC are substantial, leading to
an improvement in overall theoretical running time. For
applications requiring more aggressive compression, we generalize NC to
natural dithering, which we prove is exponentially better than the
common random dithering technique. Our compression operators can be used on
their own or in combination with existing operators for a more aggressive
combined effect, and offer a new state-of-the-art in both theory and practice.
Comment: 8 pages, 20 pages of appendix, 6 tables, 14 figures
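The rounding rule can be sketched as follows: each nonzero entry lands on one of the two neighbouring signed powers of two, with probabilities chosen so that the result is unbiased in expectation. This is an illustrative sketch based on the abstract's description, not the paper's reference code:

```python
import numpy as np

def natural_compression(x, rng):
    """Randomized rounding of each entry to a neighbouring power of two,
    preserving sign. Unbiased: E[natural_compression(x)] == x."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)                       # zeros stay zero
    nz = x != 0
    a = np.floor(np.log2(np.abs(x[nz])))         # lower power-of-two exponent
    low = 2.0 ** a
    p = (np.abs(x[nz]) - low) / low              # prob. of rounding up to 2*low
    up = rng.random(p.shape) < p
    out[nz] = np.sign(x[nz]) * np.where(up, 2 * low, low)
    return out

rng = np.random.default_rng(1)
x = np.array([2.5, -0.3, 0.0, 7.0])
# Averaging many independent roundings approaches x (unbiasedness).
samples = np.mean([natural_compression(x, rng) for _ in range(20000)], axis=0)
```

Because the output magnitude is always a power of two, only the sign and the exponent need to be transmitted, which is what "ignoring the mantissa" buys.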
The Convergence of Sparsified Gradient Methods
Distributed training of massive machine learning models, in particular deep
neural networks, via Stochastic Gradient Descent (SGD) is becoming commonplace.
Several families of communication-reduction methods, such as quantization,
large-batch methods, and gradient sparsification, have been proposed. To date,
gradient sparsification methods - where each node sorts gradients by magnitude,
and only communicates a subset of the components, accumulating the rest locally
- are known to yield some of the largest practical gains. Such methods can
reduce the amount of communication per step by up to three orders of magnitude,
while preserving model accuracy. Yet, this family of methods currently has no
theoretical justification.
This is the gap we address in this paper. We prove that, under analytic
assumptions, sparsifying gradients by magnitude with local error correction
provides convergence guarantees, for both convex and non-convex smooth
objectives, for data-parallel SGD. The main insight is that sparsification
methods implicitly maintain bounds on the maximum impact of stale updates,
thanks to selection by magnitude. Our analysis and empirical validation also
reveal that these methods do require analytical conditions to converge well,
justifying existing heuristics.
Comment: NIPS 2018 - Advances in Neural Information Processing Systems;
authors in alphabetical order
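The scheme analysed here, magnitude selection with local accumulation of the untransmitted remainder, can be sketched as follows. This is a hypothetical sketch of top-k sparsification with error feedback for one worker, not the paper's code:

```python
import numpy as np

def topk_with_memory(grad, memory, k, lr):
    """Top-k sparsification with local error correction: coordinates that
    are not transmitted stay in `memory` and are added back next step, so
    no gradient mass is ever discarded, only delayed."""
    acc = memory + lr * grad
    idx = np.argsort(np.abs(acc))[-k:]           # largest-magnitude entries
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]                       # transmitted update
    memory = acc - sparse                        # residual stays local
    return sparse, memory

rng = np.random.default_rng(2)
dim, k, lr = 8, 2, 0.1
memory = np.zeros(dim)
for _ in range(5):
    g = rng.normal(size=dim)
    sparse, memory = topk_with_memory(g, memory, k, lr)
```

The invariant sparse + memory == memory_prev + lr * grad is what the analysis exploits: selection by magnitude bounds how large the delayed residual, and hence the staleness of any update, can grow.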