The Convergence of Sparsified Gradient Methods
Distributed training of massive machine learning models, in particular deep
neural networks, via Stochastic Gradient Descent (SGD) is becoming commonplace.
Several families of communication-reduction methods, such as quantization,
large-batch methods, and gradient sparsification, have been proposed. To date,
gradient sparsification methods - where each node sorts gradients by magnitude,
and only communicates a subset of the components, accumulating the rest locally
- are known to yield some of the largest practical gains. Such methods can
reduce the amount of communication per step by up to three orders of magnitude,
while preserving model accuracy. Yet, this family of methods currently lacks
theoretical justification.
This is the gap we address in this paper. We prove that, under analytic
assumptions, sparsifying gradients by magnitude with local error correction
provides convergence guarantees, for both convex and non-convex smooth
objectives, for data-parallel SGD. The main insight is that sparsification
methods implicitly maintain bounds on the maximum impact of stale updates,
thanks to selection by magnitude. Our analysis and empirical validation also
reveal that these methods do require analytical conditions to converge well,
justifying existing heuristics.
Comment: NIPS 2018 - Advances in Neural Information Processing Systems; Authors in alphabetic order.
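To make the communication-reduction scheme concrete, here is a minimal NumPy sketch of magnitude-based (top-k) sparsification with local error correction, as the abstract describes it; the function name `topk_sparsify` and the parameter `k` are illustrative, not taken from the paper.

```python
import numpy as np

def topk_sparsify(grad, residual, k):
    """One sparsification step: fold in the locally accumulated error,
    keep the k largest-magnitude components for communication, and
    carry the remaining components over to the next step."""
    acc = grad + residual                         # error-corrected gradient
    idx = np.argpartition(np.abs(acc), -k)[-k:]   # indices of top-k magnitudes
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]                        # what this node communicates
    new_residual = acc - sparse                   # what it accumulates locally
    return sparse, new_residual

# Toy usage: each node would apply this before exchanging `sparse`.
rng = np.random.default_rng(0)
residual = np.zeros(10)
sparse, residual = topk_sparsify(rng.normal(size=10), residual, k=2)
```

In a data-parallel setting each node would run this on its own gradient at every step; the residual is exactly the "rest accumulated locally" whose impact the convergence analysis bounds.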
Automatic differentiation in machine learning: a survey
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in
machine learning. Automatic differentiation (AD), also called algorithmic
differentiation or simply "autodiff", is a family of techniques similar to but
more general than backpropagation for efficiently and accurately evaluating
derivatives of numeric functions expressed as computer programs. AD is a small
but established field with applications in areas including computational fluid
dynamics, atmospheric sciences, and engineering design optimization. Until very
recently, the fields of machine learning and AD have largely been unaware of
each other and, in some cases, have independently discovered each other's
results. Despite its relevance, general-purpose AD has been missing from the
machine learning toolbox, a situation slowly changing with its ongoing adoption
under the names "dynamic computational graphs" and "differentiable
programming". We survey the intersection of AD and machine learning, cover
applications where AD has direct relevance, and address the main implementation
techniques. By precisely defining the main differentiation techniques and their
interrelationships, we aim to bring clarity to the usage of the terms
"autodiff", "automatic differentiation", and "symbolic differentiation" as
these are encountered more and more in machine learning settings.
Comment: 43 pages, 5 figures
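As a minimal illustration of the forward mode of AD that the survey covers, the sketch below propagates exact derivatives through ordinary Python arithmetic using dual numbers; the `Dual` class is a textbook construction assumed here for illustration, not code from the survey.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Dual number a + b*eps with eps**2 == 0; the eps coefficient
    carries the derivative through each arithmetic operation."""
    val: float   # function value
    dot: float   # derivative with respect to the chosen input

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def f(x):
    return x * x * x + 2.0 * x   # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

out = f(Dual(2.0, 1.0))          # seed dx/dx = 1 at x = 2
print(out.val, out.dot)          # 12.0 14.0 (exact, not a finite difference)
```

Reverse mode, of which backpropagation is the special case the survey relates it to, instead records the computation and sweeps backwards; the survey treats both modes and their cost tradeoffs.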
Communication Lower Bounds for Statistical Estimation Problems via a Distributed Data Processing Inequality
We study the tradeoff between the statistical error and communication cost of
distributed statistical estimation problems in high dimensions. In the
distributed sparse Gaussian mean estimation problem, each of the $m$ machines
receives $n$ data points from a $d$-dimensional Gaussian distribution with
unknown mean $\theta$ which is promised to be $k$-sparse. The machines
communicate by message passing and aim to estimate the mean $\theta$. We
provide a tight (up to logarithmic factors) tradeoff between the estimation
error and the number of bits communicated between the machines. This directly
leads to a lower bound for the distributed \textit{sparse linear regression}
problem: to achieve the statistical minimax error, the total communication is
at least $\Omega(\min\{n, d\} \cdot m)$, where $n$ is the number of
observations that each machine receives and $d$ is the ambient dimension.
These lower bounds improve upon [Sha14, SD'14] by allowing a multi-round
iterative communication model. We also give the first optimal simultaneous
protocol in the dense case for mean estimation.
As our main technique, we prove a \textit{distributed data processing
inequality}, a generalization of the usual data processing inequality, which
may be of independent interest and useful for other problems.
Comment: To appear at STOC 2016. Fixed typos in Theorem 4.5 and incorporated
reviewers' suggestions.
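For intuition about the problem setup only (not the paper's protocols or lower-bound machinery), here is a toy simulation of distributed sparse Gaussian mean estimation under the naive one-round protocol in which every machine sends its full local average; all names and default values are illustrative assumptions.

```python
import numpy as np

def simulate(m=20, n=50, d=100, k=5, rng=None):
    """Toy instance: m machines each draw n samples from N(theta, I_d)
    with a k-sparse mean theta; each sends its d-dimensional local
    average (one simultaneous round), and the coordinator averages the
    messages and hard-thresholds the result to k-sparse."""
    rng = rng or np.random.default_rng(0)
    theta = np.zeros(d)
    theta[:k] = 1.0                                   # k-sparse ground truth
    local_means = [rng.normal(theta, 1.0, size=(n, d)).mean(axis=0)
                   for _ in range(m)]                 # one message per machine
    est = np.mean(local_means, axis=0)                # coordinator's average
    keep = np.argpartition(np.abs(est), -k)[-k:]      # k largest magnitudes
    sparse_est = np.zeros(d)
    sparse_est[keep] = est[keep]                      # hard-threshold to k-sparse
    return np.sum((sparse_est - theta) ** 2)          # squared L2 error

print(simulate())
```

This naive protocol transmits on the order of $m \cdot d$ real values; how far that communication budget can be reduced without losing statistical accuracy is exactly the kind of tradeoff the paper's bounds quantify.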