Hogwild! over Distributed Local Data Sets with Linearly Increasing Mini-Batch Sizes
Hogwild! implements asynchronous Stochastic Gradient Descent (SGD) where
multiple threads in parallel access a common repository containing training
data, perform SGD iterations and update shared state that represents a jointly
learned (global) model. We consider big data analysis where training data is
distributed among local data sets in a heterogeneous way -- and we wish to move
SGD computations to local compute nodes where local data resides. The results
of these local SGD computations are aggregated by a central "aggregator" which
mimics Hogwild!. We show how local compute nodes can start choosing small
mini-batch sizes which increase to larger ones in order to reduce communication
cost (rounds of interaction with the aggregator). We improve on the
state-of-the-art literature and show $O(\sqrt{K})$ communication rounds for
heterogeneous data for strongly convex problems, where $K$ is the total number of gradient
computations across all local compute nodes. For our scheme, we prove a
\textit{tight} and novel non-trivial convergence analysis for strongly convex
problems for {\em heterogeneous} data which does not use the bounded gradient
assumption as seen in many existing publications. The tightness is a
consequence of our proofs for lower and upper bounds of the convergence rate,
which show a constant factor difference. We show experimental results for plain
convex and non-convex problems for biased (i.e., heterogeneous) and unbiased
local data sets.
Comment: arXiv admin note: substantial text overlap with arXiv:2007.09208.
AISTATS 202
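As a rough illustration of the scheme sketched in this abstract (not the authors' implementation), the following Python snippet runs local SGD at each compute node with a mini-batch size that grows linearly with the round index, and a central aggregator combines the local models; plain model averaging here stands in for the paper's Hogwild!-style asynchronous aggregation, and the function names, least-squares objective, and hyperparameters are illustrative assumptions.

import numpy as np

def local_sgd_round(w, X, y, batch_size, lr, rng):
    """One local round: SGD steps on mini-batches drawn from this node's data."""
    n = len(y)
    for _ in range(max(1, n // batch_size)):
        idx = rng.choice(n, size=batch_size, replace=False)
        # Gradient of the local least-squares loss on the sampled mini-batch.
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
        w = w - lr * grad
    return w

def train(local_datasets, dim, rounds=20, base_batch=4, lr=0.05, seed=0):
    """local_datasets: list of (X, y) pairs, one per (possibly heterogeneous) node."""
    rng = np.random.default_rng(seed)
    w_global = np.zeros(dim)
    for r in range(rounds):
        batch_size = base_batch * (r + 1)   # linearly increasing mini-batch size
        local_models = []
        for X, y in local_datasets:
            bs = min(batch_size, len(y))    # never sample more than the node holds
            local_models.append(local_sgd_round(w_global.copy(), X, y, bs, lr, rng))
        # Aggregator step: simple averaging as a stand-in for Hogwild!-style updates.
        w_global = np.mean(local_models, axis=0)
    return w_global

Because the mini-batch size grows with the round index, later rounds perform more local gradient work per interaction with the aggregator, which is the mechanism by which the number of communication rounds stays small relative to the total number of gradient computations $K$.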
SparCML: High-Performance Sparse Communication for Machine Learning
Applying machine learning techniques to the quickly growing data in science
and industry requires highly-scalable algorithms. Large datasets are most
commonly processed in a "data parallel" fashion, distributed across many nodes. Each node's
contribution to the overall gradient is summed using a global allreduce. This
allreduce is the single communication step, and thus the scalability bottleneck, for most
machine learning workloads. We observe that frequently, many gradient values
are (close to) zero, leading to sparse or sparsifiable communications. To
exploit this insight, we analyze, design, and implement a set of
communication-efficient protocols for sparse input data, in conjunction with
efficient machine learning algorithms which can leverage these primitives. Our
communication protocols generalize standard collective operations, by allowing
processes to contribute arbitrary sparse input data vectors. Our generic
communication library, SparCML, extends MPI to support additional features,
such as non-blocking (asynchronous) operations and low-precision data
representations. As such, SparCML and its techniques will form the basis of
future highly-scalable machine learning frameworks.
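To make the idea of sparse collectives concrete, here is a minimal Python sketch (not the SparCML library or its API): each worker's gradient contribution is represented as an index-to-value map, and a toy allreduce sums the contributions, which is the result every process would receive; in a real system the (index, value) pairs would be exchanged through MPI-style collective operations rather than summed in one place.

from collections import defaultdict

def sparse_allreduce(contributions):
    """Sum sparse vectors given as {index: value} maps; in a real allreduce,
    every participating process receives this same summed result."""
    total = defaultdict(float)
    for sparse_vec in contributions:
        for idx, val in sparse_vec.items():
            total[idx] += val
    return dict(total)

# Toy example: three workers, each holding a mostly-zero gradient.
workers = [
    {3: 0.5, 17: -0.25},
    {3: 0.25, 42: 0.75},
    {17: 0.5},
]
print(sparse_allreduce(workers))   # {3: 0.75, 17: 0.25, 42: 0.75}

Exchanging only the populated (index, value) pairs rather than full dense vectors lets communication volume scale with the number of non-zeros instead of the model dimension; the abstract's point is that SparCML pushes this capability into MPI-style collectives themselves rather than leaving it to application code.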