Adaptive Distributed Stochastic Gradient Descent for Minimizing Delay in the Presence of Stragglers
We consider the setting where a master wants to run a distributed stochastic
gradient descent (SGD) algorithm on $n$ workers, each having a subset of the
data. Distributed SGD may suffer from the effect of stragglers, i.e., slow or
unresponsive workers who cause delays. One solution studied in the literature
is to wait at each iteration for the responses of the fastest $k$ workers
before updating the model, where $k$ is a fixed parameter. The choice of the
value of $k$ presents a trade-off between the runtime (i.e., convergence rate)
of SGD and the error of the model. Towards optimizing the error-runtime
trade-off, we investigate distributed SGD with adaptive $k$. We first design an
adaptive policy for varying $k$ that optimizes this trade-off based on an upper
bound on the error as a function of the wall-clock time which we derive. Then,
we propose an algorithm for adaptive distributed SGD that is based on a
statistical heuristic. We implement our algorithm and provide numerical
simulations which confirm our intuition and theoretical analysis.

Comment: Accepted to IEEE ICASSP 202
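The fastest-$k$ scheme the abstract describes can be illustrated with a small simulation. The sketch below is not the paper's algorithm or its derived adaptive policy: the workers' response times, the 1-D quadratic loss, and the linear schedule that grows $k$ over the run are all illustrative assumptions, chosen only to show how waiting for the $k$ fastest of $n$ workers trades wall-clock time against gradient noise.

```python
import random

def k_sync_sgd_sketch(n_workers=10, iterations=100, seed=0):
    """Toy simulation of fastest-k distributed SGD on f(w) = w^2.

    Each iteration waits for the k fastest of n workers (simulated by
    random exponential response times) and averages their noisy
    gradients. The schedule growing k over time is an illustrative
    assumption, not the adaptive policy derived in the paper.
    """
    rng = random.Random(seed)
    w, lr = 5.0, 0.1       # initial model parameter and step size
    wall_clock = 0.0
    for t in range(iterations):
        # Illustrative schedule: small k early (fast, noisy steps),
        # larger k later (slower, more accurate steps).
        k = min(n_workers, 1 + t * n_workers // iterations)
        # The iteration finishes when the k-th fastest worker responds.
        times = sorted(rng.expovariate(1.0) for _ in range(n_workers))
        wall_clock += times[k - 1]
        # Noisy stochastic gradients of f(w) = w^2 from the k fastest.
        grads = [2 * w + rng.gauss(0, 1) for _ in range(k)]
        w -= lr * sum(grads) / k
    return w, wall_clock
```

A larger $k$ lowers the variance of the averaged gradient but charges the $k$-th order statistic of the response times per iteration, which is the error-runtime trade-off the abstract refers to.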