Genuinely Distributed Byzantine Machine Learning
Machine Learning (ML) solutions are nowadays distributed according to the
so-called server/worker architecture. One server holds the model parameters
while several workers train the model. Clearly, such an architecture is prone
to various types of component failures, all of which can be encompassed within
the spectrum of Byzantine behavior. Several approaches have been proposed
recently to tolerate Byzantine workers. Yet all require trusting a central
parameter server. We initiate in this paper the study of the "general"
Byzantine-resilient distributed machine learning problem where no individual
component is trusted.
We show that this problem can be solved in an asynchronous system, despite
the presence of 1/3 Byzantine parameter servers and 1/3 Byzantine workers
(which is optimal). We present a new algorithm, ByzSGD, which
solves the general Byzantine-resilient distributed machine learning problem by
relying on three major schemes. The first, Scatter/Gather, is a communication
scheme whose goal is to bound the maximum drift among models on correct
servers. The second, Distributed Median Contraction (DMC), leverages the
geometric properties of the median in high dimensional spaces to bring
parameters within the correct servers back close to each other, ensuring
learning convergence. The third, Minimum-Diameter Averaging (MDA), is a
statistically-robust gradient aggregation rule whose goal is to tolerate
Byzantine workers. MDA requires only a loose bound on the variance of non-Byzantine
gradient estimates, compared to existing alternatives (e.g., Krum).
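To make these two primitives concrete, here is a minimal NumPy sketch of MDA
and of the median step at the heart of DMC, assuming gradients and models
arrive as NumPy vectors. The function names and the brute-force subset search
are our own illustration of the description above, not the authors'
implementation (which, for DMC, also involves a communication schedule between
servers).

```python
import itertools
import numpy as np

def mda(gradients, f):
    """Minimum-Diameter Averaging (sketch): average the subset of n - f
    gradients whose diameter (largest pairwise distance) is smallest.
    Brute-force search, so only practical for small worker counts."""
    n = len(gradients)
    # Pairwise distances between gradient vectors, computed once.
    dist = np.array([[np.linalg.norm(g - h) for h in gradients]
                     for g in gradients])
    best, best_diam = None, np.inf
    for subset in itertools.combinations(range(n), n - f):
        diam = max(dist[i, j] for i, j in itertools.combinations(subset, 2))
        if diam < best_diam:
            best, best_diam = subset, diam
    return np.mean([gradients[i] for i in best], axis=0)

def dmc_median(server_models):
    """Core primitive of Distributed Median Contraction: the coordinate-wise
    median of the servers' parameter vectors, which in high dimension pulls
    correct models back toward each other."""
    return np.median(np.stack(server_models), axis=0)
```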
Interestingly, ByzSGD ensures Byzantine resilience without adding communication
rounds (on a normal path), compared to vanilla non-Byzantine alternatives.
ByzSGD requires, however, a larger number of messages, which, we show, can be
reduced if we assume synchrony.
Comment: This is a merge of arXiv:1905.03853 and arXiv:1911.07537;
arXiv:1911.07537 will be retracted.
Byzantine Stochastic Gradient Descent
This paper studies the problem of distributed stochastic optimization in an
adversarial setting where, out of the $m$ machines which allegedly compute
stochastic gradients every iteration, an $\alpha$-fraction are Byzantine, and
can behave arbitrarily and adversarially. Our main result is a variant of
stochastic gradient descent (SGD) which finds $\varepsilon$-approximate
minimizers of convex functions in
$T = \tilde{O}\big(\tfrac{1}{\varepsilon^2 m} + \tfrac{\alpha^2}{\varepsilon^2}\big)$
iterations. In contrast, traditional mini-batch SGD needs
$T = O\big(\tfrac{1}{\varepsilon^2 m}\big)$ iterations,
but cannot tolerate Byzantine failures. Further, we provide a lower bound
showing that, up to logarithmic factors, our algorithm is
information-theoretically optimal both in terms of sampling complexity and time
complexity.
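The paper's algorithm interleaves SGD with a concentration-based test that
discards machines whose reported gradients stray from the majority. As a rough
sketch of the setting only, the toy loop below substitutes a coordinate-wise
median for that filtering rule; the median, the function names, and the
example oracles are our own stand-ins, not the authors' method.

```python
import numpy as np

def robust_sgd(oracles, x0, lr=0.1, steps=200):
    """Toy Byzantine-tolerant SGD loop. Each oracle plays one of the m
    machines and returns a stochastic gradient at x; Byzantine oracles may
    return anything. The coordinate-wise median of the m reports is a
    stand-in for the paper's concentration-based filtering."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        reports = np.stack([oracle(x) for oracle in oracles])  # shape (m, d)
        x = x - lr * np.median(reports, axis=0)
    return x

# Example: minimize ||x||^2 / 2 with four honest machines and one Byzantine.
rng = np.random.default_rng(0)
honest = [lambda x: x + rng.normal(scale=0.1, size=x.shape) for _ in range(4)]
byzantine = [lambda x: np.full_like(x, 1e6)]  # arbitrary adversarial report
print(robust_sgd(honest + byzantine, x0=np.ones(3)))  # lands near the optimum 0
```

With fewer than half the reports adversarial, the per-coordinate median stays
within the range of honest gradient values, which is why this toy loop still
descends despite the outlier.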