LEASGD: an Efficient and Privacy-Preserving Decentralized Algorithm for Distributed Learning
Distributed learning systems have enabled training large-scale models over
large amounts of data in significantly shorter time. In this paper, we focus
on decentralized distributed deep learning systems and aim to achieve
differential privacy with a good convergence rate and low communication cost.
To achieve this goal, we propose a new learning algorithm, LEASGD
(Leader-Follower Elastic Averaging Stochastic Gradient Descent), which is
driven by a novel Leader-Follower topology and a differential privacy model.
We provide a theoretical analysis of the convergence rate and of the
trade-off between performance and privacy in the private setting. The
experimental results show that LEASGD outperforms the state-of-the-art
decentralized learning algorithm DPSGD by achieving steadily lower loss
within the same number of iterations and by reducing the communication cost
by 30%. In addition, LEASGD spends less differential-privacy budget and
achieves higher final accuracy than DPSGD in the private setting.
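As a rough illustration of the leader-follower idea described above, the sketch below shows one elastic-averaging round in which the lowest-loss worker acts as leader and the followers are pulled toward it, with Gaussian noise standing in for a differential-privacy mechanism. The update rule, constants, and noise model here are our assumptions, not the paper's exact algorithm.

import numpy as np

def leasgd_step(params, grads, losses, lr=0.05, rho=0.1, sigma=0.01):
    """One decentralized round: every worker takes a local SGD step, then
    each follower is elastically pulled toward the current leader (the
    lowest-loss worker); Gaussian noise stands in for a DP mechanism."""
    leader = int(np.argmin(losses))          # best worker leads this round
    stepped = [w - lr * g for w, g in zip(params, grads)]
    new_params = []
    for i, w in enumerate(stepped):
        if i != leader:
            pull = stepped[leader] - w       # elastic attraction to leader
            w = w + rho * pull + sigma * np.random.randn(*w.shape)
        new_params.append(w)
    return new_params

# Toy usage: four workers with illustrative parameters and gradients.
params = [np.zeros(3) for _ in range(4)]
grads = [np.random.randn(3) for _ in range(4)]
losses = [1.2, 0.8, 1.5, 1.0]
params = leasgd_step(params, grads, losses)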
Federated Learning for Collaborative Prognosis
Modern industrial assets generate prodigious amounts of condition monitoring data. Various prognosis techniques can use these data to predict an asset's remaining useful life. But the data in most asset fleets are distributed across multiple assets, bound by the privacy policies of the operators, and often legally protected. These peculiar characteristics make data-driven prognosis an interesting problem. In this paper, we propose Federated Learning as a solution to the above challenges. Federated Learning enables the manufacturer to utilise condition monitoring data without moving it away from the corresponding assets. Concretely, we demonstrate the Federated Averaging algorithm by training feed-forward and recurrent neural networks to predict failures in a simulated turbofan fleet. We also analyse how prediction quality depends on the various learning parameters.
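For readers unfamiliar with the aggregation step, the following is a minimal, self-contained sketch of one Federated Averaging round on a toy linear-regression fleet; the model, data, and hyperparameters are illustrative and unrelated to the paper's turbofan experiments.

import numpy as np

def local_sgd(w, X, y, lr=0.01, epochs=5):
    """Local training on one asset's private data: plain SGD on squared error."""
    w = w.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            grad = 2 * (xi @ w - yi) * xi    # gradient of (x.w - y)^2
            w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """Average locally trained weights, weighted by local sample counts."""
    total = sum(len(y) for _, y in clients)
    w_new = np.zeros_like(w_global)
    for X, y in clients:
        w_k = local_sgd(w_global, X, y)
        w_new += (len(y) / total) * w_k
    return w_new

# Toy fleet: three "assets", each with private data that never leaves it.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(10):
    w = fedavg_round(w, clients)
print(w)  # approaches w_true without pooling raw data centrally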
Attacks on Robust Distributed Learning Schemes via Sensitivity Curve Maximization
Distributed learning paradigms, such as federated or decentralized learning,
allow a collection of agents to solve global learning and optimization problems
through limited local interactions. Most such strategies rely on a mixture of
local adaptation and aggregation steps, either among peers or at a central
fusion center. Classically, aggregation in distributed learning is based on
averaging, which is statistically efficient, but susceptible to attacks by even
a small number of malicious agents. This observation has motivated a number of
recent works, which develop robust aggregation schemes by employing robust
variations of the mean. We present a new attack based on sensitivity curve
maximization (SCM) and demonstrate that it is able to disrupt existing robust
aggregation schemes by injecting small but effective perturbations.
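To make the sensitivity-curve idea concrete, the sketch below probes a robust aggregator (here, the median of scalar updates) with candidate injected values and keeps the one that shifts the aggregate the most. The paper's actual SCM attack is more elaborate; the function names and toy data here are ours.

import numpy as np

def sensitivity_curve(aggregate, honest, x):
    """SC(x) = n * (T(honest + [x]) - T(honest)) for an aggregator T."""
    n = len(honest) + 1
    return n * (aggregate(np.append(honest, x)) - aggregate(np.array(honest)))

def scm_attack(aggregate, honest, candidates):
    """Return the candidate injection with the largest absolute sensitivity."""
    scores = [abs(sensitivity_curve(aggregate, honest, c)) for c in candidates]
    return candidates[int(np.argmax(scores))]

honest_updates = [0.9, 0.95, 1.0, 1.0, 1.1]   # benign scalar updates (toy)
candidates = np.linspace(-2.0, 4.0, 121)       # bounded attack values
worst = scm_attack(np.median, honest_updates, candidates)
print(worst)  # the bounded injection that moves the median the most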