DISTRIBUTED LEARNING ALGORITHMS: COMMUNICATION EFFICIENCY AND ERROR RESILIENCE
In modern-day machine learning applications such as self-driving cars, recommender systems, robotics, and genetics, the size of the training data has grown to the point that it has become essential to design distributed learning algorithms. A general framework for distributed learning is \emph{data parallelism}, where the data is distributed among \emph{worker machines} for parallel processing and computation to speed up learning. With billions of devices such as cellphones and computers, the data is inherently distributed and stored locally on users' devices. Learning in this setup is popularly known as \emph{Federated Learning}. The speed-up offered by the distributed framework is hindered by some fundamental problems: straggler workers, the communication bottleneck caused by high communication overhead between the workers and the central server, and adversarial failures, popularly known as \emph{Byzantine failures}. In this thesis, we study and develop distributed algorithms that are error resilient and communication efficient.
First, we address the problem of straggler workers, where learning is delayed by slow workers in the distributed setup. To mitigate the effect of stragglers, we employ \textbf{LDPC} (low-density parity-check) codes to encode the data and implement the gradient descent algorithm in the distributed setup. Second, we present a family of vector quantization schemes, \emph{vqSGD} (vector quantized Stochastic Gradient Descent), that provides an asymptotic reduction in the communication cost with convergence guarantees for first-order distributed optimization. We also show that \emph{vqSGD} provides strong privacy guarantees. Third, we address the problem of Byzantine failure together with communication efficiency in first-order gradient descent. We consider a generic class of $\delta$-approximate compressors for communication efficiency and employ a simple \emph{norm-based thresholding} scheme to make the learning algorithm robust to Byzantine failures. We establish the statistical error rate for non-convex smooth losses. Moreover, we analyze the compressed gradient descent algorithm with error feedback in a distributed setting and in the presence of Byzantine worker machines. Fourth, we employ the same generic class of $\delta$-approximate compressors to develop a communication-efficient second-order Newton-type algorithm and provide its rate of convergence for smooth objectives. Fifth, we propose \textbf{COMRADE} (COMmunication-efficient and Robust Approximate Distributed nEwton), an iterative second-order algorithm that is communication efficient as well as robust against Byzantine failures. Sixth, we propose a distributed \emph{cubic-regularized Newton} algorithm that can effectively escape saddle points for non-convex loss functions and find a local minimum. Furthermore, the proposed algorithm can resist attacks by Byzantine machines that create \emph{fake local minima} near the saddle points of the loss function, also known as a saddle-point attack.
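To make the third contribution concrete, here is a minimal sketch (our own illustrative code, not the thesis's exact algorithm): each worker applies a compressor, with a top-$k$ sparsifier standing in for a generic $\delta$-approximate compressor, and the server discards the largest-norm updates via norm-based thresholding before averaging.

```python
import numpy as np

def top_k(g, k):
    """Top-k sparsifier: keep the k largest-magnitude coordinates of g.
    A standard example of a delta-approximate compressor (illustrative
    choice; the thesis considers the generic class)."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

def robust_aggregate(grads, trim_fraction=0.2):
    """Norm-based thresholding: drop the gradients with the largest
    norms (assumed to include the Byzantine ones) and average the rest."""
    norms = np.array([np.linalg.norm(g) for g in grads])
    keep = np.argsort(norms)[: int(np.ceil((1 - trim_fraction) * len(grads)))]
    return np.mean([grads[i] for i in keep], axis=0)

# Toy round: 10 workers, 2 of them Byzantine.
rng = np.random.default_rng(0)
d, m, k = 50, 10, 5
true_grad = rng.normal(size=d)
grads = [true_grad + 0.1 * rng.normal(size=d) for _ in range(m)]
grads[0] = 100.0 * rng.normal(size=d)    # Byzantine: arbitrary large update
grads[1] = -100.0 * true_grad            # Byzantine: sign-flipped, scaled update
compressed = [top_k(g, k) for g in grads]

naive = np.mean(compressed, axis=0)
robust = robust_aggregate(compressed, trim_fraction=0.2)
print("naive error :", np.linalg.norm(naive - true_grad))
print("robust error:", np.linalg.norm(robust - true_grad))
```

Here the trimming fraction plays the role of an assumed upper bound on the fraction of Byzantine workers: too small and outliers contaminate the average, too large and honest gradients are discarded.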
Distributed Linear Parameter Estimation: Asymptotically Efficient Adaptive Strategies
The paper considers the problem of distributed adaptive linear parameter
estimation in multi-agent inference networks. Local sensing model information
is only partially available at the agents and inter-agent communication is
assumed to be unpredictable. The paper develops a generic mixed time-scale
stochastic procedure consisting of simultaneous distributed learning and
estimation, in which the agents adaptively assess their relative observation
quality over time and fuse the innovations accordingly. Under rather weak
assumptions on the statistical model and the inter-agent communication, it is
shown that, by properly tuning the consensus potential with respect to the
innovation potential, the asymptotic information rate loss incurred in the
learning process may be made negligible. As such, it is shown that the agent
estimates are asymptotically efficient, in that their asymptotic covariance
coincides with that of a centralized estimator (the inverse of the centralized
Fisher information rate for Gaussian systems) with perfect global model
information and having access to all observations at all times. The proof
techniques are mainly based on convergence arguments for non-Markovian mixed
time scale stochastic approximation procedures. Several approximation results
developed in the process are of independent interest.
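For orientation, a schematic consensus-plus-innovations recursion of the mixed time-scale type described above can be written as follows (our own generic rendering with assumed notation, not the paper's exact update):
\[
x_n(k+1) \;=\; x_n(k) \;-\; \beta_k \sum_{l \in \Omega_n} \big(x_n(k) - x_l(k)\big) \;+\; \alpha_k K_n(k)\big(y_n(k) - H_n x_n(k)\big),
\]
where $\Omega_n$ denotes the neighbors of agent $n$, $H_n$ its local sensing matrix, and $K_n(k)$ an adaptively learned gain. The relative tuning of the weight sequences $\{\beta_k\}$ (consensus potential) and $\{\alpha_k\}$ (innovation potential) governs the asymptotic information-rate loss referred to in the abstract.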
Over-the-Air Computation for Distributed Systems: Something Old and Something New
Facing the upcoming era of Internet-of-Things and connected intelligence,
efficient information processing, computation and communication design becomes
a key challenge in large-scale intelligent systems. Recently, Over-the-Air
(OtA) computation has been proposed for data aggregation and distributed
function computation over a large set of network nodes. Theoretical foundations
for this concept have existed for a long time, but it was mainly investigated within
the context of wireless sensor networks. There are still many open questions
when applying OtA computation in different types of distributed systems where
modern wireless communication technology is applied. In this article, we
provide a comprehensive overview of the OtA computation principle and its
applications in distributed learning, control, and inference systems, for both
server-coordinated and fully decentralized architectures. Particularly, we
highlight the importance of the statistical heterogeneity of data and wireless
channels, the temporal evolution of model updates, and the choice of
performance metrics, for the communication design in OtA federated learning
(FL) systems. Several key challenges in privacy, security and robustness
aspects of OtA FL are also identified for further investigation.
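As a minimal sketch of the OtA principle (our own toy model, not a scheme from the article): each node pre-scales its value by the inverse of its channel gain, so that simultaneous analog transmissions superpose into a noisy sum at the receiver, which is exactly the aggregate needed for, e.g., FL gradient averaging.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, d = 20, 8
x = rng.normal(size=(n_nodes, d))          # local values (e.g., model updates)
h = rng.uniform(0.5, 2.0, size=n_nodes)    # per-node real channel gains, known to the nodes
noise_std = 0.01

# Each node inverts its own channel gain before transmitting (possible here
# because all gains in this toy model are bounded away from zero).
tx = x / h[:, None]

# The multiple-access channel adds the transmitted signals "over the air":
# y = sum_i h_i * tx_i + noise = sum_i x_i + noise.
y = (h[:, None] * tx).sum(axis=0) + noise_std * rng.normal(size=d)

ota_mean = y / n_nodes                      # receiver recovers the (noisy) average
print(np.linalg.norm(ota_mean - x.mean(axis=0)))
```

In practice, power constraints and deep fades make exact channel inversion impossible, which is where the design questions around channel heterogeneity discussed in the article arise.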