Distributed Learning over Unreliable Networks
Most of today's distributed machine learning systems assume {\em reliable
networks}: whenever two machines exchange information (e.g., gradients or
models), the network should guarantee the delivery of the message. At the same
time, recent work exhibits the impressive tolerance of machine learning
algorithms to errors or noise arising from relaxed communication or
synchronization. In this paper, we connect these two trends, and consider the
following question: {\em Can we design machine learning systems that are
tolerant to network unreliability during training?} With this motivation, we
focus on a theoretical problem of independent interest---given a standard
distributed parameter server architecture, if every communication between the
worker and the server has a non-zero probability of being dropped, does
there exist an algorithm that still converges, and at what speed? The technical
contribution of this paper is a novel theoretical analysis proving that
distributed learning over an unreliable network can achieve a convergence rate
comparable to centralized or distributed learning over reliable networks. Further, we
prove that the influence of the packet drop rate diminishes with the growth of
the number of parameter servers. We map this theoretical
result onto a real-world scenario, training deep neural networks over an
unreliable network layer, and conduct network simulations to validate the
system-level improvements gained by allowing the network to be unreliable.
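The setting described above can be illustrated with a toy simulation. This is a sketch only, not the paper's algorithm or analysis: the quadratic loss, drop rate, learning rate, and single-server setup are all assumptions made for illustration.

```python
import random

# Toy sketch only: distributed SGD on f(x) = 0.5 * (x - 3)^2 with a
# single parameter server, where each worker-to-server gradient message
# is dropped independently with an assumed probability DROP_RATE.
# A dropped gradient simply contributes nothing to that round.
random.seed(0)

DROP_RATE = 0.3   # assumed per-message drop probability
N_WORKERS = 8
LR = 0.1
TARGET = 3.0      # minimizer of the toy loss

x = 0.0           # model parameter held by the server
for _ in range(200):
    delivered = []
    for _ in range(N_WORKERS):
        grad = x - TARGET                 # gradient of 0.5 * (x - TARGET)^2
        if random.random() > DROP_RATE:   # message survives the network
            delivered.append(grad)
    if delivered:                         # average only delivered gradients
        x -= LR * sum(delivered) / len(delivered)

print(x)  # close to TARGET despite ~30% message loss
```

Even with roughly a third of the messages lost, the averaged surviving gradients still point toward the minimizer, which is the qualitative behavior the abstract's convergence result formalizes.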
Distributed Learning with Infinitely Many Hypotheses
We consider a distributed learning setup where a network of agents
sequentially access realizations of a set of random variables with unknown
distributions. The network objective is to find a parametrized distribution
that best describes their joint observations in the sense of the
Kullback-Leibler divergence. Going beyond recent efforts in the literature, we
analyze both the case of countably many hypotheses and the case of a continuum of
hypotheses. We provide non-asymptotic bounds for the concentration rate of the
agents' beliefs around the correct hypothesis in terms of the number of agents,
the network parameters, and the learning abilities of the agents. Additionally,
we provide a novel motivation for a general set of distributed Non-Bayesian
update rules as instances of the distributed stochastic mirror descent
algorithm.
Comment: Submitted to CDC201
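The distributed non-Bayesian update described above can be sketched for a finite hypothesis set. This is an illustration only: the paper treats countably many and continuum hypothesis sets, while the Bernoulli hypotheses, ring network, and mixing weights below are assumptions chosen for a minimal example.

```python
import math
import random

# Hedged sketch of a distributed non-Bayesian update on a finite
# hypothesis set. Each agent first geometrically averages its
# neighbors' beliefs (mixing in log space), then reweights by the
# likelihood of its private observation -- an instance of the
# distributed stochastic mirror descent viewpoint, illustrated crudely.
random.seed(1)

HYPOTHESES = [0.2, 0.5, 0.8]   # candidate Bernoulli parameters (assumed)
TRUE_P = 0.5                   # the correct hypothesis
N_AGENTS = 4                   # agents on an assumed ring network

def likelihood(p, obs):
    """Bernoulli(p) likelihood of a binary observation."""
    return p if obs == 1 else 1.0 - p

# uniform initial beliefs for every agent
beliefs = [[1.0 / len(HYPOTHESES)] * len(HYPOTHESES)
           for _ in range(N_AGENTS)]

for _ in range(200):
    obs = [1 if random.random() < TRUE_P else 0 for _ in range(N_AGENTS)]
    updated = []
    for i in range(N_AGENTS):
        # self-weight 0.5, each ring neighbor 0.25 (doubly stochastic)
        nbrs = [(i, 0.5),
                ((i - 1) % N_AGENTS, 0.25),
                ((i + 1) % N_AGENTS, 0.25)]
        row = []
        for k, p in enumerate(HYPOTHESES):
            log_mix = sum(w * math.log(beliefs[j][k]) for j, w in nbrs)
            row.append(math.exp(log_mix) * likelihood(p, obs[i]))
        z = sum(row)
        updated.append([v / z for v in row])
    beliefs = updated

# each agent's belief mass concentrates on the true hypothesis 0.5
print(beliefs[0])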
Distributed Learning in Wireless Sensor Networks
The problem of distributed or decentralized detection and estimation in
applications such as wireless sensor networks has often been considered in the
framework of parametric models, in which strong assumptions are made about a
statistical description of nature. In certain applications, such assumptions
are warranted and systems designed from these models show promise. However, in
other scenarios, prior knowledge is at best vague and translating such
knowledge into a statistical model is undesirable. Applications such as these
pave the way for a nonparametric study of distributed detection and estimation.
In this paper, we review recent work of the authors in which some elementary
models for distributed learning are considered. These models are in the spirit
of classical work in nonparametric statistics and are applicable to wireless
sensor networks.
Comment: Published in the Proceedings of the 42nd Annual Allerton Conference on Communication, Control and Computing, University of Illinois, 200
Distributed Learning System Design: A New Approach and an Agenda for Future Research
This article presents a theoretical framework designed to guide distributed learning design, with the goal of enhancing the effectiveness of distributed learning systems. The authors begin with a review of the extant research on distributed learning design; themes embedded in this literature are extracted and discussed to identify critical gaps that should be addressed by future work in this area. A conceptual framework that integrates instructional objectives, targeted competencies, instructional design considerations, and technological features is then developed to address the most pressing gaps in current research and practice. The rationale and logic underlying this framework are explicated. The framework is designed to help guide trainers and instructional designers through critical stages of the distributed learning system design process. In addition, it is intended to help researchers identify critical issues that should serve as the focus of future research efforts. Recommendations and future research directions are presented and discussed.