Gossip Learning with Linear Models on Fully Distributed Data
Machine learning over fully distributed data poses an important problem in
peer-to-peer (P2P) applications. In this model, each network node holds a single
data record, and raw data cannot be moved due to privacy considerations. User
profiles, ratings, histories, and sensor readings are examples of this setting.
The problem is difficult because local models cannot be learned at individual
nodes, the system model offers almost no reliability guarantees, and yet the
communication cost must be kept low.
Here we propose gossip learning, a generic approach in which multiple models
take random walks over the network in parallel, improve themselves by applying
an online learning algorithm, and are combined via ensemble learning methods.
We present an instantiation of this approach for classification with linear
models. Our main contribution is an ensemble
learning method which---through the continuous combination of the models in the
network---implements a virtual weighted voting mechanism over an exponential
number of models at practically no extra cost as compared to independent random
walks. We prove the convergence of the method theoretically, and perform
extensive experiments on benchmark datasets. Our experimental analysis
demonstrates the performance and robustness of the proposed approach.

Comment: The paper was published in the journal Concurrency and Computation:
Practice and Experience,
http://onlinelibrary.wiley.com/journal/10.1002/%28ISSN%291532-0634 (DOI:
http://dx.doi.org/10.1002/cpe.2858). The modifications are based on the
suggestions from the reviewer.
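The core loop described in the abstract (models taking random walks while being updated online, then combined by voting) can be sketched as follows. This is an illustrative sketch, not the paper's exact protocol: the perceptron-style update, the learning rate, and the representation of nodes as (features, label) pairs with labels in {-1, +1} are all assumptions for the example.

```python
import random

def gossip_round(nodes, models, learning_rate=0.1):
    """One gossip round (illustrative sketch): each model takes one
    random-walk step to a node and performs an online, perceptron-style
    update on that node's single (features, label) record."""
    for w in models:
        x, y = random.choice(nodes)        # random-walk step to some node
        score = sum(wi * xi for wi, xi in zip(w, x))
        if y * score <= 0:                 # misclassified: perceptron update
            for i in range(len(w)):
                w[i] += learning_rate * y * x[i]
    return models

def vote(models, x):
    """Combine the linear models by majority vote -- the explicit form of
    the ensemble step that the paper virtualizes through continuous
    model combination."""
    s = sum(1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            for w in models)
    return 1 if s > 0 else -1
```

In the actual protocol the models are also merged as they meet at nodes, which is what yields the virtual vote over an exponential number of models; the explicit `vote` above only shows the semantics of that combination.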
Predictive Handling of Asynchronous Concept Drifts in Distributed Environments
In a distributed computing environment, peers collaboratively learn to classify concepts of interest from each other. When external changes occur and their concepts drift, the peers should adapt to avoid an increase in misclassification errors. Adaptation becomes more difficult when the changes are asynchronous, i.e., when peers experience drifts at different times. We address this problem with an ensemble approach, PINE, that combines reactive adaptation via drift detection with proactive handling of upcoming changes via early warning and adaptation across the peers. Through an empirical study on simulated and real-world datasets, we show that PINE handles asynchronous concept drifts better and faster than current state-of-the-art approaches, which were designed for less challenging environments. In addition, PINE is insensitive to its parameters and incurs lower communication cost while achieving better accuracy.
Keywords: Classification, Distributed Systems, Concept Drift
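The reactive side of the approach -- drift detection with a warning level that can trigger early adaptation -- can be illustrated with a standard error-rate-based detector in the style of DDM. The abstract does not specify PINE's actual detector, so the thresholds and statistics below are illustrative, not PINE's implementation.

```python
import math

class DriftDetector:
    """DDM-style error-rate drift detector (illustrative, not PINE's own).
    Tracks the running error rate p and its standard deviation s; signals
    'warning' when p + s exceeds the best observed p_min + 2*s_min, and
    'drift' when it exceeds p_min + 3*s_min. The warning level is what a
    peer could broadcast so that others adapt proactively."""

    def __init__(self):
        self.n = 0
        self.p = 0.0
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        """Feed one prediction outcome (1 = misclassified, 0 = correct)."""
        self.n += 1
        self.p += (error - self.p) / self.n          # running error rate
        s = math.sqrt(self.p * (1 - self.p) / self.n)
        if self.p + s < self.p_min + self.s_min:     # track the best level seen
            self.p_min, self.s_min = self.p, s
        if self.p + s > self.p_min + 3 * self.s_min:
            return "drift"
        if self.p + s > self.p_min + 2 * self.s_min:
            return "warning"
        return "stable"
```

In a PINE-like setting, each peer would run such a detector on its local error stream; a "warning" from one peer can serve as the early-warning signal that prompts others to prepare for an upcoming asynchronous drift.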