Supervised Learning Under Distributed Features
This work studies the problem of learning from both large datasets and
high-dimensional feature spaces. The feature information is assumed to be
spread across the agents of a network, with each agent observing a subset of
the features. Through local cooperation, the agents interact with their
neighbors to solve the inference problem and converge towards the global
minimizer of an empirical risk. We study this problem exclusively in the
primal domain and propose new and effective distributed solutions with
guaranteed convergence to the minimizer at a linear rate under strong
convexity. This is achieved by combining a dynamic diffusion construction, a
pipeline strategy, and variance-reduced techniques. Simulation results
illustrate the conclusions.
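To make the dynamic diffusion construction concrete, the following is a minimal sketch of feature-distributed learning, assuming a least-squares risk, a ring network of four agents, and plain gradient steps; the pipeline strategy and variance-reduction components are omitted, and all names and parameters are illustrative rather than taken from the work itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem setup: N samples; the global feature space is split into K blocks,
# one block of d_k features per agent.
N, K, d_k = 200, 4, 5
X = [rng.standard_normal((N, d_k)) for _ in range(K)]   # agent k's feature block
w_true = [rng.standard_normal(d_k) for _ in range(K)]
y = sum(Xk @ wk for Xk, wk in zip(X, w_true)) + 0.01 * rng.standard_normal(N)

# Doubly stochastic combination matrix for a ring of K agents.
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 0.5
    A[k, (k + 1) % K] = 0.25
    A[k, (k - 1) % K] = 0.25

w = [np.zeros(d_k) for _ in range(K)]     # local weight blocks
z = [np.zeros(N) for _ in range(K)]       # trackers of the average partial prediction
prev = [np.zeros(N) for _ in range(K)]    # previous partial predictions
mu = 0.05                                 # step size

for it in range(500):
    # Dynamic average diffusion: each agent adds the change in its own partial
    # prediction to its tracker, then combines with its neighbors.
    cur = [X[k] @ w[k] for k in range(K)]
    psi = [z[k] + cur[k] - prev[k] for k in range(K)]
    z = [sum(A[l, k] * psi[l] for l in range(K)) for k in range(K)]
    prev = cur
    # K * z[k] approximates the full prediction sum_l X_l w_l, which each agent
    # needs in order to compute the gradient with respect to its own block.
    for k in range(K):
        w[k] -= (mu / N) * (X[k].T @ (K * z[k] - y))

pred = sum(X[k] @ w[k] for k in range(K))
print("relative residual:", np.linalg.norm(pred - y) / np.linalg.norm(y))
```

The key point of the construction is that no agent ever sees another agent's features or weights: only the scalar partial predictions are exchanged, and the doubly stochastic combination preserves their network sum exactly.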
Dynamic Average Diffusion With Randomized Coordinate Updates
This work derives and analyzes an online learning strategy for tracking the
average of time-varying distributed signals by relying on randomized
coordinate-descent updates. During each iteration, each agent selects or
observes a random entry of its observation vector, and different agents may
select different entries of their observations before engaging in a
consultation step. Careful coordination of the interactions among agents is
necessary to avoid bias and ensure convergence. We provide a convergence
analysis for the proposed methods and illustrate the results by means of
simulations.
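The following is a simplified sketch of such a tracking recursion, assuming a ring network and signals that vary slowly in time; each agent refreshes only the randomly sampled entry of its tracker before combining with neighbors. The coordination mechanism analyzed in the work for removing bias is not reproduced here, so this is only an illustrative approximation, and the signal model is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, T = 6, 4, 5000      # agents, entries per observation vector, iterations

# Doubly stochastic combination matrix for a ring of K agents.
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 0.5
    A[k, (k + 1) % K] = 0.25
    A[k, (k - 1) % K] = 0.25

def observation(k, i):
    # Hypothetical slowly varying signal observed by agent k at time i.
    return np.sin(2e-3 * i + k) + 0.5 * np.cos(1e-3 * i) * np.arange(1, M + 1)

z = np.zeros((K, M))      # each agent's tracker of the network-average signal
last = np.zeros((K, M))   # last value each agent actually observed, per entry

for i in range(T):
    psi = z.copy()
    for k in range(K):
        m = rng.integers(M)               # random entry observed this iteration
        x_km = observation(k, i)[m]
        psi[k, m] += x_km - last[k, m]    # dynamic-consensus increment, entry m only
        last[k, m] = x_km
    z = A.T @ psi                         # diffusion combination with neighbors

true_avg = np.mean([observation(k, T - 1) for k in range(K)], axis=0)
print("per-agent tracking error:", np.linalg.norm(z - true_avg, axis=1))
```

Because the combination matrix preserves column sums, the network sum of the trackers always equals the sum of the most recently observed entries; the residual tracking error in this sketch comes from the sampling lag, which is exactly the effect the careful coordination in the work is designed to control.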
Diffusion gradient boosting for networked learning
Using duality arguments from optimization theory, this work develops an effective distributed gradient boosting strategy for inference and classification by networked clusters of learners. By sharing local dual variables with their immediate neighbors through a diffusion learning protocol, the clusters are able to match the performance of centralized boosting solutions, even when each individual cluster has access to only partial information about the feature space.
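As a rough illustration, the sketch below uses squared-loss (L2) boosting, where the negative functional gradient coincides with the residual; the residual vector stands in for the dual variable that the clusters exchange. The ring topology, single-feature weak learner, and step size are assumptions for this sketch, which does not reproduce the paper's dual-domain derivation.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, d_k, T = 300, 4, 3, 300    # samples, clusters, features per cluster, rounds
eta = 0.1                        # boosting step size

# Each cluster observes only its own block of the feature space.
X = [rng.standard_normal((N, d_k)) for _ in range(K)]
w_true = [rng.standard_normal(d_k) for _ in range(K)]
y = sum(Xk @ wk for Xk, wk in zip(X, w_true)) + 0.05 * rng.standard_normal(N)

# Doubly stochastic combination matrix for a ring of K clusters.
A = np.zeros((K, K))
for k in range(K):
    A[k, k] = 0.5
    A[k, (k + 1) % K] = 0.25
    A[k, (k - 1) % K] = 0.25

def fit_weak_learner(Xk, r):
    # Weak learner: least-squares fit of the single best local feature to r.
    scores = [(np.dot(Xk[:, j], r) ** 2 / np.dot(Xk[:, j], Xk[:, j]), j)
              for j in range(Xk.shape[1])]
    _, j = max(scores)
    a = np.dot(Xk[:, j], r) / np.dot(Xk[:, j], Xk[:, j])
    return a * Xk[:, j]

F = [np.zeros(N) for _ in range(K)]   # each cluster's ensemble prediction
r = [y.copy() for _ in range(K)]      # local copies of the residual (dual variable)

for t in range(T):
    # Diffusion step: combine residual estimates with immediate neighbors.
    r = [sum(A[l, k] * r[l] for l in range(K)) for k in range(K)]
    for k in range(K):
        h = fit_weak_learner(X[k], r[k])  # boost on the locally visible features
        F[k] += eta * h
        r[k] -= eta * h                   # descend along the shared residual

print("residual norms:", [round(float(np.linalg.norm(rk)), 3) for rk in r])
```

The diffusion step keeps the local residual copies in agreement, so each cluster effectively boosts against the network-wide fit even though it can only add weak learners built from its own features.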