Efficient Distributed Online Prediction and Stochastic Optimization with Approximate Distributed Averaging
We study distributed methods for online prediction and stochastic
optimization. Our approach is iterative: in each round nodes first perform
local computations and then communicate in order to aggregate information and
synchronize their decision variables. Synchronization is accomplished through
the use of a distributed averaging protocol. When an exact distributed
averaging protocol is used, it is known that the optimal regret bound of
$O(\sqrt{m})$ can be achieved using the distributed mini-batch
algorithm of Dekel et al. (2012), where $m$ is the total number of samples
processed across the network. We focus on methods using approximate distributed
averaging protocols and show that the optimal regret bound can also be achieved
in this setting. In particular, we propose a gossip-based optimization method
which achieves the optimal regret bound. The amount of communication required
depends on the network topology through the second largest eigenvalue of the
transition matrix of a random walk on the network. In the setting of stochastic
optimization, the proposed gossip-based approach achieves nearly-linear
scaling: the optimization error is guaranteed to be no more than $\epsilon$
after $O(1/(n\epsilon^2))$ rounds, each of which involves a logarithmic
number of gossip iterations, when nodes communicate over a
well-connected graph. This scaling law is also observed in numerical
experiments on a cluster.
Comment: 30 pages, 2 figures
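To make the synchronization step concrete, below is a minimal sketch of approximate distributed averaging by synchronous gossip, in the spirit of the abstract but not the authors' algorithm: the ring topology, the uniform 1/3 mixing weights, and all numerical parameters are illustrative assumptions. It shows how the deviation from the true average contracts at a rate set by the second largest eigenvalue of the mixing matrix, the quantity the abstract identifies as controlling the communication cost.

```python
# Sketch of approximate distributed averaging via synchronous gossip.
# Illustrative assumptions (not from the paper): a ring topology with
# uniform 1/3 weights. Any doubly stochastic mixing matrix W works;
# the contraction rate per iteration is lambda_2(W), the second
# largest eigenvalue (in magnitude) of W.
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic matrix: each node averages itself and its two
    ring neighbors with equal weight 1/3."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1.0 / 3.0
    return W

def gossip_average(x, W, num_iters):
    """Run num_iters gossip iterations x <- W x. The deviation from the
    true average shrinks by roughly a factor of lambda_2(W) per iteration,
    so about log(1/delta) / log(1/lambda_2) iterations reach accuracy delta."""
    for _ in range(num_iters):
        x = W @ x
    return x

n = 16
rng = np.random.default_rng(0)
x0 = rng.normal(size=n)                 # one local value per node
W = ring_mixing_matrix(n)
lam2 = np.sort(np.abs(np.linalg.eigvals(W)))[-2]
x = gossip_average(x0, W, num_iters=60)
print(f"lambda_2 = {lam2:.4f}")
print(f"max deviation from true mean: {np.max(np.abs(x - x0.mean())):.2e}")
```

On a well-connected graph $\lambda_2$ is bounded away from 1, so a logarithmic number of gossip iterations per round keeps the averaging error small; this is the mechanism behind the nearly-linear scaling claimed above.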
Online Learning of Dynamic Parameters in Social Networks
This paper addresses the problem of online learning in a dynamic setting. We
consider a social network in which each individual observes a private signal
about the underlying state of the world and communicates with her neighbors at
each time period. Unlike many existing approaches, the underlying state is
dynamic, and evolves according to a geometric random walk. We view the scenario
as an optimization problem where agents aim to learn the true state while
suffering the smallest possible loss. Based on the decomposition of the global
loss function, we introduce two update mechanisms, each of which generates an
estimate of the true state. We establish a tight bound on the rate of change of
the underlying state, under which individuals can track the parameter with a
bounded variance. Then, we characterize explicit expressions for the steady
state mean-square deviation (MSD) of the estimates from the truth, per
individual. We observe that only one of the estimators recovers the optimal
MSD, which underscores the impact of the objective function decomposition on
the learning quality. Finally, we provide an upper bound on the regret of the
proposed methods, measured as an average of errors in estimating the parameter
in a finite time.
Comment: 12 pages, To appear in Neural Information Processing Systems (NIPS) 2013
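As a rough illustration of the tracking problem described above, here is a hedged sketch of a consensus-plus-innovation estimator. The specific state dynamics (a geometric random walk with assumed drift scale `sigma_w`), the ring network, the mixing weights, and the step size are all assumptions made for illustration; the paper's two update mechanisms, which arise from a decomposition of the global loss function, are not reproduced here.

```python
# Sketch of distributed tracking of a dynamic state by consensus plus a
# local innovation term. Illustrative assumptions (not the paper's exact
# update rules): the hidden state follows a geometric random walk
# theta_{t+1} = theta_t * exp(sigma_w * w_t); each agent i sees a private
# noisy signal y_i(t) = theta_t + v_i(t); the network is a ring.
import numpy as np

rng = np.random.default_rng(1)
n, T = 10, 500
sigma_w = 0.01          # drift scale of the geometric random walk (assumed)
sigma_v = 0.5           # observation noise level (assumed)
step = 0.3              # innovation step size (assumed)

# Ring network: each agent mixes its own and two neighbors' estimates.
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[i, j % n] = 1.0 / 3.0

theta = 1.0             # true hidden state
est = np.zeros(n)       # each agent's estimate of theta

for t in range(T):
    theta *= np.exp(sigma_w * rng.normal())    # state evolves
    y = theta + sigma_v * rng.normal(size=n)   # private signals
    cons = W @ est                             # consensus on neighbors
    est = cons + step * (y - cons)             # correct with local innovation

msd = np.mean((est - theta) ** 2)              # per-individual MSD proxy
print(f"true state: {theta:.3f}  mean estimate: {est.mean():.3f}  MSD: {msd:.4f}")
```

Whether such an estimator tracks the state with bounded variance depends on how fast the state moves relative to the innovation step, which mirrors the abstract's bound on the rate of change of the underlying state.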