188 research outputs found
A Discussion on Parallelization Schemes for Stochastic Vector Quantization Algorithms
This paper studies parallelization schemes for stochastic Vector Quantization
algorithms in order to obtain speed-ups using distributed resources. We
show that the most intuitive parallelization scheme does not lead to better
performance than the sequential algorithm. Another distributed scheme is
therefore introduced which achieves the expected speed-ups. It is then improved
to suit distributed architectures where communications are slow and
inter-machine synchronization is too costly. The schemes are tested on
simulated distributed architectures and, for the last one, on the Microsoft
Windows Azure platform, obtaining speed-ups with up to 32 Virtual Machines.
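As a rough illustration of the kind of scheme under discussion, below is a minimal sketch of a synchronous master/worker parallelization of a stochastic VQ (competitive learning) step. The averaging rule, batch sizes, and learning-rate schedule are illustrative assumptions, not the paper's actual schemes.

```python
# Minimal sketch of a synchronous parallel stochastic VQ round, assuming a
# simple master/worker displacement-averaging scheme (an assumption, not the
# paper's exact parallelization).
import numpy as np

def local_vq_step(centers, batch, lr):
    """Competitive-learning update: move the winning center toward each sample."""
    updated = centers.copy()
    for x in batch:
        winner = np.argmin(np.linalg.norm(updated - x, axis=1))
        updated[winner] += lr * (x - updated[winner])
    return updated

def parallel_vq_round(centers, batches, lr):
    """One round: each (simulated) worker runs local steps on its own batch,
    then the master averages the resulting displacements."""
    displacements = [local_vq_step(centers, b, lr) - centers for b in batches]
    return centers + np.mean(displacements, axis=0)

rng = np.random.default_rng(0)
data = rng.normal(size=(4000, 2))
centers = data[rng.choice(len(data), size=8, replace=False)]
for t in range(100):
    batches = np.array_split(rng.permutation(data)[:400], 4)  # 4 simulated workers
    centers = parallel_vq_round(centers, batches, lr=1.0 / (t + 1))
```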
Exponentially Fast Parameter Estimation in Networks Using Distributed Dual Averaging
In this paper we present an optimization-based view of distributed parameter
estimation and observational social learning in networks. Agents receive a
sequence of random, independent and identically distributed (i.i.d.) signals,
each of which individually may not be informative about the underlying true
state, but the signals together are globally informative enough to make the
true state identifiable. Using an optimization-based characterization of
Bayesian learning as proximal stochastic gradient descent (with
Kullback-Leibler divergence from a prior as a proximal function), we show how
to efficiently use a distributed, online variant of Nesterov's dual averaging
method to solve the estimation problem with purely local information. When the true
state is globally identifiable, and the network is connected, we prove that
agents eventually learn the true parameter using a randomized gossip scheme. We
demonstrate that with high probability the convergence is exponentially fast
with a rate dependent on the KL divergence of observations under the true state
from observations under the second likeliest state. Furthermore, our work also
highlights the possibility of learning under continuous adaptation of the
network, which is a consequence of employing a constant, unit stepsize for the
algorithm.
Comment: 6 pages, To appear in Conference on Decision and Control 201
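A minimal sketch of the dual averaging update described here, over a finite state space: each agent mixes its neighbors' dual variables, adds the log-likelihood of its new signal (the gradient, with unit stepsize), and forms beliefs through the entropic (KL) mirror map, i.e. a softmax. The Gaussian signal model and the fixed averaging matrix P are assumptions; the paper uses a randomized gossip scheme.

```python
# Sketch of distributed dual averaging for parameter estimation, assuming
# Gaussian likelihoods and a fixed doubly stochastic mixing matrix P
# (simplifications of the paper's randomized gossip setting).
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

states = np.array([0.0, 0.5, 1.0])       # candidate parameter values
true_state = 2                            # index of the true state
n_agents = 4
P = np.full((n_agents, n_agents), 1.0 / n_agents)  # averaging (mixing) matrix

rng = np.random.default_rng(1)
z = np.zeros((n_agents, len(states)))     # dual variables, one row per agent
for t in range(500):
    # each agent observes a noisy i.i.d. signal of the true state
    signals = states[true_state] + rng.normal(scale=1.0, size=n_agents)
    # log-likelihood of the signal under each candidate state
    loglik = -0.5 * (signals[:, None] - states[None, :]) ** 2
    # dual averaging, unit stepsize: mix neighbors' duals, add new gradient
    z = P @ z + loglik
    beliefs = np.apply_along_axis(softmax, 1, z)

print(beliefs.argmax(axis=1))  # all agents concentrate on the true state
```

The dual variables accumulate log-likelihoods, so the belief gap grows with the KL divergence between the true state and the second likeliest one, which is what drives the exponential rate in the abstract.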
Online Learning of Dynamic Parameters in Social Networks
This paper addresses the problem of online learning in a dynamic setting. We
consider a social network in which each individual observes a private signal
about the underlying state of the world and communicates with her neighbors at
each time period. Unlike many existing approaches, the underlying state is
dynamic, and evolves according to a geometric random walk. We view the scenario
as an optimization problem where agents aim to learn the true state while
suffering the smallest possible loss. Based on the decomposition of the global
loss function, we introduce two update mechanisms, each of which generates an
estimate of the true state. We establish a tight bound on the rate of change of
the underlying state, under which individuals can track the parameter with a
bounded variance. Then, we derive explicit expressions for the steady-state
mean-square deviation (MSD) of the estimates from the truth, per
individual. We observe that only one of the estimators recovers the optimal
MSD, which underscores the impact of the objective function decomposition on
the learning quality. Finally, we provide an upper bound on the regret of the
proposed methods, measured as an average of errors in estimating the parameter
in finite time.
Comment: 12 pages, To appear in Neural Information Processing Systems (NIPS) 201
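As a rough illustration, the sketch below tracks a geometrically evolving state with a consensus step over neighbors' estimates followed by an innovation step toward each agent's private signal. The growth rate, weights, and noise model are assumptions for illustration, not the paper's specification of the two update mechanisms.

```python
# Sketch of consensus-plus-innovation tracking of a dynamic state, assuming a
# known geometric growth rate and uniform averaging weights (illustrative
# assumptions, not the paper's exact update rules).
import numpy as np

rng = np.random.default_rng(2)
n_agents, a = 10, 1.01                   # a: assumed geometric growth rate
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # assumed averaging weights

theta = 1.0                              # hidden dynamic state
estimates = np.zeros(n_agents)
for t in range(200):
    theta = a * theta * np.exp(rng.normal(scale=0.01))       # geometric random walk
    signals = theta + rng.normal(scale=0.5, size=n_agents)   # private signals
    # consensus over neighbors' estimates, then an innovation step toward the
    # private signal; both rescaled by a to follow the moving target
    consensus = W @ estimates
    estimates = a * (consensus + 0.5 * (signals - consensus))

print(abs(estimates - theta).mean())     # tracking error stays bounded
```

When the growth rate is small enough relative to the innovation weight, the error recursion is a contraction and the variance of the tracking error stays bounded, mirroring the bound on the rate of change discussed in the abstract.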