Stochastic Subgradient Algorithms for Strongly Convex Optimization over Distributed Networks
We study diffusion- and consensus-based optimization of a sum of unknown
convex objective functions over distributed networks. The only access to these
functions is through stochastic gradient oracles, each of which is available
only at a different node, and a limited number of gradient oracle calls is
allowed at each node. In this framework, we introduce a convex optimization
algorithm based on stochastic gradient descent (SGD) updates. In particular,
we use a carefully designed time-dependent weighted averaging of the SGD
iterates, which yields a convergence rate of $O(N\sqrt{N}/T)$ after $T$
gradient updates for each node on a network of $N$ nodes. We then show that
after $T$ gradient oracle calls, the average SGD iterate achieves a mean
square deviation (MSD) of $O(\sqrt{N}/T)$. This rate of convergence is optimal
as it matches the performance lower bound up to constant terms. Similar to the SGD
algorithm, the computational complexity of the proposed algorithm also scales
linearly with the dimensionality of the data. Furthermore, the communication
load of the proposed method is the same as that of the SGD
algorithm. Thus, the proposed algorithm is highly efficient in terms of
complexity and communication load. We illustrate the merits of the algorithm
with respect to the state-of-the-art methods over benchmark real-life data
sets and widely studied network topologies.
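
A rough illustration of the idea, not the paper's exact algorithm: the sketch below runs diffusion SGD on a toy distributed least-squares problem over a ring of nodes and forms a time-dependent weighted average of the iterates, with weights growing linearly in t. The network, data model, step size, and all names are illustrative assumptions.

```python
# A minimal sketch of diffusion SGD with time-dependent weighted iterate
# averaging, assuming a toy strongly convex problem (distributed least
# squares over a ring network). Not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)

N, d, T = 5, 3, 2000              # nodes, dimension, gradient calls per node
w_true = rng.standard_normal(d)   # common parameter to estimate

# Doubly stochastic combination matrix for a ring network.
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 0.5
    A[i, (i - 1) % N] = 0.25
    A[i, (i + 1) % N] = 0.25

w = np.zeros((N, d))      # current iterates, one row per node
w_avg = np.zeros((N, d))  # time-weighted running averages

for t in range(1, T + 1):
    # Local SGD step: each node queries its stochastic gradient oracle once.
    for i in range(N):
        x = rng.standard_normal(d)                     # random regressor
        y = x @ w_true + 0.1 * rng.standard_normal()   # noisy observation
        grad = (w[i] @ x - y) * x                      # instantaneous LS gradient
        w[i] -= (1.0 / t) * grad                       # O(1/t) step size

    # Diffusion: combine iterates received from neighbors.
    w = A @ w

    # Time-dependent averaging: weight proportional to t emphasizes late
    # iterates; the weights 2t/(T(T+1)) sum to one over t = 1..T.
    w_avg += (2.0 * t / (T * (T + 1))) * w

print("MSD of averaged iterates:",
      np.mean(np.sum((w_avg - w_true) ** 2, axis=1)))
```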
A Multitask Diffusion Strategy with Optimized Inter-Cluster Cooperation
We consider a multitask estimation problem where nodes in a network are
divided into several connected clusters, with each cluster performing a
least-mean-squares estimation of a different random parameter vector. Inspired
by the adapt-then-combine diffusion strategy, we propose a multitask diffusion
strategy whose mean stability can be ensured whenever individual nodes are
stable in the mean, regardless of the inter-cluster cooperation weights. In
addition, the proposed strategy achieves asymptotically unbiased estimates
when the parameter vectors have the same mean. We also develop an
inter-cluster cooperation weight selection scheme that allows each node in the
network to locally optimize its inter-cluster cooperation weights. Numerical
results demonstrate that our approach leads to a lower average steady-state
network mean-square deviation, compared with using weights selected by various
other commonly adopted methods in the literature.
Comment: 30 pages, 8 figures, submitted to IEEE Journal of Selected Topics in Signal Processing.
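
To make the setting concrete, here is a minimal multitask adapt-then-combine (ATC) diffusion LMS sketch with two clusters estimating different parameter vectors. The inter-cluster term below is a generic regularization-style pull toward other-cluster neighbors; the paper's optimized weight-selection scheme is not reproduced, and the topology and constants are illustrative assumptions.

```python
# A minimal sketch of multitask ATC diffusion LMS, assuming two clusters on
# a line graph 0-1-2-3. Generic inter-cluster cooperation, not the paper's
# optimized weight-selection scheme.
import numpy as np

rng = np.random.default_rng(1)

d, T, mu, eta = 2, 3000, 0.02, 0.01
cluster = [0, 0, 1, 1]                    # node -> cluster assignment
targets = rng.standard_normal((2, d))     # one parameter vector per cluster
N = len(cluster)
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}

w = np.zeros((N, d))
for t in range(T):
    # Adapt: local LMS step at each node against its own cluster's target.
    psi = np.empty_like(w)
    for i in range(N):
        x = rng.standard_normal(d)
        y = x @ targets[cluster[i]] + 0.05 * rng.standard_normal()
        psi[i] = w[i] + mu * (y - x @ w[i]) * x

    # Combine: average over same-cluster neighbors, plus a small
    # inter-cluster cooperation pull toward other-cluster neighbors.
    for i in range(N):
        same = [j for j in neighbors[i] if cluster[j] == cluster[i]]
        other = [j for j in neighbors[i] if cluster[j] != cluster[i]]
        w[i] = np.mean(psi[same], axis=0)
        for j in other:
            w[i] += eta * (psi[j] - psi[i])   # inter-cluster cooperation

print("per-node MSD:", np.sum((w - targets[cluster]) ** 2, axis=1))
```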
Compressive Diffusion Strategies Over Distributed Networks for Reduced Communication Load
We study compressive diffusion strategies over distributed networks, based
on the diffusion implementation and adaptive extraction of the information
from the compressed diffusion data. We demonstrate that one can achieve
performance comparable to that of the full information exchange
configurations, even if the diffused information is compressed into a scalar
or a single bit. To this end,
we provide a complete performance analysis for the compressive diffusion
strategies. We analyze the transient, steady-state and tracking performance of
the configurations in which the diffused data is compressed into a scalar or
a single bit. We propose a new adaptive combination method that further
improves the convergence performance of the compressive diffusion strategies.
In the new method, we introduce an additional degree of freedom in the
combination matrix and adapt it using a conventional mixture approach, in
order to enhance the convergence performance for any combination rule used for
the full diffusion configuration. We demonstrate that our theoretical
analysis closely follows the ensemble-averaged results in our simulations. We
provide numerical examples showing the improved convergence performance
achieved with the new adaptive combination method.
Comment: Submitted to IEEE Transactions on Signal Processing.
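
As a hedged sketch of the single-bit case (not the paper's exact construction): each node broadcasts one bit per neighbor comparing a shared pseudo-random projection of its intermediate estimate against the projection of the copy its neighbor holds, and the neighbor updates that copy with a sign-LMS-style rule. The synchronized projection direction, the topology, and all constants are assumptions.

```python
# A minimal sketch of single-bit compressive diffusion LMS, assuming all
# nodes share a synchronized pseudo-random projection direction h each step.
# In a real system the transmitter keeps an identical, synchronized copy of
# what its neighbors hold, so it can compute the bit locally; the simulation
# below reads the receiver's copy directly as a shortcut.
import numpy as np

rng = np.random.default_rng(2)

N, d, T, mu, nu = 4, 3, 5000, 0.02, 0.05
w_true = rng.standard_normal(d)
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

w = np.zeros((N, d))
copies = np.zeros((N, N, d))  # copies[i, j]: node i's copy of node j's psi

for t in range(T):
    h = rng.standard_normal(d)   # shared projection direction
    h /= np.linalg.norm(h)

    # Adapt: local LMS update.
    psi = np.empty_like(w)
    for i in range(N):
        x = rng.standard_normal(d)
        y = x @ w_true + 0.05 * rng.standard_normal()
        psi[i] = w[i] + mu * (y - x @ w[i]) * x

    # Exchange: one bit per link, comparing the projection of the true psi_j
    # with the projection of the copy held at the receiver.
    for i in range(N):
        for j in neighbors[i]:
            bit = np.sign(h @ psi[j] - h @ copies[i, j])  # single diffused bit
            copies[i, j] += nu * bit * h                  # sign-LMS-style update

    # Combine: uniform weights over own psi and reconstructed neighbor copies.
    for i in range(N):
        stack = [psi[i]] + [copies[i, j] for j in neighbors[i]]
        w[i] = np.mean(stack, axis=0)

print("MSD:", np.mean(np.sum((w - w_true) ** 2, axis=1)))
```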