17 research outputs found
Weight Optimization for Consensus Algorithms with Correlated Switching Topology
We design the weights in consensus algorithms with spatially correlated
random topologies. These arise in: 1) networks with spatially correlated
random link failures; and 2) networks with randomized averaging protocols. We
show that the weight optimization problem is convex for both symmetric and
asymmetric random graphs. With symmetric random networks, we choose the
consensus mean squared error (MSE) convergence rate as optimization criterion
and explicitly express this rate as a function of the link formation
probabilities, the link formation spatial correlations, and the consensus
weights. We prove that the MSE convergence rate is a convex, nonsmooth function
of the weights, enabling global optimization of the weights for arbitrary link
formation probabilities and link correlation structures. We extend our results
to the case of asymmetric random links. We adopt as optimization criterion the
mean squared deviation (MSdev) of the nodes' states from the current average
state. We prove that MSdev is a convex function of the weights. Simulations
show that significant performance gain is achieved with our weight design
method when compared with methods available in the literature.Comment: 30 pages, 5 figures, submitted to IEEE Transactions On Signal
Processin
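The switching-topology consensus recursion that this abstract's weight design targets can be sketched in a few lines. The simulation below is a hypothetical toy, not the authors' optimized design: each link carries a fixed weight and fails independently at every iteration (a simple uncorrelated special case of the correlated-failure model studied in the paper).

```python
import random

def consensus_step(x, edges, weights):
    """One synchronous iteration x <- W x over the active edge set."""
    x_new = list(x)
    for (i, j), w in zip(edges, weights):
        d = w * (x[j] - x[i])
        x_new[i] += d  # node i moves toward node j
        x_new[j] -= d  # node j moves toward node i; the sum is preserved
    return x_new

def simulate_consensus(x0, edges, weights, p_fail, steps, seed=0):
    """Consensus over a random switching topology: every edge
    independently fails with probability p_fail at each step."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(steps):
        active = [(e, w) for e, w in zip(edges, weights)
                  if rng.random() > p_fail]
        if active:
            x = consensus_step(x, [e for e, _ in active],
                               [w for _, w in active])
    return x
```

Because each pairwise update is symmetric, the network average is preserved exactly while the nodes' disagreement shrinks; the weight optimization in the abstract tunes how fast that shrinkage happens in the mean-square sense.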
When gossip meets consensus : convergence in correlated random WSNs
Peer Reviewed. Postprint (author's final draft).
Consensus and Products of Random Stochastic Matrices: Exact Rate for Convergence in Probability
Distributed consensus and other linear systems with system stochastic
matrices emerge in various settings, like opinion formation in social
networks, rendezvous of robots, and distributed inference in sensor networks.
The matrices are often random, due to, e.g., random packet dropouts in
wireless sensor networks. Key in analyzing the performance of such systems is
studying the convergence of the matrix products W_k W_{k-1} ⋯ W_1. In this paper, we
find the exact exponential rate for the convergence in probability of the
product of such matrices as time grows large, under the assumption that
the W_k's are symmetric and independent identically distributed in time.
Further, for commonly used random models like gossip and link failure, we
show that the rate is found by solving a min-cut problem and is, hence, easily
computable. Finally, we apply our results to optimally allocate the sensors'
transmission power in consensus+innovations distributed detection.
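The min-cut characterization can be illustrated with a toy computation. The brute-force sketch below is an illustrative assumption, not the paper's algorithm: given independent per-link failure probabilities, it minimizes over node bipartitions the value -log P(all links crossing the cut fail simultaneously), which is the flavor of easily computable rate the abstract describes for the link-failure model.

```python
import math
from itertools import combinations

def mincut_rate(n, links, p_fail):
    """Minimize, over all nontrivial node bipartitions (S, S^c),
    -log P(every link across the cut fails simultaneously),
    assuming independent link failures.

    links  : list of edges (i, j)
    p_fail : dict mapping each edge to its failure probability
    """
    best = float("inf")
    for k in range(1, n):  # size of one side of the cut
        for S in combinations(range(n), k):
            side = set(S)
            cut = [e for e in links if (e[0] in side) != (e[1] in side)]
            log_p = sum(math.log(p_fail[e]) for e in cut)
            best = min(best, -log_p)
    return best
```

For small graphs the enumeration is immediate; on a triangle with failure probability 0.1 per link, every cut severs two links, so the rate is -log(0.01).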
Diffusion Adaptation over Networks under Imperfect Information Exchange and Non-stationary Data
Adaptive networks rely on in-network and collaborative processing among
distributed agents to deliver enhanced performance in estimation and inference
tasks. Information is exchanged among the nodes, usually over noisy links. The
combination weights that are used by the nodes to fuse information from their
neighbors play a critical role in influencing the adaptation and tracking
abilities of the network. This paper first investigates the mean-square
performance of general adaptive diffusion algorithms in the presence of various
sources of imperfect information exchange, quantization errors, and model
non-stationarities. Among other results, the analysis reveals that link noise
over the regression data modifies the dynamics of the network evolution in a
distinct way, and leads to biased estimates in steady-state. The analysis also
reveals how the network mean-square performance is dependent on the combination
weights. We use these observations to show how the combination weights can be
optimized and adapted. Simulation results illustrate the theoretical findings
and match well with theory. Comment: 36 pages, 7 figures, to appear in IEEE Transactions on Signal
Processing, June 2012.
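A minimal adapt-then-combine (ATC) diffusion LMS sketch shows where the combination weights enter. This is a hypothetical scalar-parameter toy with ideal noise-free links, not the paper's general model with imperfect exchange; the matrix A of combination weights is the design object whose optimization the abstract discusses.

```python
import random

def atc_diffusion_lms(w_true, A, mu, iters, noise_std=0.1, seed=0):
    """Adapt-then-combine diffusion LMS for a scalar parameter.
    A[l][k] is the combination weight node k assigns to neighbor l
    (each column of A sums to 1)."""
    rng = random.Random(seed)
    n = len(A)
    w = [0.0] * n
    for _ in range(iters):
        # adaptation: each node takes a local LMS step on fresh data
        psi = []
        for k in range(n):
            u = rng.gauss(0.0, 1.0)                      # regressor
            d = u * w_true + rng.gauss(0.0, noise_std)   # noisy measurement
            psi.append(w[k] + mu * u * (d - u * w[k]))
        # combination: each node fuses neighbors' intermediate estimates
        w = [sum(A[l][k] * psi[l] for l in range(n)) for k in range(n)]
    return w
```

With uniform combination weights on a fully connected 3-node network, all nodes' estimates converge to a neighborhood of the true parameter; the abstract's analysis shows how the steady-state mean-square performance depends on A, motivating its optimization.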
Online Resource Inference in Network Utility Maximization Problems
The amount of transmitted data in computer networks is expected to grow
considerably in the future, putting more and more pressure on the network
infrastructures. In order to guarantee a good service, it then becomes
fundamental to use the network resources efficiently. Network Utility
Maximization (NUM) provides a framework to optimize the rate allocation when
network resources are limited. Unfortunately, in the scenario where the amount
of available resources is not known a priori, classical NUM solving methods do
not offer a viable solution. To overcome this limitation we design an overlay
rate allocation scheme that attempts to infer the actual amount of available
network resources while coordinating the users' rate allocation. Due to the
general and complex model assumed for the congestion measurements, a passive
learning of the available resources would not lead to satisfying performance.
The coordination scheme must then perform active learning in order to speed up
the resource estimation and quickly improve the system performance. By
adopting an optimal learning formulation, we are able to balance the tradeoff
between accurate estimation and effective resource exploitation, in order
to maximize the long-term quality of the service delivered to the users.
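For context, the classical NUM baseline with known capacity, which the abstract generalizes to the unknown-resource setting, can be sketched via dual decomposition. The log utilities and single shared link below are illustrative assumptions, not the paper's model.

```python
def num_dual_decomposition(capacity, n_users, step=0.01, iters=5000):
    """Dual decomposition for: maximize sum_i log(x_i)
    subject to sum_i x_i <= capacity (one shared link, known capacity).
    Each user's best response to link price lam is x_i = 1 / lam;
    the price rises when demand exceeds capacity.
    With log utilities the optimum is x_i = capacity / n_users."""
    lam = 1.0
    for _ in range(iters):
        x = [1.0 / lam] * n_users                          # per-user best response
        lam = max(1e-6, lam + step * (sum(x) - capacity))  # subgradient price update
    return x
```

When the capacity is not known a priori, as in the scenario the abstract targets, this price update has nothing reliable to react to, which is why the proposed scheme must actively probe the network to infer the available resources while allocating rates.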