Distributed Averaging via Lifted Markov Chains
Motivated by applications of distributed linear estimation, distributed
control and distributed optimization, we consider the question of designing
linear iterative algorithms for computing the average of numbers in a network.
Specifically, our interest is in designing such an algorithm with the fastest
rate of convergence given the topological constraints of the network. As the
main result of this paper, we design an algorithm with the fastest possible
rate of convergence using a non-reversible Markov chain on the given network
graph. We construct such a Markov chain by transforming the standard Markov
chain, which is obtained using the Metropolis-Hastings method. We call this
novel transformation pseudo-lifting. We apply our method to graphs with
geometry, or graphs with doubling dimension. Specifically, the convergence time
of our algorithm (equivalently, the mixing time of our Markov chain) is
proportional to the diameter of the network graph and hence optimal. As a
byproduct, our result provides the fastest mixing Markov chain given the
network topological constraints, and should naturally find applications in
the context of distributed optimization, estimation, and control.
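The baseline chain mentioned above can be sketched in a few lines. This is a minimal illustration of linear iterative averaging with Metropolis-Hastings weights on a hypothetical 5-node path graph; the paper's pseudo-lifting construction, which achieves diameter-proportional convergence, is more involved and is not shown here.

```python
# Sketch: linear averaging x <- W x with Metropolis-Hastings weights
# W[i][j] = 1 / (1 + max(deg(i), deg(j))) for each edge {i, j}.
# (Hypothetical 5-node path example; not the paper's pseudo-lifted chain.)

def metropolis_weights(adj):
    """Build the Metropolis-Hastings weight matrix for an undirected graph."""
    n = len(adj)
    deg = [len(adj[i]) for i in range(n)]
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in adj[i]:
            W[i][j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i][i] = 1.0 - sum(W[i])  # self-loop keeps rows stochastic
    return W

def iterate(W, x, steps):
    """Run the linear iteration x <- W x for a number of steps."""
    n = len(x)
    for _ in range(steps):
        x = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

# Path graph 0-1-2-3-4: since W is symmetric (doubly stochastic),
# every node's value converges to the average of the initial values.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
x = iterate(metropolis_weights(adj), [10.0, 0.0, 0.0, 0.0, 0.0], 500)
print(x)  # all entries close to the average, 2.0
```

Because the Metropolis-Hastings weights are symmetric, the iteration preserves the sum of the values, which is why it converges to the exact average rather than some other consensus value.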
The Fastest Mixing Markov Process on a Graph and a Connection to a Maximum Variance Unfolding Problem
We consider a Markov process on a connected graph, with edges labeled with transition rates between the adjacent vertices. The distribution of the Markov process converges to the uniform distribution at a rate determined by the second smallest eigenvalue lambda_2 of the Laplacian of the weighted graph. In this paper we consider the problem of assigning transition rates to the edges so as to maximize lambda_2 subject to a linear constraint on the rates. This is the problem of finding the fastest mixing Markov process (FMMP) on the graph. We show that the FMMP problem is a convex optimization problem, which can in turn be expressed as a semidefinite program, and therefore effectively solved numerically. We formulate a dual of the FMMP problem and show that it has a natural geometric interpretation as a maximum variance unfolding (MVU) problem, i.e., the problem of choosing a set of points to be as far apart as possible, measured by their variance, while respecting local distance constraints. This MVU problem is closely related to a problem recently proposed by Weinberger and Saul as a method for "unfolding" high-dimensional data that lies on a low-dimensional manifold. The duality between the FMMP and MVU problems sheds light on both problems, and allows us to characterize and, in some cases, find optimal solutions.
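The objective in the FMMP problem can be made concrete with a small numeric sketch. The example below (a hypothetical 4-node path with made-up rates) builds the weighted graph Laplacian and reads off lambda_2, the quantity the semidefinite program maximizes; it does not solve the SDP itself.

```python
# Sketch: evaluate lambda_2 of a weighted graph Laplacian, the objective
# of the FMMP problem. (Hypothetical 4-node path; rates are illustrative.)
import numpy as np

def laplacian_lambda2(n, edges):
    """edges: {(i, j): rate}. Return the second-smallest Laplacian eigenvalue."""
    L = np.zeros((n, n))
    for (i, j), w in edges.items():
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    # eigvalsh returns eigenvalues in ascending order; index 0 is always 0
    # for a connected graph, so index 1 is lambda_2.
    return np.linalg.eigvalsh(L)[1]

# Two rate assignments with the same total budget of 3 on a 4-node path;
# the FMMP problem asks which allocation maximizes lambda_2.
uniform = laplacian_lambda2(4, {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0})
skewed = laplacian_lambda2(4, {(0, 1): 0.5, (1, 2): 2.0, (2, 3): 0.5})
print(uniform, skewed)
```

For the uniform unit-rate path, the Laplacian eigenvalues are 2 - 2cos(k*pi/4), so lambda_2 = 2 - sqrt(2).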
Fastest mixing Markov chain on graphs with symmetries
We show how to exploit symmetries of a graph to efficiently compute the
fastest mixing Markov chain on the graph (i.e., find the transition
probabilities on the edges to minimize the second-largest eigenvalue modulus of
the transition probability matrix). Exploiting symmetry can lead to significant
reduction in both the number of variables and the size of matrices in the
corresponding semidefinite program, thus enabling numerical solution of
large-scale instances that are otherwise computationally infeasible. We obtain
analytic or semi-analytic results for particular classes of graphs, such as
edge-transitive and distance-transitive graphs. We describe two general
approaches for symmetry exploitation, based on orbit theory and
block-diagonalization, respectively. We also establish the connection between
these two approaches.
Comment: 39 pages, 15 figures
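The symmetry reduction described above can be illustrated on the simplest case. For an edge-transitive graph such as an n-cycle, symmetry implies an optimal chain can be sought with a single edge probability p, collapsing the semidefinite program to a one-dimensional search. The sketch below (hypothetical 8-cycle) scans that single variable for the value minimizing the second-largest eigenvalue modulus (SLEM).

```python
# Sketch: on an edge-transitive graph (an n-cycle), symmetry reduces the
# fastest-mixing-chain SDP to one variable, the common edge probability p.
# (Illustrative 8-cycle; a coarse grid scan stands in for the SDP solver.)
import numpy as np

def cycle_slem(n, p):
    """SLEM of the symmetric chain on an n-cycle with edge probability p."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, (i + 1) % n] = p
        P[i, (i - 1) % n] = p
        P[i, i] = 1 - 2 * p  # self-loop keeps rows stochastic
    eig = np.sort(np.abs(np.linalg.eigvalsh(P)))[::-1]
    return eig[1]  # eig[0] is the trivial eigenvalue 1

# One-dimensional search over the single remaining variable.
best_slem, best_p = min((cycle_slem(8, p / 100), p / 100)
                        for p in range(1, 50))
print(best_slem, best_p)
```

The eigenvalues of this circulant chain are 1 - 2p(1 - cos(2*pi*k/n)), so the scan is only confirming what the symmetry argument already makes computable by hand; on large graphs with less symmetry, the orbit-theory and block-diagonalization machinery plays the analogous role.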
Convergence Speed of the Consensus Algorithm with Interference and Sparse Long-Range Connectivity
We analyze the effect of interference on the convergence rate of average
consensus algorithms, which iteratively compute the measurement average by
message passing among nodes. It is usually assumed that these algorithms
converge faster with a greater exchange of information (i.e., by increased
network connectivity) in every iteration. However, when interference is taken
into account, it is no longer clear if the rate of convergence increases with
network connectivity. We study this problem for randomly-placed
consensus-seeking nodes connected through an interference-limited network. We
investigate the following questions: (a) How does the rate of convergence vary
with increasing communication range of each node? and (b) How does this result
change when each node is allowed to communicate with a few selected far-off
nodes? When nodes schedule their transmissions to avoid interference, we show
that the convergence speed scales with r^(2-d), where r is the
communication range and d is the number of dimensions. This scaling is the
result of two competing effects when increasing r: increased schedule length
for interference-free transmission vs. the speed gain due to improved
connectivity. Hence, although one-dimensional networks can converge faster from
a greater communication range despite increased interference, the two effects
exactly offset one another in two dimensions. In higher dimensions, increasing
the communication range can actually degrade the rate of convergence. Our
results thus underline the importance of factoring in the effect of
interference in the design of distributed estimation algorithms.
Comment: 27 pages, 4 figures
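The two competing effects described in this abstract can be sketched with stylized arithmetic. The assumptions below are a simplified reading of the scaling argument, not the paper's analysis: the interference-free schedule length is taken to grow like r^d (more neighbors to schedule around), and the per-round gain from connectivity like r^2.

```python
# Stylized sketch of the competing effects behind the r^(2-d) scaling.
# Assumptions (simplified, not the paper's derivation): schedule length
# ~ r**d, per-round connectivity gain ~ r**2.

def convergence_speed(r, d):
    """Stylized convergence speed per unit time: per-round gain / schedule length."""
    schedule_length = r ** d  # interference-free slots needed per iteration
    per_round_gain = r ** 2   # faster per-iteration mixing from longer links
    return per_round_gain / schedule_length  # proportional to r**(2 - d)

for d in (1, 2, 3):
    speeds = [convergence_speed(r, d) for r in (1, 2, 4)]
    print(d, speeds)
# d=1: speed grows with r; d=2: the effects exactly cancel; d=3: speed shrinks.
```

This reproduces the abstract's qualitative conclusion: in one dimension a larger range helps despite interference, in two dimensions the effects offset, and in higher dimensions a larger range hurts.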