A randomized gossip consensus algorithm on convex metric spaces
A consensus problem consists of a group of dynamic agents who seek to agree upon certain quantities of
interest. This problem can be generalized in the context of convex metric spaces that extend the standard notion
of convexity. In this paper we introduce and analyze a randomized gossip algorithm for solving the generalized
consensus problem on convex metric spaces. We study the convergence properties of the algorithm using stochastic
differential equations theory. We show that the dynamics of the distances between the states of the agents can be
upper bounded by the dynamics of a stochastic differential equation driven by Poisson counters. In addition, we
introduce instances of the generalized consensus algorithm for several examples of convex metric spaces together
with numerical simulations. This material is based in part upon work supported by the NIST-ARRA Measurement Science and Engineering Fellowship Program
award 70NANB10H026, through the University of Maryland, and in part upon work supported by the Army Research Office award number
W911NF-08-1-0238 to Ohio State University.
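A minimal sketch of such a randomized pairwise gossip step, instantiated on the Euclidean plane (a particular convex metric space, with the midpoint map playing the role of the convex-combination operation). The network, initial states, and iteration count are illustrative, not taken from the paper:

```python
import random

import numpy as np

def gossip_consensus(states, edges, n_iters=2000, seed=0):
    """Pairwise midpoint gossip: at each tick a random link fires and
    both endpoints move to the midpoint W(x, y, 1/2) = (x + y) / 2."""
    rng = random.Random(seed)
    states = [np.asarray(s, dtype=float) for s in states]
    for _ in range(n_iters):
        i, j = rng.choice(edges)              # a random available link fires
        mid = 0.5 * (states[i] + states[j])   # convex-combination update
        states[i] = states[j] = mid
    return states

states = gossip_consensus(
    [np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 4.0])],
    edges=[(0, 1), (1, 2), (0, 2)],
)
spread = max(np.linalg.norm(a - b) for a in states for b in states)
print(spread)  # near zero: the agents agree
```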
GENERALIZED DISTRIBUTED CONSENSUS-BASED ALGORITHMS FOR UNCERTAIN SYSTEMS AND NETWORKS
We address four problems related to multi-agent optimization, filtering and agreement. First, we investigate collaborative optimization of an objective function expressed as a sum of local convex functions, when the agents make decisions in a distributed manner using local information, while the communication topology used to exchange messages and information is modeled by a graph-valued random process, assumed independent and identically distributed. Specifically, we study the performance of the consensus-based multi-agent distributed subgradient method and show how it depends on the probability distribution of the random graph. For the case of a constant stepsize, we first give
an upper bound on the difference between the objective function, evaluated at the agents' estimates of the optimal decision vector, and the optimal value. In addition, for a particular
class of convex functions, we give an upper bound on the distances between the agents' estimates of the optimal decision vector and the minimizer, and we provide the rate of convergence to zero of the time-varying component of the aforementioned upper
bound. The addressed metrics are evaluated via their expected values. As an application, we show how the distributed optimization algorithm can be used to perform collaborative system identification and provide numerical experiments under the randomized and broadcast gossip protocols.
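The consensus-based subgradient iteration with constant stepsize can be sketched as follows, for local quadratics f_i(x) = (x - a_i)^2 whose aggregate minimizer is mean(a_i); the doubly stochastic weight matrix and the stepsize are illustrative choices, not the paper's:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 6.0])     # local data; aggregate minimizer is 3.0
W = np.array([[0.5, 0.25, 0.0, 0.25],  # doubly stochastic weights on a ring
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
alpha = 0.01                           # constant stepsize
x = np.zeros(4)                        # agents' estimates
for _ in range(5000):
    grad = 2.0 * (x - a)               # local (sub)gradients
    x = W @ x - alpha * grad           # mix with neighbors, then descend
print(x)  # all estimates land near the minimizer mean(a) = 3.0
```

With a constant stepsize the estimates converge only to a neighborhood of the minimizer, which is exactly the kind of upper bound the abstract describes.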
Second, we generalize the asymptotic consensus problem to convex metric spaces. Under minimal connectivity assumptions, we show that if at each iteration an agent updates its state by choosing a point from a particular subset of the generalized convex hull generated by the agent's current state and the states of its neighbors, then agreement is achieved asymptotically. In addition, we give bounds on the distance between the consensus point(s) and the initial values of the agents. As an application example, we introduce a probabilistic algorithm for reaching consensus of opinion and show that it in fact fits our general framework.
Third, we discuss the linear asymptotic consensus problem for a network of dynamic agents whose communication network is modeled by a randomly switching graph. The switching is determined by a finite-state Markov process, each topology corresponding to a state of the process. We address both the cases where the dynamics of the agents are expressed in continuous and in discrete time. We show that, if the consensus matrices are doubly stochastic, average consensus is achieved in the mean square and almost sure senses if and only if the graph resulting from the union of the graphs corresponding to the states of the Markov process is strongly connected.
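A small simulation consistent with this third result: two doubly stochastic consensus matrices, each corresponding to a disconnected graph, are switched by a two-state Markov chain. Because the union of the two graphs is connected, average consensus is still reached. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

W1 = np.array([[0.5, 0.5, 0.0],      # graph 1: only link (0, 1) is active
               [0.5, 0.5, 0.0],
               [0.0, 0.0, 1.0]])
W2 = np.array([[1.0, 0.0, 0.0],      # graph 2: only link (1, 2) is active
               [0.0, 0.5, 0.5],
               [0.0, 0.5, 0.5]])
P = np.array([[0.7, 0.3],            # Markov transition matrix over topologies
              [0.4, 0.6]])

x = np.array([0.0, 3.0, 9.0])        # initial states; their average is 4.0
state = 0
for _ in range(500):
    x = (W1 if state == 0 else W2) @ x    # doubly stochastic consensus step
    state = rng.choice(2, p=P[state])     # Markov switch of the topology
print(x)  # all entries close to the initial average 4.0
```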
Fourth, we address the consensus-based distributed linear filtering problem, where a discrete time, linear stochastic process is observed by a network of sensors. We assume that the consensus weights are known and we first provide sufficient conditions under
which the stochastic process is detectable, i.e., for a specific choice of consensus weights there exists a set of filtering gains such that the dynamics of the estimation errors (without noise) are asymptotically stable. Next, we develop a distributed, sub-optimal filtering scheme based on minimizing an upper bound on a quadratic filtering cost. In the stationary case, we provide sufficient conditions under which this scheme converges, conditions
expressed in terms of the convergence properties of a set of coupled Riccati equations. We continue by presenting a connection between the consensus-based distributed linear filter and the optimal linear filter of a Markovian jump linear system, appropriately defined. More specifically, we show that if the Markovian jump linear system is (mean square) detectable, then the stochastic process is detectable under the consensus-based distributed linear filtering scheme. We also show that the optimal gains of a linear filter for estimating the state of a Markovian jump linear system, appropriately defined, can be used to approximate the optimal gains of the consensus-based linear filter.
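A toy scalar sketch of the consensus-based distributed linear filtering structure (local prediction, local correction with fixed gains, then a consensus step on the estimates). The gains and consensus weights below are hand-picked assumptions for the sketch, not the optimal or detectability-certified choices discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar process x_{k+1} = a x_k + w_k observed by two sensors y_i = c_i x + v_i.
a, c = 0.9, np.array([1.0, 0.5])
L = np.array([0.5, 0.3])                 # fixed filtering gains (assumed)
W = np.array([[0.5, 0.5], [0.5, 0.5]])   # consensus weights (assumed)

x_true = 1.0
xhat = np.zeros(2)                       # the two sensors' estimates
for _ in range(200):
    x_true = a * x_true + 0.05 * rng.standard_normal()
    y = c * x_true + 0.05 * rng.standard_normal(2)
    pred = a * xhat                      # local prediction
    corr = pred + L * (y - c * pred)     # local measurement correction
    xhat = W @ corr                      # consensus step on the estimates
err = np.abs(xhat - x_true)
print(err)  # both estimation errors stay small (bounded)
```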
A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization
Based on the idea of randomized coordinate descent of α-averaged
operators, a randomized primal-dual optimization algorithm is introduced, where
a random subset of coordinates is updated at each iteration. The algorithm
builds upon a variant of a recent (deterministic) algorithm proposed by Vũ
and Condat that includes the well-known ADMM as a particular case. The obtained
algorithm is used to solve asynchronously a distributed optimization problem. A
network of agents, each having a separate cost function containing a
differentiable term, seek to find a consensus on the minimum of the aggregate
objective. The method yields an algorithm where at each iteration, a random
subset of agents wake up, update their local estimates, exchange some data with
their neighbors, and go idle. Numerical results demonstrate the attractive
performance of the method. The general approach can be naturally adapted to
other situations where coordinate descent convex optimization algorithms are
used with a random choice of the coordinates. Comment: 10 pages.
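The random-coordinate-activation idea can be illustrated with a minimal sketch: plain randomized coordinate descent on a quadratic, not the primal-dual scheme of the paper, but with the same spirit of waking up only a random coordinate per iteration. All constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimize f(x) = 0.5 x^T A x - b^T x by exact minimization over one
# randomly chosen coordinate per iteration.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 1.0])
x = np.zeros(2)
for _ in range(3000):
    i = rng.integers(2)                  # wake up one random coordinate
    g = A[i] @ x - b[i]                  # partial gradient along coordinate i
    x[i] -= g / A[i, i]                  # exact coordinate minimization
x_star = np.linalg.solve(A, b)
print(x, x_star)  # the random-coordinate iterate matches the minimizer
```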
Gossip and Distributed Kalman Filtering: Weak Consensus under Weak Detectability
The paper presents the gossip interactive Kalman filter (GIKF) for
distributed Kalman filtering for networked systems and sensor networks, where
inter-sensor communication and observations occur at the same time-scale. The
communication among sensors is random; each sensor occasionally exchanges its
filtering state information with a neighbor depending on the availability of
the appropriate network link. We show that under a weak distributed
detectability condition:
1. the GIKF error process remains stochastically bounded, irrespective of the
instability properties of the random process dynamics; and
2. the network achieves \emph{weak consensus}, i.e., the conditional
estimation error covariance at a (uniformly) randomly selected sensor converges
in distribution to a unique invariant measure on the space of positive
semi-definite matrices (independent of the initial state).
To prove these results, we interpret the filtered states (estimates and error
covariances) at each node in the GIKF as stochastic particles with local
interactions. We analyze the asymptotic properties of the error process by
studying as a random dynamical system the associated switched (random) Riccati
equation, the switching being dictated by a non-stationary Markov chain on the
network graph. Comment: Submitted to the IEEE Transactions, 30 pages.
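A toy scalar sketch of the swap-based gossip mechanism behind the GIKF: two sensors track an unstable process, only sensor 0 observes it (so neither node is locally detectable alone), and with probability 1/2 per step the two nodes swap their filter states before updating. This illustrates the mechanism only, not the GIKF itself; all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

a, q, r = 1.05, 0.1, 0.1             # unstable dynamics, noise intensities
x = 0.0                              # true state of x_{k+1} = a x_k + w_k
est = np.zeros(2)                    # per-node estimates
P = np.ones(2)                       # per-node error variances
for _ in range(300):
    x = a * x + np.sqrt(q) * rng.standard_normal()
    if rng.random() < 0.5:           # random gossip: swap filter states
        est[[0, 1]] = est[[1, 0]]
        P[[0, 1]] = P[[1, 0]]
    est = a * est                    # time update at both nodes
    P = a * a * P + q
    y = x + np.sqrt(r) * rng.standard_normal()
    K = P[0] / (P[0] + r)            # measurement update at node 0 only
    est[0] = est[0] + K * (y - est[0])
    P[0] = (1 - K) * P[0]
print(P)  # error variances stay bounded despite a > 1
```

The swaps keep circulating the well-corrected filter state through the network, which is the intuition for stochastic boundedness under weak distributed detectability.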
Stochastic gradient descent on Riemannian manifolds
Stochastic gradient descent is a simple approach to find the local minima of
a cost function whose evaluations are corrupted by noise. In this paper, we
develop a procedure extending stochastic gradient descent algorithms to the
case where the function is defined on a Riemannian manifold. We prove that, as
in the Euclidean case, the gradient descent algorithm converges to a critical
point of the cost function. The algorithm has numerous potential applications,
and is illustrated here by four examples. In particular a novel gossip
algorithm on the set of covariance matrices is derived and tested numerically. Comment: A slightly shorter version has been published in IEEE Transactions
on Automatic Control.
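As an illustration of the general procedure (not the paper's gossip example), here is Riemannian SGD on the unit sphere for a noisy leading-eigenvector problem: the Riemannian gradient is the Euclidean gradient projected onto the tangent space, and renormalization serves as the retraction. The covariance and stepsize schedule are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimize f(x) = -E[(z^T x)^2] over the unit sphere for z ~ N(0, C);
# the minimizer is the leading eigenvector of C (here e1).
C = np.diag([3.0, 1.0, 0.5])                 # assumed covariance
Lc = np.linalg.cholesky(C)
x = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
for k in range(1, 5001):
    z = Lc @ rng.standard_normal(3)          # noisy sample
    egrad = -2.0 * (z @ x) * z               # Euclidean gradient of -(z'x)^2
    rgrad = egrad - (egrad @ x) * x          # project onto tangent space at x
    x = x - (1.0 / (10.0 + k)) * rgrad       # SGD step with decaying stepsize
    x /= np.linalg.norm(x)                   # retraction back to the sphere
print(np.abs(x))  # aligns with e1, the top eigenvector of C
```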
Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms with Directed Gossip Communication
We study distributed optimization in networked systems, where nodes cooperate
to find the optimal quantity of common interest, x=x^\star. The objective
function of the corresponding optimization problem is the sum of private, convex nodes'
objectives (each known only by its node), and each node imposes a private
convex constraint on the allowed values of x. We solve this problem for generic
connected network topologies with asymmetric random link failures with a novel
distributed, decentralized algorithm. We refer to this algorithm as AL-G
(augmented Lagrangian gossiping), and to its variants as AL-MG (augmented
Lagrangian multi-neighbor gossiping) and AL-BG (augmented Lagrangian broadcast
gossiping). The AL-G algorithm is based on the augmented Lagrangian dual
function. Dual variables are updated by the standard method of multipliers, at
a slow time scale. To update the primal variables, we propose a novel,
Gauss-Seidel type, randomized algorithm, at a fast time scale. AL-G uses
unidirectional gossip communication only between immediate neighbors in the
network, and is resilient to random link failures. For networks with reliable
communication (i.e., no failures), the simplified AL-BG (augmented Lagrangian
broadcast gossiping) algorithm reduces communication, computation and data
storage cost. We prove convergence for all proposed algorithms and demonstrate
by simulations the effectiveness on two applications: l_1-regularized logistic
regression for classification and cooperative spectrum sensing for cognitive
radio networks. Comment: 28 pages, journal; revised.
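The slow/fast two-time-scale structure of AL-G can be sketched on a toy two-node problem: a method-of-multipliers dual update at the slow scale, and randomized Gauss-Seidel primal updates (one random node at a time, standing in for a gossip activation) at the fast scale. The problem and constants are illustrative:

```python
import random

# Minimize (x1 - 1)^2 + (x2 - 3)^2 subject to the consensus constraint
# x1 = x2, via the augmented Lagrangian
#   L(x1, x2, lam) = (x1-1)^2 + (x2-3)^2 + lam (x1 - x2) + (rho/2)(x1 - x2)^2.
rng = random.Random(0)
rho, lam = 1.0, 0.0
x1, x2 = 0.0, 0.0
for _ in range(50):                   # slow scale: multiplier updates
    for _ in range(100):              # fast scale: randomized Gauss-Seidel
        if rng.random() < 0.5:        # a random node activates and updates
            x1 = (2.0 + rho * x2 - lam) / (2.0 + rho)
        else:
            x2 = (6.0 + lam + rho * x1) / (2.0 + rho)
    lam += rho * (x1 - x2)            # method-of-multipliers dual step
print(x1, x2)  # both approach the constrained optimum x* = 2
```

Each inner update is the exact minimizer of the augmented Lagrangian over one node's variable, mirroring the Gauss-Seidel primal stage of AL-G.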