Diffusion Adaptation over Networks under Imperfect Information Exchange and Non-stationary Data
Adaptive networks rely on in-network and collaborative processing among
distributed agents to deliver enhanced performance in estimation and inference
tasks. Information is exchanged among the nodes, usually over noisy links. The
combination weights that are used by the nodes to fuse information from their
neighbors play a critical role in influencing the adaptation and tracking
abilities of the network. This paper first investigates the mean-square
performance of general adaptive diffusion algorithms in the presence of various
sources of imperfect information exchanges, quantization errors, and model
non-stationarities. Among other results, the analysis reveals that link noise
over the regression data modifies the dynamics of the network evolution in a
distinct way, and leads to biased estimates in steady-state. The analysis also
reveals how the network mean-square performance is dependent on the combination
weights. We use these observations to show how the combination weights can be
optimized and adapted. Simulation results illustrate the theoretical findings
and match well with theory.
Comment: 36 pages, 7 figures, to appear in IEEE Transactions on Signal Processing, June 201
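As background for the diffusion strategies discussed in this abstract, the following is a minimal, illustrative sketch of an adapt-then-combine (ATC) diffusion LMS network in which the estimates exchanged between neighbors are corrupted by additive link noise. The topology, step size, and noise levels are assumptions chosen for illustration; they are not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: N agents estimate a common parameter vector w_true
# from noisy linear measurements d_i(k) = u_i(k)^T w_true + v_i(k).
N, M, ITER = 10, 4, 2000
w_true = rng.standard_normal(M)

# Ring topology with self-loops; row-stochastic uniform combination weights.
A = np.zeros((N, N))
for i in range(N):
    for j in (i - 1, i, (i + 1) % N):
        A[i, j % N] = 1.0
A /= A.sum(axis=1, keepdims=True)

mu = 0.01          # step size (assumed)
link_noise = 0.05  # std of additive noise on exchanged estimates (imperfect links)

W = np.zeros((N, M))  # current estimates, one row per agent
for k in range(ITER):
    # Adapt: each agent runs one LMS step on its own streaming data.
    U = rng.standard_normal((N, M))
    d = U @ w_true + 0.1 * rng.standard_normal(N)
    Psi = W + mu * (d - np.sum(U * W, axis=1))[:, None] * U
    # Combine: fuse neighbors' intermediate estimates received over noisy links.
    # noisy[i, j] is agent j's estimate as perturbed on the link to agent i.
    noisy = Psi[None, :, :] + link_noise * rng.standard_normal((N, N, M))
    W = np.einsum('ij,ijm->im', A, noisy)

# Network mean-square deviation after adaptation.
msd = np.mean(np.sum((W - w_true) ** 2, axis=1))
```

Even with link noise, the estimates hover near `w_true` for a small step size; the residual error reflects both gradient noise and the imperfect exchanges analyzed in the paper.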
Decentralized Multi-Subgroup Formation Control With Connectivity Preservation and Collision Avoidance
This paper proposes a formation control algorithm that creates multiple separated formations for an undirected networked multi-agent system while preserving network connectivity and avoiding collisions among agents. Through a modified multi-consensus technique, the proposed algorithm can simultaneously divide a group of agents into an arbitrary number of desired formations in a decentralized manner. Furthermore, the agents assigned to each formation group can be easily reallocated to other formation groups without network topological constraints as long as the entire network is initially connected; an operator can freely partition agents even if there is no spanning tree within each subgroup. Moreover, the system can avoid collisions without losing connectivity, even during the transient period of formation, by applying an existing potential function based on network connectivity estimation. If the estimation is correct, the potential function not only guarantees connectivity maintenance but also allows some extra edges to be broken as long as the network remains connected. Numerical simulations are performed to verify the feasibility and performance of the proposed multi-subgroup formation control.
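To illustrate the basic mechanism behind consensus-based formation control, here is a minimal sketch in which agents run consensus on the displaced states x_i - d_i, where d_i is agent i's desired offset; agents in different subgroups are given offsets around well-separated anchors. This is a simplified stand-in, not the paper's multi-consensus algorithm, and the connectivity-preservation and collision-avoidance terms that are the paper's focus are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
# Two subgroups of three agents, each forming a small triangle around a
# different anchor point (offsets are illustrative assumptions).
offsets = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0],   # subgroup 1
                    [5.0, 5.0], [6.0, 5.0], [5.5, 6.0]])  # subgroup 2
x = rng.uniform(-2, 2, size=(N, 2))  # random initial positions

# Connected line graph over all agents (entire network initially connected);
# no spanning tree is required within each subgroup.
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

eps = 0.1  # step size, small enough for Laplacian stability on this graph
for _ in range(500):
    z = x - offsets
    # x_i += eps * sum_j a_ij (z_j - z_i): consensus on the displaced states.
    x = x + eps * (A @ z - A.sum(axis=1, keepdims=True) * z)

# Displaced states agree, so each subgroup realizes its formation shape.
rel = (x - offsets) - (x - offsets)[0]
```

Because all displaced states converge to a common point, the relative positions inside each subgroup match the desired triangles, and the two subgroups end up separated by the gap between their anchors.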
Diffusion Adaptation Strategies for Distributed Optimization and Learning over Networks
We propose an adaptive diffusion mechanism to optimize a global cost function
in a distributed manner over a network of nodes. The cost function is assumed
to consist of a collection of individual components. Diffusion adaptation
allows the nodes to cooperate and diffuse information in real-time; it also
helps alleviate the effects of stochastic gradient noise and measurement noise
through a continuous learning process. We analyze the mean-square-error
performance of the algorithm in some detail, including its transient and
steady-state behavior. We also apply the diffusion algorithm to two problems:
distributed estimation with sparse parameters and distributed localization.
Compared to well-studied incremental methods, diffusion methods do not require
the use of a cyclic path over the nodes and are robust to node and link
failure. Diffusion methods also endow networks with adaptation abilities that
enable the individual nodes to continue learning even when the cost function
changes with time. Examples involving such dynamic cost functions with moving
targets are common in the context of biological networks.
Comment: 34 pages, 6 figures, to appear in IEEE Transactions on Signal Processing, 201
ARES: Adaptive Receding-Horizon Synthesis of Optimal Plans
We introduce ARES, an efficient approximation algorithm for generating optimal plans (action sequences) that take an initial state of a Markov Decision Process (MDP) to a state whose cost is below a specified (convergence) threshold. ARES uses Particle Swarm Optimization, with adaptive sizing for both the receding horizon and the particle swarm. Inspired by Importance Splitting, the length of the horizon and the number of particles are chosen such that at least one particle reaches a next-level state, that is, a state where the cost decreases by a required delta from the previous-level state. The level relation on states and the plans constructed by ARES implicitly define a Lyapunov function and an optimal policy, respectively, both of which could be explicitly generated by applying ARES to all states of the MDP, up to some topological equivalence relation. We also assess the effectiveness of ARES by statistically evaluating its rate of success in generating optimal plans. The ARES algorithm resulted from our desire to clarify whether flying in V-formation is a flocking policy that optimizes energy conservation, clear view, and velocity alignment. That is, we were interested in seeing whether one could find optimal plans that bring a flock from an arbitrary initial state to a state exhibiting a single connected V-formation. For flocks of 7 birds, ARES is able to generate a plan that leads to a V-formation in 95% of the 8,000 random initial configurations, within 63 s on average. ARES can also be easily customized into a model-predictive controller (MPC) with an adaptive receding horizon and statistical guarantees of convergence. To the best of our knowledge, our adaptive-sizing approach is the first to provide convergence guarantees in receding-horizon techniques.
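The core loop described above — use PSO to search for an action sequence over a short horizon, apply the first action, and recede until the cost drops below the threshold — can be sketched as follows. This is a toy illustration on a 2-D point-mass system with a fixed horizon and swarm size; ARES's adaptive sizing, level relation, and V-formation cost model are not reproduced here, and all names and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the MDP: a 2-D point mass; an action is a velocity step.
def step(state, action):
    return state + 0.1 * action

def cost(state):
    return float(np.linalg.norm(state))  # distance to the goal (origin)

def rollout(state, plan):
    # plan: (horizon, 2) sequence of actions; return the end-of-horizon cost.
    for a in plan:
        state = step(state, a)
    return cost(state)

def pso_plan(state, horizon, n_particles=30, iters=40):
    """Particle Swarm Optimization over action sequences of length `horizon`."""
    pos = rng.uniform(-1, 1, size=(n_particles, horizon, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([rollout(state, p) for p in pos])
    g = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, -1, 1)
        c = np.array([rollout(state, p) for p in pos])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g

# Receding-horizon loop: replan, apply the first action, repeat until the cost
# falls below the convergence threshold (ARES additionally adapts the horizon
# and swarm size; fixed values are used here for brevity).
state, threshold = np.array([3.0, -2.0]), 0.1
for _ in range(100):
    if cost(state) < threshold:
        break
    plan = pso_plan(state, horizon=5)
    state = step(state, plan[0])
```

The replanning step mirrors how ARES behaves as a model-predictive controller: only the first action of each optimized plan is executed before the horizon recedes.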
Diffusion Adaptation Strategies for Distributed Estimation over Gaussian Markov Random Fields
The aim of this paper is to propose diffusion strategies for distributed
estimation over adaptive networks, assuming the presence of spatially
correlated measurements distributed according to a Gaussian Markov random field
(GMRF) model. The proposed methods incorporate prior information about the
statistical dependency among observations, while at the same time processing
data in real-time and in a fully decentralized manner. A detailed mean-square
analysis is carried out in order to prove stability and evaluate the
steady-state performance of the proposed strategies. Finally, we also
illustrate how the proposed techniques can be easily extended in order to
incorporate thresholding operators for sparsity recovery applications.
Numerical results show the potential advantages of using such techniques for
distributed learning in adaptive networks deployed over GMRFs.
Comment: Submitted to IEEE Transactions on Signal Processing. arXiv admin note: text overlap with arXiv:1206.309
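The thresholding operator mentioned above for sparsity recovery is typically a soft-thresholding (shrinkage) map applied to the estimate after each update. A minimal sketch of that operator, with illustrative values:

```python
import numpy as np

def soft_threshold(w, tau):
    """Soft-thresholding operator: shrinks every entry toward zero by tau,
    zeroing entries whose magnitude is below tau (promotes sparse estimates)."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

w = np.array([0.9, -0.03, 0.0, -1.2, 0.05])
sparse_w = soft_threshold(w, tau=0.1)
# -> [0.8, 0.0, 0.0, -1.1, 0.0]: small entries are zeroed, large ones shrunk
```

In a diffusion recursion, this map would be applied to each agent's estimate after the adapt and combine steps; the threshold `tau` trades off sparsity against bias.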
Diffusion Strategies Outperform Consensus Strategies for Distributed Estimation over Adaptive Networks
Adaptive networks consist of a collection of nodes with adaptation and
learning abilities. The nodes interact with each other on a local level and
diffuse information across the network to solve estimation and inference tasks
in a distributed manner. In this work, we compare the mean-square performance
of two main strategies for distributed estimation over networks: consensus
strategies and diffusion strategies. The analysis in the paper confirms that
under constant step-sizes, diffusion strategies allow information to diffuse
more thoroughly through the network and this property has a favorable effect on
the evolution of the network: diffusion networks are shown to converge faster
and reach lower mean-square deviation than consensus networks, and their
mean-square stability is insensitive to the choice of the combination weights.
In contrast, and surprisingly, it is shown that consensus networks can become
unstable even if all the individual nodes are stable and able to solve the
estimation task on their own. When this occurs, cooperation over the network
leads to a catastrophic failure of the estimation task. This phenomenon does
not occur for diffusion networks: we show that stability of the individual
nodes always ensures stability of the diffusion network irrespective of the
combination topology. Simulation results support the theoretical findings.
Comment: 37 pages, 7 figures, to appear in IEEE Transactions on Signal Processing, 201
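The structural difference between the two strategies compared in this abstract can be seen directly in their update equations. The sketch below runs both on the same LMS estimation task with a small constant step size, under which both are stable; the topology, weights, and noise levels are illustrative assumptions and the sketch does not reproduce the paper's stability analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Consensus (single step):   w_i <- sum_j a_ij w_j + mu * e_i * u_i
# Diffusion (ATC, two steps): psi_i <- w_i + mu * e_i * u_i ;  w_i <- sum_j a_ij psi_j
N, M, mu, ITER = 8, 3, 0.02, 3000
w_true = rng.standard_normal(M)

# Ring topology with self-loops and uniform combination weights.
A = np.zeros((N, N))
for i in range(N):
    for j in (i - 1, i, (i + 1) % N):
        A[i, j % N] = 1.0 / 3.0

W_cons = np.zeros((N, M))
W_diff = np.zeros((N, M))
for _ in range(ITER):
    U = rng.standard_normal((N, M))
    d = U @ w_true + 0.1 * rng.standard_normal(N)     # shared data stream
    grad_c = (d - np.sum(U * W_cons, axis=1))[:, None] * U
    grad_d = (d - np.sum(U * W_diff, axis=1))[:, None] * U
    W_cons = A @ W_cons + mu * grad_c   # combine old iterates, then adapt
    W_diff = A @ (W_diff + mu * grad_d) # adapt first, then combine (ATC)

msd_cons = np.mean(np.sum((W_cons - w_true) ** 2, axis=1))
msd_diff = np.mean(np.sum((W_diff - w_true) ** 2, axis=1))
```

The only difference is where the combination step acts: consensus mixes the previous iterates while diffusion mixes the already-adapted intermediates, which is the structural asymmetry the paper's mean-square analysis exploits.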