    Accelerated Multi-Stage Discrete Time Dynamic Average Consensus

    This paper presents a novel solution to the discrete time dynamic average consensus problem. Given a set of time-varying input signals over the nodes of an undirected graph, the proposed algorithm tracks, at each node, the average of the input signals. The algorithm is based on a sequence of consensus stages combined with a second-order diffusive protocol. The former removes the need for k-th order differences of the inputs and for conservation of the network state average, while the latter overcomes the trade-off between speed and accuracy of the consensus stages by storing only the previous estimate at each node. The result is a protocol that is fast, arbitrarily accurate, and robust against input noises and initializations. The protocol is extended to an asynchronous and randomized version that follows a gossiping scheme robust against potential delays and packet losses. We study the convergence properties of the algorithms and validate them via simulations.
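    As background, the baseline first-order dynamic average consensus estimator combines a diffusion of the current estimates through a doubly stochastic weight matrix with a feedforward of the input differences; the paper's multi-stage, second-order protocol builds on and accelerates this idea. A minimal sketch of that baseline (the function name and interface are illustrative, and this is not the paper's accelerated algorithm):

```python
import numpy as np

def dynamic_average_consensus(W, u):
    """Track the average of time-varying inputs at each node.

    W : (n, n) doubly stochastic weights of the undirected graph
    u : (T, n) array; u[k, i] is node i's input signal at time k
    Returns a (T, n) array of per-node estimates of mean(u[k, :]).
    """
    T, n = u.shape
    x = np.empty((T, n))
    x[0] = u[0]  # initialize each node with its own first input
    for k in range(T - 1):
        # diffuse neighbors' estimates, then feed forward the input change
        x[k + 1] = W @ x[k] + (u[k + 1] - u[k])
    return x
```

    Because W is doubly stochastic, the network-wide mean of the estimates equals the mean of the inputs at every step, which is the conservation property the abstract refers to; the multi-stage and second-order ingredients then trade a little per-node memory for speed and accuracy.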

    Bibliographic Review on Distributed Kalman Filtering

    In recent years, a compelling need has arisen to understand the effects of distributed information structures on estimation and filtering. In this paper, a bibliographical review of distributed Kalman filtering (DKF) is provided. The paper contains a classification of the different approaches and methods related to DKF. The applications of DKF are also discussed and explained separately, and a comparison of the different approaches is briefly carried out. Contemporary research is also addressed, with emphasis on the practical applications of the techniques. An exhaustive list of publications, linked directly or indirectly to DKF in the open literature, is compiled to provide an overall picture of the different developing aspects of this area.
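    One of the simplest DKF families covered by such reviews alternates a purely local Kalman update with a consensus averaging of the neighbors' state estimates. A minimal sketch of that pattern (all names and the shared-model assumption are illustrative; many surveyed variants instead run consensus on information vectors and matrices):

```python
import numpy as np

def dkf_step(xs, Ps, zs, A, C, Q, R, W):
    """One step of a consensus-on-estimates distributed Kalman filter.

    xs : (n, d)    per-node state estimates
    Ps : (n, d, d) per-node error covariances
    zs : (n, m)    per-node measurements of the common state
    W  : (n, n)    doubly stochastic consensus weights
    """
    xs, Ps = xs.copy(), Ps.copy()
    n, d = xs.shape
    for i in range(n):
        # local prediction with the shared model (A, Q)
        x_pred = A @ xs[i]
        P_pred = A @ Ps[i] @ A.T + Q
        # local measurement update with (C, R)
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)
        xs[i] = x_pred + K @ (zs[i] - C @ x_pred)
        Ps[i] = (np.eye(d) - K @ C) @ P_pred
    # consensus: average state estimates with neighbors
    # (this simplest variant leaves the covariances purely local)
    return W @ xs, Ps
```

    The classification in such a review largely turns on what is exchanged between nodes (estimates, raw measurements, or information pairs) and how the fusion weights are chosen.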

    Performance analysis with network-enhanced complexities: On fading measurements, event-triggered mechanisms, and cyber attacks

    Copyright © 2014 Derui Ding et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

    Nowadays, real-world systems are usually subject to various complexities such as parameter uncertainties, time delays, and nonlinear disturbances. For networked systems, especially large-scale systems such as multiagent systems and systems over sensor networks, these complexities are inevitably enhanced in degree or intensity by the use of communication networks. Therefore, it is of interest to (1) examine how such network-enhanced complexities affect the control or filtering performance and (2) develop suitable approaches to the corresponding controller/filter design problems. In this paper, we survey recent advances in performance analysis and synthesis under three widely studied network-enhanced complexities, namely, fading measurements, event-triggered mechanisms, and attack behaviors of adversaries. First, these three kinds of complexities are introduced in detail according to their engineering backgrounds, dynamical characteristics, and modelling techniques. Then, developments in the performance analysis and synthesis issues for various networked systems are systematically reviewed. Furthermore, some challenges are illustrated through a thorough literature review, and some possible future research directions are highlighted.

    This work was supported in part by the National Natural Science Foundation of China under Grants 61134009, 61329301, 61203139, 61374127, and 61374010, the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.
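    Of the three complexities, the event-triggered mechanism is the easiest to make concrete: a node transmits only when its state has drifted sufficiently from the last transmitted value, saving bandwidth at a bounded cost in accuracy. A minimal send-on-delta sketch (the static threshold shown is one common choice; the surveyed literature also treats relative and dynamic thresholds):

```python
import numpy as np

def event_triggered_send(x, x_last_sent, threshold):
    """Send-on-delta trigger: transmit only when the state deviates
    enough from the last transmitted value."""
    if np.linalg.norm(x - x_last_sent) > threshold:
        return True, x           # event: transmit the current state
    return False, x_last_sent    # no event: receiver keeps the old value

# Illustrative use: far fewer than 100 transmissions are triggered.
x_sent, sends = np.zeros(2), 0
for k in range(100):
    x = np.array([np.sin(0.1 * k), np.cos(0.1 * k)])
    fired, x_sent = event_triggered_send(x, x_sent, threshold=0.3)
    sends += fired
```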

    QD-Learning: A Collaborative Distributed Strategy for Multi-Agent Reinforcement Learning Through Consensus + Innovations

    The paper considers a class of multi-agent Markov decision processes (MDPs) in which the network agents respond differently (as manifested by the instantaneous one-stage random costs) to a global controlled state and the control actions of a remote controller. The paper investigates a distributed reinforcement learning setup with no prior information on the global state transition and local agent cost statistics. Specifically, with the agents' objective consisting of minimizing a network-averaged infinite-horizon discounted cost, the paper proposes a distributed version of Q-learning, QD-learning, in which the network agents collaborate by means of local processing and mutual information exchange over a sparse (possibly stochastic) communication network to achieve the network goal. Under the assumption that each agent is only aware of its local online cost data and that the inter-agent communication network is weakly connected, the proposed distributed scheme is shown almost surely (a.s.) to yield asymptotically the desired value function and the optimal stationary control policy at each network agent. The analytical techniques developed in the paper to address the mixed time-scale stochastic dynamics of the consensus + innovations form, which arise as a result of the proposed interactive distributed scheme, are of independent interest.
    Comment: Submitted to the IEEE Transactions on Signal Processing, 33 pages
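    The consensus + innovations structure admits a compact statement: for each visited state-action pair, an agent mixes a consensus term pulling its Q-estimate toward its neighbors' with a local temporal-difference innovation built from its own observed cost. A minimal sketch of one such update (constant step sizes for readability; the paper's convergence result relies on carefully decaying, mixed-time-scale weights, and all names here are illustrative):

```python
import numpy as np

def qd_update(Q, i, neighbors, s, a, cost_i, s_next,
              gamma=0.95, alpha=0.1, beta=0.05):
    """One QD-learning-style update at agent i for the visited pair (s, a).

    Q         : (n_agents, n_states, n_actions) local Q-value tables
    neighbors : indices of agents adjacent to agent i
    cost_i    : agent i's observed one-stage cost for (s, a)
    """
    # consensus: pull toward neighbors' estimates of the same entry
    consensus = sum(Q[i, s, a] - Q[j, s, a] for j in neighbors)
    # innovation: local temporal difference (costs are minimized,
    # hence the min over next actions)
    innovation = cost_i + gamma * Q[i, s_next].min() - Q[i, s, a]
    Q[i, s, a] += -beta * consensus + alpha * innovation
```

    Roughly speaking, the mixed time scales arise because the consensus weight is made to dominate the innovation weight asymptotically, so the agents first agree and then jointly converge to the optimal Q-function.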