4 research outputs found

    Deep Learning-Based Average Consensus

    In this study, we analyze the problem of accelerating the linear average consensus algorithm for complex networks. We propose a data-driven approach to tuning the weights of temporal (i.e., time-varying) networks using deep learning techniques. Given a finite-time window, the proposed approach first unfolds the linear average consensus protocol to obtain a feedforward signal-flow graph, which is regarded as a neural network. The edge weights of the obtained neural network are then trained using standard deep learning techniques to minimize the consensus error over the given finite-time window. Through this training process, we obtain a set of optimized time-varying weights that yield faster consensus on a complex network. We also demonstrate that the proposed approach can be extended to infinite-time window problems. Numerical experiments revealed that our approach achieves a significantly smaller consensus error than baseline strategies.
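    The unrolling idea described above can be sketched in a few lines. The following is a minimal, hedged illustration (not the authors' code): the ring topology, window length, batch size, and optimizer settings are all illustrative assumptions; the point is that the T-step protocol x_{t+1} = W_t x_t becomes a T-layer linear network whose masked weights are trained against the consensus error.

    import torch

    # Illustrative assumptions: a small ring network, a window of T steps,
    # and one trainable weight matrix per step, masked to the graph's edges.
    n, T = 8, 10
    A = torch.zeros(n, n)
    for i in range(n):  # hypothetical ring topology
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
    mask = A + torch.eye(n)  # weights allowed on edges and self-loops

    # One weight matrix per time step: the "unfolded" consensus protocol
    # x_{t+1} = W_t x_t is a T-layer linear feedforward network.
    Ws = [torch.nn.Parameter(mask / mask.sum(1, keepdim=True)) for _ in range(T)]
    opt = torch.optim.Adam(Ws, lr=1e-2)

    for step in range(2000):
        x = torch.randn(64, n)                        # batch of random initial states
        target = x.mean(dim=1, keepdim=True).expand(-1, n)
        for W in Ws:                                  # forward pass = unrolled protocol
            x = x @ (W * mask).T                      # mask keeps weights on graph edges
        loss = ((x - target) ** 2).mean()             # consensus error at the window's end
        opt.zero_grad(); loss.backward(); opt.step()

    After training, the list Ws holds a set of time-varying weights for the finite window; applying them in sequence plays the role of the optimized consensus protocol in this sketch.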

    Disturbance Propagation in Interconnected Linear Dynamical Networks

    We consider performance analysis of interconnected linear dynamical networks subject to external stochastic disturbances. For stable linear networks, we define scalar performance measures by considering weighted H2-norms of the underlying systems, defined from the disturbance input to a desired output. It is shown that the performance measure of a general stable linear network can be tightly bounded from above and below using spectral functions of the state matrix of the network. We apply this result to a class of cyclic linear networks and show that the performance measure of such networks scales quadratically with the network size. Next, we focus on first- and second-order linear consensus networks and introduce the notion of Laplacian energy for such networks, which measures the expected steady-state dispersion of the state of the entire network. We develop a graph-theoretic framework to relate graph characteristics to the Laplacian energy of the network and show how the Laplacian energy scales asymptotically with the network size. We quantify several inherent fundamental limits on Laplacian energy in terms of graph diameter, node degrees, the number of spanning trees, and several other graph specifications. In particular, we characterize several versions of fundamental tradeoffs between the Laplacian energy and sparsity measures of a linear consensus network, showing that sparser networks have higher Laplacian energy. Finally, we show that several existing performance measures in real-world applications, such as the total power loss in synchronous power networks and the flock energy of a group of autonomous vehicles in formation, are indeed special forms of Laplacian energy.
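    For the first-order case, the Laplacian energy admits a standard closed form: for dx = -Lx dt + dw with identity-covariance white noise and the deviation-from-average output, the steady-state dispersion equals (1/2) Σ_{i≥2} 1/λ_i over the nonzero Laplacian eigenvalues. The sketch below is an illustrative numerical check, not taken from the paper's code; the cycle graph and the chosen sizes are assumptions used to exhibit the quadratic scaling the abstract states for cyclic networks.

    import numpy as np

    def laplacian_energy(L):
        """(1/2) * sum of reciprocals of the nonzero Laplacian eigenvalues:
        the steady-state dispersion E||x - avg(x)*1||^2 of the first-order
        consensus network dx = -Lx dt + dw driven by white noise."""
        lam = np.sort(np.linalg.eigvalsh(L))
        return 0.5 * np.sum(1.0 / lam[1:])  # skip the single zero eigenvalue

    def cycle_laplacian(n):
        """Graph Laplacian of an n-node cycle (illustrative example graph)."""
        A = np.zeros((n, n))
        idx = np.arange(n)
        A[idx, (idx + 1) % n] = A[idx, (idx - 1) % n] = 1.0
        return np.diag(A.sum(1)) - A

    # Quadratic scaling with network size for cyclic networks:
    for n in (10, 20, 40, 80):
        print(n, laplacian_energy(cycle_laplacian(n)) / n**2)  # ratio ~ constant

    The printed ratio settles near a constant (about 1/24 for the cycle), consistent with the quadratic scaling of the performance measure in the network size.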

    Improving convergence rate of distributed consensus through asymmetric weights
