
    Consensus of Multi-Agent Networks in the Presence of Adversaries Using Only Local Information

    This paper addresses the problem of resilient consensus in the presence of misbehaving nodes. Although it is typical to assume knowledge of at least some nonlocal information when studying secure and fault-tolerant consensus algorithms, this assumption is not suitable for large-scale dynamic networks. To remedy this, we emphasize the use of local strategies to deal with resilience to security breaches. We study a consensus protocol that uses only local information and we consider worst-case security breaches, where the compromised nodes have full knowledge of the network and the intentions of the other nodes. We provide necessary and sufficient conditions for the normal nodes to reach consensus despite the influence of the malicious nodes under different threat assumptions. These conditions are stated in terms of a novel graph-theoretic property referred to as network robustness. Comment: This report contains the proofs of the results presented at HiCoNS 201
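    The abstract does not spell out the update rule, so the following is only a minimal sketch of a local value-filtering consensus round of the kind studied in this line of work (W-MSR-style): the parameter F, the synchronous scheduling, and the uniform averaging weights are illustrative assumptions, not the paper's stated algorithm.

        def filtered_consensus_step(values, neighbors, F):
            """One synchronous round of a local value-filtering consensus update.

            values    : dict node -> current scalar state (compromised nodes may report anything)
            neighbors : dict node -> list of in-neighbors
            F         : assumed bound on extreme values discarded on each side
            """
            new_values = {}
            for i, nbrs in neighbors.items():
                xi = values[i]
                received = sorted(values[j] for j in nbrs)
                larger = [v for v in received if v > xi]
                smaller = [v for v in received if v < xi]
                # Discard the F largest values above x_i and the F smallest below it
                # (or all of them if fewer than F lie on that side).
                larger = larger[:len(larger) - F] if len(larger) > F else []
                smaller = smaller[F:] if len(smaller) > F else []
                equal = [v for v in received if v == xi]
                kept = [xi] + smaller + equal + larger
                new_values[i] = sum(kept) / len(kept)   # uniform weights, an assumption
            return new_values

    Normal nodes would iterate such a rule on their own states while compromised nodes inject arbitrary values; the robustness conditions in the paper characterize when purely local filtering of this kind suffices for the normal nodes to agree.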

    Static consensus in passifiable linear networks

    Sufficient conditions for consensus (synchronization) are derived for networks that are described by digraphs and consist of identical deterministic SIMO systems. Identical and nonidentical control gains (positive arc weights) are considered. A connection between admissible digraphs and nonsmooth hypersurfaces (the sufficient gain boundary) is established. Necessary and sufficient conditions for static consensus by output feedback in networks consisting of a certain class of double integrators are rediscovered. Scalability for the circle digraph in terms of gain magnitudes is studied. Examples and results of numerical simulations are presented. Comment: 13 pages, 5 figures
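    The abstract fixes neither the plant model nor the output map, so the toy simulation below only illustrates static output-feedback consensus for double integrators on a circle digraph: the output y_i = x_i + gamma*v_i, the gains gamma and k, the unit arc weights, and the explicit Euler discretization are all assumptions chosen for the example, not the paper's conditions.

        import numpy as np

        n, gamma, k, dt, steps = 5, 2.0, 0.8, 0.01, 5000
        A = np.zeros((n, n))
        for i in range(n):
            A[i, (i + 1) % n] = 1.0          # circle digraph: each node listens to one neighbor

        x = np.random.randn(n)               # positions
        v = np.random.randn(n)               # velocities

        for _ in range(steps):
            y = x + gamma * v                # assumed scalar output of each double integrator
            u = -k * (A.sum(axis=1) * y - A @ y)   # u_i = -k * sum_j a_ij * (y_i - y_j)
            x, v = x + dt * v, v + dt * u          # explicit Euler step

        print("position spread:", x.max() - x.min())
        print("velocity spread:", v.max() - v.min())

    With these particular gains the spreads shrink toward zero; characterizing which gain magnitudes and digraphs make this work is the paper's contribution, for which the toy run is no substitute.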

    Broadcast Gossip Algorithms for Consensus on Strongly Connected Digraphs

    We study a general framework for broadcast gossip algorithms which use companion variables to solve the average consensus problem. Each node maintains an initial state and a companion variable. Iterative updates are performed asynchronously whereby one random node broadcasts its current state and companion variable and all other nodes receiving the broadcast update their state and companion variable. We provide conditions under which this scheme is guaranteed to converge to a consensus solution, where all nodes have the same limiting values, on any strongly connected directed graph. Under stronger conditions, which are reasonable when the underlying communication graph is undirected, we guarantee that the consensus value is equal to the average, both in expectation and in the mean-squared sense. Our analysis uses tools from non-negative matrix theory and perturbation theory. The perturbation results rely on a parameter being sufficiently small. We characterize the allowable upper bound as well as the optimal setting for the perturbation parameter as a function of the network topology, and this allows us to characterize the worst-case rate of convergence. Simulations illustrate that, in comparison to existing broadcast gossip algorithms, the approaches proposed in this paper have the advantage that they can simultaneously be guaranteed to converge to the average consensus and do so in a small number of broadcasts. Comment: 30 pages, submitted
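    The abstract describes the protocol's structure (one random node broadcasts its state and companion variable; receivers update both) but not its update equations, so the sketch below substitutes a different but related companion-variable scheme, broadcast-style push-sum / ratio consensus, purely to illustrate how a companion variable can recover the average on a strongly connected digraph; the splitting rule, the role of the weight variable, and the random wake-up model are assumptions and are not the algorithm analyzed in the paper.

        import random

        def broadcast_pushsum(graph, init, rounds=20000, seed=0):
            """Asynchronous broadcast-style push-sum (ratio consensus) on a digraph.

            graph : dict node -> list of out-neighbors (assumed strongly connected)
            init  : dict node -> initial value
            Each node keeps a value mass x and a weight mass w (the companion
            variable in this sketch); its running estimate of the average is x / w.
            """
            rng = random.Random(seed)
            nodes = list(graph)
            x = dict(init)                      # value mass
            w = {i: 1.0 for i in nodes}         # weight mass
            for _ in range(rounds):
                i = rng.choice(nodes)           # a random node wakes up and broadcasts
                out = graph[i]
                share = 1.0 / (len(out) + 1)
                xi, wi = x[i] * share, w[i] * share
                x[i], w[i] = xi, wi             # keep one share for itself
                for j in out:                   # every out-neighbor hears the same broadcast
                    x[j] += xi
                    w[j] += wi
            return {i: x[i] / w[i] for i in nodes}

        # Example: directed 4-cycle with values 1..4; the true average is 2.5.
        ring = {0: [1], 1: [2], 2: [3], 3: [0]}
        print(broadcast_pushsum(ring, {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}))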

    Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication

    This paper proposes a novel class of distributed continuous-time coordination algorithms to solve network optimization problems whose cost function is a sum of local cost functions associated with the individual agents. We establish the exponential convergence of the proposed algorithm under (i) strongly connected and weight-balanced digraph topologies when the local costs are strongly convex with globally Lipschitz gradients, and (ii) connected graph topologies when the local costs are strongly convex with locally Lipschitz gradients. When the local cost functions are convex and the global cost function is strictly convex, we establish asymptotic convergence under connected graph topologies. We also characterize the algorithm's correctness under time-varying interaction topologies and study its privacy preservation properties. Motivated by practical considerations, we analyze the algorithm implementation with discrete-time communication. We provide an upper bound on the stepsize that guarantees exponential convergence over connected graphs for implementations with periodic communication. Building on this result, we design a provably correct centralized event-triggered communication scheme that is free of Zeno behavior. Finally, we develop a distributed, asynchronous event-triggered communication scheme that is also free of Zeno behavior and comes with asymptotic convergence guarantees. Several simulations illustrate our results. Comment: 12 pages
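    As a concrete (if simplified) illustration of continuous-time coordination of this flavor, the snippet below integrates a saddle-point-style dynamics xdot_i = -grad f_i(x_i) - sum_j L_ij x_j - z_i, zdot_i = sum_j L_ij x_j on an undirected ring with scalar quadratic local costs; the specific dynamics, cost functions, graph, and Euler step are assumptions for the example and should not be read as the paper's algorithm or its discrete-time communication scheme.

        import numpy as np

        # Each agent i has local cost f_i(x) = 0.5 * a[i] * (x - b[i])**2, so the
        # minimizer of the sum is sum(a*b)/sum(a).
        rng = np.random.default_rng(0)
        n = 6
        a = rng.uniform(1.0, 3.0, n)
        b = rng.uniform(-2.0, 2.0, n)
        x_star = (a * b).sum() / a.sum()

        # Undirected ring Laplacian (an assumed topology).
        L = np.zeros((n, n))
        for i in range(n):
            L[i, i] = 2.0
            L[i, (i + 1) % n] = L[i, (i - 1) % n] = -1.0

        x = np.zeros(n)
        z = np.zeros(n)                       # auxiliary state; sum(z) = 0 is preserved
        dt = 0.01                             # Euler step, assumed small enough
        for _ in range(20000):
            grad = a * (x - b)
            x_dot = -grad - L @ x - z         # local gradient + consensus + correction
            z_dot = L @ x                     # integral action on the disagreement
            x, z = x + dt * x_dot, z + dt * z_dot

        print("agents' estimates:", np.round(x, 4))
        print("global minimizer :", round(x_star, 4))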

    Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging

    In this paper, we study distributed big-data nonconvex optimization in multi-agent networks. We consider the (constrained) minimization of the sum of a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a convex (possibly) nonsmooth regularizer. Our interest is in big-data problems wherein there is a large number of variables to optimize. If treated by means of standard distributed optimization algorithms, these large-scale problems may be intractable, due to the prohibitive local computation and communication burden at each node. We propose a novel distributed solution method whereby at each iteration agents optimize and then communicate (in an uncoordinated fashion) only a subset of their decision variables. To deal with the nonconvexity of the cost function, the novel scheme hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental to locally estimate gradient averages; and ii) a novel block-wise consensus-based protocol to perform local block-averaging operations and gradient tracking. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Finally, numerical results show the effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and practical convergence speed.
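    The block-wise SCA scheme itself is beyond a short sketch, but the gradient-tracking mechanism the abstract mentions can be illustrated in its simplest non-block form; the code below runs plain gradient tracking on smooth quadratic local costs over an undirected ring with doubly stochastic weights, and everything about it (the costs, mixing matrix, step size, and the absence of block selection and of the convex surrogate) is an illustrative assumption rather than the paper's method.

        import numpy as np

        # Each agent i holds a copy x_i of the decision vector and a tracker y_i of
        # the network-average gradient.  Local costs: f_i(x) = 0.5*||A_i x - b_i||^2.
        rng = np.random.default_rng(1)
        n, d = 5, 8                                   # agents, decision variables
        A = [rng.standard_normal((12, d)) for _ in range(n)]
        b = [rng.standard_normal(12) for _ in range(n)]
        grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

        # Doubly stochastic mixing weights for an undirected ring.
        W = np.zeros((n, n))
        for i in range(n):
            W[i, i] = W[i, (i + 1) % n] = W[i, (i - 1) % n] = 1.0 / 3.0

        alpha = 0.005                                 # step size, assumed small enough
        x = np.zeros((n, d))
        y = np.array([grad(i, x[i]) for i in range(n)])   # trackers start at local gradients

        for _ in range(10000):
            x_new = W @ x - alpha * y                 # mix with neighbors, step along tracker
            y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
            x = x_new

        x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
        print("max deviation from the centralized solution:", np.abs(x - x_star).max())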