4,638 research outputs found

    Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments

    An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed to be non-autonomous, with a nonzero control input that allows the team behavior or mission to change in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H∞ controller is first designed to prevent the effects of attacks on sensors and actuators from propagating throughout the network, as well as to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H∞ optimal synchronization problem, and off-policy reinforcement learning is utilized to learn their solution without requiring any knowledge of the agents' dynamics. A trust-confidence based distributed control protocol is then proposed to mitigate attacks that hijack an entire node and attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient reinforcement learning algorithm employs the confidence value of each agent to indicate the trustworthiness of its own information; this value is broadcast to the agent's neighbors, which use it to weight the data they receive from that agent during and after learning. If the confidence value of an agent is low, the agent employs a trust mechanism to identify compromised agents and remove the data received from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
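    As a loose illustration of the trust-weighting idea in this abstract (not the paper's H∞/reinforcement-learning design), the sketch below simulates single-integrator followers tracking a leader while down-weighting neighbor broadcasts that disagree strongly with an agent's own state; the topology, gains, and attack model are assumptions.

```python
import numpy as np

# Minimal sketch, not the paper's algorithm: single-integrator followers tracking a
# non-autonomous leader, with a simplified trust weight standing in for the
# confidence/trust protocol: each agent down-weights neighbor broadcasts that
# disagree strongly with its own state. Topology, gains, and the attack model
# below are illustrative assumptions.
np.random.seed(0)

n, T, dt = 5, 1200, 0.05
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)   # follower adjacency (assumed ring)
pinned = np.array([1.0, 0, 0, 0, 0])           # follower 0 observes the leader (assumed)

x = np.random.randn(n)      # follower states
attacked = 3                # hijacked node broadcasting false data (assumption)

def trust(diff, kappa=1.5):
    """Hypothetical trust weight: near 1 for consistent data, near 0 otherwise."""
    return np.exp(-kappa * abs(diff))

for k in range(T):
    t = k * dt
    x_leader = np.sin(0.5 * t)       # leader with nonzero input (illustrative trajectory)
    x_bcast = x.copy()
    x_bcast[attacked] += 5.0         # corrupted broadcast from the hijacked node

    u = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                w = trust(x_bcast[j] - x[i])          # weight received data by trust
                u[i] += w * (x_bcast[j] - x[i])
        u[i] += 2.0 * pinned[i] * (x_leader - x[i])   # pinning term toward the leader
    x += dt * u

print("leader state   :", round(x_leader, 3))
print("follower states:", np.round(x, 3))
```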

    Scalable Approach to Uncertainty Quantification and Robust Design of Interconnected Dynamical Systems

    Development of robust dynamical systems and networks, such as autonomous aircraft systems capable of accomplishing complex missions, faces challenges due to dynamically evolving uncertainties arising from model uncertainty, the necessity to operate in hostile, cluttered urban environments, and the distributed and dynamic nature of communication and computation resources. Model-based robust design is difficult because of the complexity of the hybrid dynamic models, which include continuous vehicle dynamics and discrete models of computation and communication, and because of the size of the problem. We will overview recent advances in methodology and tools to model, analyze, and design robust autonomous aerospace systems operating in uncertain environments, with emphasis on efficient uncertainty quantification and robust design, using case studies of missions including model-based target tracking and search, and trajectory planning in an uncertain urban environment. To show that the methodology is generally applicable to uncertain dynamical systems, we will also show examples of applying the new methods to efficient uncertainty quantification of energy usage in buildings and to stability assessment of interconnected power networks.
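    As a generic illustration of uncertainty quantification for a dynamical system (not the scalable methods the abstract refers to), the sketch below runs a plain Monte Carlo study on a toy damped oscillator with uncertain stiffness and damping; the model and parameter ranges are assumptions.

```python
import numpy as np

# Minimal Monte Carlo sketch of uncertainty quantification for a small dynamical
# system (not the scalable methods discussed above): propagate uncertain model
# parameters through a simulation and summarize the output distribution. The
# oscillator model and parameter ranges are illustrative assumptions.
rng = np.random.default_rng(0)
n_samples, T, dt = 2000, 200, 0.05

def simulate(k, c):
    """Damped oscillator x'' + c*x' + k*x = 0 from x(0)=1; return integrated x^2."""
    x, v, J = 1.0, 0.0, 0.0
    for _ in range(T):
        a = -c * v - k * x
        v += dt * a          # semi-implicit Euler step
        x += dt * v
        J += dt * x * x
    return J

# Uncertain stiffness and damping, e.g. from model/identification uncertainty.
k_samples = rng.uniform(0.8, 1.2, n_samples)
c_samples = rng.uniform(0.1, 0.4, n_samples)
J_samples = np.array([simulate(k, c) for k, c in zip(k_samples, c_samples)])

print(f"response energy: mean={J_samples.mean():.2f}  std={J_samples.std():.2f}  "
      f"95th percentile={np.percentile(J_samples, 95):.2f}")
```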

    On Iterative Learning in Multi-agent Systems Coordination and Control

    Ph.D. (Doctor of Philosophy)

    Distributed Model Reference Adaptive Control for Vehicle Platoons with Uncertain Dynamics

    This paper proposes a distributed model reference adaptive controller (DMRAC) for vehicle platoons with a constant spacing policy, subject to uncertainty in control effectiveness and inertial time lag. It formulates the uncertain vehicle dynamics as a matched uncertainty and is applicable to both directed and undirected topologies. The directed topology must contain at least one spanning tree with the leader as the root node, while the undirected topology must be static and connected, with at least one follower receiving information from the leader. The proposed control structure consists of a reference model and a main control system. The reference model is a closed-loop system constructed from the nominal model of each follower vehicle and a reference control signal. The main control system consists of a nominal control signal based on cooperative state feedback and an adaptive term. The nominal control signal allows the followers to cooperatively track the leader, while the adaptive term suppresses the effects of the uncertainties. Stability analysis shows that the global tracking errors with respect to the reference model and with respect to the leader are asymptotically stable. The states of all followers synchronize to both the reference and leader states. Moreover, in the presence of unknown external disturbances, the global tracking errors remain uniformly ultimately bounded. The performance of the controlled system is verified through simulations, which validate the efficacy of the proposed controller.
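    A toy sketch of the nominal-plus-adaptive structure described above, with the cooperative term reduced to a predecessor-spacing speed command and each follower running a scalar model reference adaptive loop; the dynamics, gains, and topology are assumptions, not the paper's DMRAC.

```python
import numpy as np

# Toy sketch, not the paper's DMRAC: each follower runs a scalar MRAC loop on its
# uncertain velocity dynamics  v' = a*v + lam*u  (a and lam unknown to the controller),
# and its speed command comes from a constant-spacing policy with its predecessor.
# The gains start at values computed from a nominal model (a=0, lam=1) and the
# adaptive updates correct them online. All numbers below are assumptions.
dt, steps = 0.01, 4000
d = 10.0                            # desired inter-vehicle spacing (assumed)
a_true, lam_true = 0.3, 0.7         # true (unknown) drift and control effectiveness
a_m, b_m = -2.0, 2.0                # stable reference model: vm' = a_m*vm + b_m*r
k_p, gamma = 0.5, 0.1               # spacing gain and adaptation rate (assumed)

p = np.array([0.0, -12.0, -26.0])   # positions: leader, follower 1, follower 2
v = np.array([20.0, 20.0, 20.0])    # velocities
vm = np.array([20.0, 20.0])         # per-follower reference-model states
kx = np.full(2, (a_m - 0.0) / 1.0)  # nominal feedback gain from the nominal model
kr = np.full(2, b_m / 1.0)          # nominal feedforward gain from the nominal model

for _ in range(steps):
    p[0] += dt * v[0]                                  # leader cruises at 20 m/s
    for i in (1, 2):
        j = i - 1                                      # index into follower arrays
        r = v[i - 1] + k_p * (p[i - 1] - p[i] - d)     # spacing-based speed command
        e = v[i] - vm[j]                               # error w.r.t. reference model
        u = kx[j] * v[i] + kr[j] * r                   # nominal + adaptive feedback
        kx[j] += dt * (-gamma * v[i] * e)              # MRAC update (sign(lam) > 0 assumed)
        kr[j] += dt * (-gamma * r * e)
        vm[j] += dt * (a_m * vm[j] + b_m * r)          # reference model
        v[i]  += dt * (a_true * v[i] + lam_true * u)   # true uncertain dynamics
        p[i]  += dt * v[i]

print("spacing errors:", np.round([p[0] - p[1] - d, p[1] - p[2] - d], 2))
print("velocities    :", np.round(v, 2))
```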

    Distributed MPC for coordinated energy efficiency utilization in microgrid systems

    To improve the renewable energy utilization of distributed microgrid systems, this paper presents an optimal distributed model predictive control strategy to coordinate energy management among microgrid systems. In particular, through information exchange among systems, each microgrid in the network, which includes renewable generation, storage systems, and some controllable loads, can maintain its own system-wide supply-demand balance. With our mechanism, the closed-loop stability of the distributed microgrid systems can be guaranteed. In addition, we provide evaluation criteria for renewable energy utilization to validate our proposed method. Simulations show that the supply-demand balance in each microgrid is achieved while, at the same time, the system operation cost is reduced, which demonstrates the effectiveness and efficiency of our proposed policy.
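    A minimal receding-horizon sketch in the spirit of the abstract (not the paper's distributed MPC): each microgrid plans its storage use over a short horizon to track its local supply-demand balance, then the two microgrids trade their offsetting residuals; the forecasts, storage limits, and trading rule are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal receding-horizon sketch, not the paper's controller: each microgrid plans
# storage charge/discharge over a short horizon to minimize its squared supply-demand
# imbalance, then the two microgrids exchange offsetting residuals. Forecasts,
# storage limits, and the trading rule are illustrative assumptions.
np.random.seed(2)
H, steps = 6, 24                       # horizon length, simulation steps (hours)

def plan_storage(soc, pv, load, cap=10.0, p_max=3.0):
    """Plan charge(+)/discharge(-) power over the horizon; soft-penalize SOC limits."""
    def cost(u):
        soc_traj = soc + np.cumsum(u)
        imbalance = pv - load - u          # residual seen at the grid connection
        return np.sum(imbalance**2) + 10.0 * np.sum(
            np.clip(soc_traj - cap, 0, None)**2 + np.clip(-soc_traj, 0, None)**2)
    res = minimize(cost, np.zeros(H), bounds=[(-p_max, p_max)] * H, method="L-BFGS-B")
    return res.x

# Two microgrids with different synthetic renewable/load profiles (assumed data).
t = np.arange(steps + H)
pv = [3.0 * np.clip(np.sin(np.pi * (t % 24) / 24), 0, None),     # sunnier site
      1.5 * np.clip(np.sin(np.pi * (t % 24) / 24), 0, None)]     # cloudier site
load = [2.0 + 0.5 * np.cos(2 * np.pi * t / 24),
        2.5 + 0.5 * np.sin(2 * np.pi * t / 24)]
soc = [5.0, 5.0]

for k in range(steps):
    plans, imbalances = [], []
    for m in (0, 1):
        u = plan_storage(soc[m], pv[m][k:k+H], load[m][k:k+H])
        plans.append(u[0])                               # apply only the first move
        imbalances.append(pv[m][k] - load[m][k] - u[0])  # residual after storage action
    trade = min(abs(imbalances[0]), abs(imbalances[1]))  # exchange offsetting power
    if np.sign(imbalances[0]) != np.sign(imbalances[1]):
        imbalances[0] -= np.sign(imbalances[0]) * trade
        imbalances[1] -= np.sign(imbalances[1]) * trade
    for m in (0, 1):
        soc[m] = float(np.clip(soc[m] + plans[m], 0.0, 10.0))
    if k % 6 == 0:
        print(f"t={k:2d}h  residual imbalances: {imbalances[0]:+.2f}, {imbalances[1]:+.2f}")
```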

    Consensus in the Presence of Multiple Opinion Leaders: Effect of Bounded Confidence

    The problem of analyzing the performance of networked agents exchanging evidence in a dynamic network has recently grown in importance. This problem is relevant to signal and data fusion network applications and to studying opinion and consensus dynamics in social networks. Because of its capability of handling a wider variety of uncertainties and ambiguities associated with evidence, we use the framework of Dempster-Shafer (DS) theory to capture the opinion of an agent. We then examine the consensus among agents in dynamic networks in which an agent can utilize either a cautious or a receptive updating strategy. In particular, we examine the case of bounded confidence updating, where an agent exchanges its opinion only with neighboring nodes possessing 'similar' evidence. In a fusion network, this captures the case in which nodes only update their state based on evidence consistent with the node's own evidence. In opinion dynamics, this captures the notions of Social Judgment Theory (SJT), in which agents update their opinions only with other agents possessing opinions closer to their own. Focusing on the two special DS theoretic cases where an agent state is modeled as a Dirichlet body of evidence and a probability mass function (p.m.f.), we utilize results from matrix theory, graph theory, and networks to prove the existence of consensus agent states in several time-varying network cases of interest. For example, we show the existence of a consensus in which a subset of network nodes achieves a consensus that is adopted by follower network nodes. Of particular interest is the case of multiple opinion leaders, where we show that the agents do not reach a consensus in general, but rather converge to 'opinion clusters'. Simulation results are provided to illustrate the main results.
    Comment: IEEE Transactions on Signal and Information Processing over Networks, to appear.
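    A simple sketch of bounded-confidence updating for p.m.f.-valued opinions (a Hegselmann-Krause-style average rather than the paper's DS-theoretic update), with two fixed opinion leaders; all parameters are assumptions. With multiple leaders the agents typically settle into opinion clusters rather than a single consensus, mirroring the abstract's observation.

```python
import numpy as np

# Sketch of bounded-confidence updating over p.m.f.-valued opinions (not the paper's
# DS-theoretic rule): an agent averages only with agents whose p.m.f. lies within a
# total-variation distance eps of its own, while "opinion leaders" never update.
np.random.seed(3)

n, m, eps, rounds = 30, 3, 0.35, 60          # agents, outcomes, confidence bound, steps
X = np.random.dirichlet(np.ones(m), size=n)  # each row: an agent's p.m.f. opinion
leaders = {0: np.array([0.8, 0.1, 0.1]),     # two opinion leaders with fixed p.m.f.s
           1: np.array([0.1, 0.1, 0.8])}
for i, pmf in leaders.items():
    X[i] = pmf

for _ in range(rounds):
    X_new = X.copy()
    for i in range(n):
        if i in leaders:
            continue                                    # leaders keep their opinions
        dists = np.abs(X - X[i]).sum(axis=1) / 2.0      # total-variation distance
        similar = dists <= eps                          # bounded-confidence set
        X_new[i] = X[similar].mean(axis=0)              # average of similar opinions
    X = X_new

# Group agents whose final p.m.f.s nearly coincide into opinion clusters.
clusters = []
for x in X:
    for c in clusters:
        if np.abs(x - c).sum() / 2.0 < 0.01:
            break
    else:
        clusters.append(x)
print(f"{len(clusters)} opinion cluster(s) among {n} agents")
```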