
    Distributed Cyber-Attack Detection in the Secondary Control of DC Microgrids

    The paper considers the problem of detecting cyber-attacks on the communication networks typically used in the secondary control layer of DC microgrids. The proposed distributed methodology allows scalable monitoring of a microgrid: it detects data injection attacks in the communications among Distributed Generation Units (DGUs), which are governed by consensus-based control, and isolates the communication link over which the attack is injected. Each local attack detector requires only limited knowledge of its neighbors' dynamics. The detectability properties of the method are analyzed, along with a class of undetectable attacks. Results from numerical simulations demonstrate the effectiveness of the proposed approach.
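    To illustrate the kind of scheme this abstract describes, the sketch below (not the paper's actual detector) simulates a few DGUs running a consensus-based secondary voltage-correction layer, injects a constant bias on one communication link, and has each unit flag an incoming link whose received value changes abruptly between steps. The communication graph, voltage levels, attack parameters, and the simple rate-based residual are all illustrative assumptions.

```python
import numpy as np

np.random.seed(0)
N = 4                                  # number of DGUs
A = np.array([[0, 1, 0, 1],            # ring communication graph (adjacency matrix)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
dt, steps = 0.01, 500
v = 48.0 + 0.5 * np.random.randn(N)    # DGU voltages around a hypothetical 48 V reference
attack_src, attack_dst = 1, 0          # attacker corrupts data sent from DGU 1 to DGU 0
attack_start, attack_bias = 250, 2.0   # constant data injection from step 250 onward
threshold = 0.5                        # residual threshold for flagging a link

prev_received = np.tile(v, (N, 1))     # prev_received[i, j]: last value of v_j seen by DGU i
for k in range(steps):
    received = np.tile(v, (N, 1))      # received[i, j]: value of v_j as seen by DGU i
    if k >= attack_start:
        received[attack_dst, attack_src] += attack_bias   # data injection on link 1 -> 0
    # local monitoring: flag an incoming link whose received value jumps abruptly
    for i in range(N):
        for j in range(N):
            if A[i, j] and abs(received[i, j] - prev_received[i, j]) > threshold:
                print(f"step {k}: DGU {i} flags incoming link {j} -> {i}")
    prev_received = received.copy()
    # consensus-based secondary correction: drive voltages toward agreement
    dv = np.array([sum(A[i, j] * (received[i, j] - v[i]) for j in range(N))
                   for i in range(N)])
    v = v + dt * dv
```

    With these settings the legitimate per-step changes stay well below the threshold, so the onset of the bias injection is the only event that triggers a flag, and only the targeted incoming link is reported.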

    CURRENT TRENDS AND CHALLENGES IN DISTRIBUTED CONTROL SYSTEMS – AN OVERVIEW

    This paper reviews innovations in the field of distributed control systems. Without any claim to completeness, it provides a short summary of current trends in the area. Special attention is paid to the application of blockchain technologies in distributed control systems, game-theoretic approaches to distributed control, and the advantages of distributed control for power systems. One of the main issues facing modern distributed control systems, cybersecurity, is also considered.

    Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments

    An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed to be non-autonomous, with a nonzero control input, which allows the team behavior or mission to change in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H_infinity controller is first designed to prevent the effects of attacks on sensors and actuators from propagating throughout the network, and to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H_infinity optimal synchronization problem, and off-policy reinforcement learning is used to learn their solution without requiring any knowledge of the agents' dynamics. A trust-confidence-based distributed control protocol is then proposed to mitigate attacks that hijack an entire node as well as attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient reinforcement learning algorithm uses each agent's confidence value to indicate the trustworthiness of its own information and broadcasts it to the agent's neighbors, which use it to weight the data they receive from that agent during and after learning. If an agent's confidence value is low, it employs a trust mechanism to identify compromised agents and removes the data received from them from the learning process. Simulation results show the effectiveness of the proposed approach.
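    As a rough illustration of the trust-confidence weighting described above, the sketch below (not the paper's algorithm, which combines an observer-based H_infinity design with off-policy reinforcement learning) runs a confidence-weighted, leader-pinned consensus: each agent's broadcast is assigned a confidence value, neighbors weight received data by that confidence, and data from senders below a trust threshold is dropped. The complete graph, the median-based confidence proxy, the pinning gain, and the attack model are all illustrative assumptions.

```python
import numpy as np

np.random.seed(1)
N = 5
A = (np.ones((N, N)) - np.eye(N)).astype(int)   # complete communication graph
dt, steps = 0.05, 400
x = np.random.randn(N)          # follower states, to be synchronized to the leader
leader = 1.0                    # constant leader reference signal
compromised = 3                 # agent 3 is hijacked and broadcasts corrupted data
attack_start = 150
trust_threshold = 0.2           # data from senders below this confidence is dropped

for k in range(steps):
    broadcast = x.copy()
    if k >= attack_start:
        broadcast[compromised] = 10.0           # hijacked node broadcasts a bogus state
    # crude stand-in for a confidence value: how far each broadcast deviates from the
    # group median (the paper derives confidence from each agent's local evidence only)
    median = np.median(broadcast)
    confidence = 1.0 / (1.0 + np.abs(broadcast - median))
    dx = np.zeros(N)
    for i in range(N):
        for j in range(N):
            if A[i, j] == 0:
                continue
            if confidence[j] < trust_threshold:
                continue                        # trust mechanism: ignore low-confidence senders
            dx[i] += confidence[j] * (broadcast[j] - x[i])   # confidence-weighted consensus term
        dx[i] += 1.0 * (leader - x[i])          # pinning term toward the leader
    x = x + dt * dx

print("final follower states:", np.round(x, 3))
```

    Once the hijacked node starts broadcasting the bogus value, its confidence drops below the trust threshold and its data is excluded, so the remaining agents still synchronize to the leader reference.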