
    Deep Reinforcement Learning for Control of Microgrids: A Review

    A microgrid is widely accepted as a prominent solution for enhancing resilience and performance in distributed power systems. Microgrids offer flexibility for adding distributed energy resources (DERs) to the ecosystem of electrical networks, and control techniques are needed to synchronize DERs because of their turbulent nature. DERs comprising alternating-current, direct-current, and hybrid loads with storage systems are now used in microgrids so frequently that controlling the flow of energy has become a complex task for traditional control approaches. Distributed as well as central control algorithms are well-known methods for regulating frequency and voltage in microgrids. Recently, techniques based on artificial intelligence, broadly categorized into machine learning and deep learning, have been applied to problems arising in the operation and control of the latest generation of microgrids and smart grids. The objective of this research is to survey the latest microgrid control strategies that use the deep reinforcement learning (DRL) approach. Other artificial-intelligence techniques have already been reviewed extensively, but the use of DRL has grown rapidly in the past couple of years. To bridge this gap for researchers, this survey focuses exclusively on DRL techniques for microgrid control; DRL methods for voltage control and frequency regulation with distributed, cooperative, and multi-agent approaches are presented.
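    The control loop this survey covers can be sketched with a deliberately toy example (not taken from any surveyed paper): tabular Q-learning on a hypothetical one-dimensional voltage-deviation environment, where the discretized state is the deviation from nominal voltage and the actions nudge reactive power. Real DRL controllers use deep networks and realistic grid models; the dynamics and reward below are illustrative assumptions only.

    ```python
    import random

    STATES = range(-3, 4)   # discretized voltage-deviation buckets (hypothetical)
    ACTIONS = (-1, 0, 1)    # reduce / hold / raise reactive power

    def step(state, action):
        """Toy dynamics: the action shifts the deviation; reward penalizes deviation."""
        nxt = max(-3, min(3, state + action))
        return nxt, -abs(nxt)

    def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
        """Standard tabular Q-learning with epsilon-greedy exploration."""
        rng = random.Random(seed)
        q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
        for _ in range(episodes):
            s = rng.choice(list(STATES))
            for _ in range(10):
                if rng.random() < eps:
                    a = rng.choice(ACTIONS)
                else:
                    a = max(ACTIONS, key=lambda act: q[(s, act)])
                s2, r = step(s, a)
                q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
                s = s2
        return q

    q = train()
    # The learned greedy policy pushes any deviation back toward zero.
    policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in STATES}
    ```

    In the deep variants the survey reviews, the Q-table is replaced by a neural network so the controller can handle continuous voltage and frequency measurements.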

    Stability Constrained Reinforcement Learning for Real-Time Voltage Control in Distribution Systems

    Deep Reinforcement Learning (RL) has been recognized as a promising tool to address the challenges in real-time control of power systems. However, its deployment in real-world power systems has been hindered by a lack of explicit stability and safety guarantees. In this paper, we propose a stability constrained reinforcement learning method for real-time voltage control in both single-phase and three-phase distribution grids, and we prove that the proposed approach provides a voltage stability guarantee. The key idea underlying our approach is an explicitly constructed Lyapunov function that certifies stability. We demonstrate the effectiveness of our approach with both single-phase and three-phase IEEE test feeders, where the proposed method can reduce the transient control cost by more than 25% and shorten the response time by 21.5% on average compared to the widely used linear policy, while always achieving voltage stability. In contrast, standard RL methods often fail to achieve voltage stability.
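    The kind of certificate this abstract refers to can be illustrated on a scalar surrogate system (an assumption for illustration, not the paper's actual construction): for dynamics x_{t+1} = x_t + dt*(a*x_t + u_t) under a stabilizing linear policy u_t = -k*x_t, the quadratic function V(x) = x^2 decreases strictly along every trajectory, which is the Lyapunov-style property the paper enforces as a constraint on the learned policy.

    ```python
    def simulate(x0, a=0.5, k=2.0, dt=0.1, steps=50):
        """Roll out the closed loop and record the Lyapunov value V(x) = x^2.

        With a - k = -1.5 and dt = 0.1 the state contracts by a factor of
        0.85 per step, so V shrinks geometrically toward zero.
        """
        x, vs = x0, []
        for _ in range(steps):
            u = -k * x                  # linear policy (the paper's baseline class)
            x = x + dt * (a * x + u)    # x_{t+1} = (1 + dt*(a - k)) * x_t
            vs.append(x * x)            # Lyapunov value along the trajectory
        return vs

    vs = simulate(x0=1.0)
    ```

    In the paper's setting the policy is a neural network and the dynamics come from the distribution grid, but the monotone decrease of a constructed V is the same idea.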

    Reinforcement Learning and Its Applications in Modern Power and Energy Systems: A Review


    Reinforcement Learning Based Robust Volt/Var Control in Active Distribution Networks With Imprecisely Known Delay

    Active distribution networks (ADNs) incorporating massive photovoltaic (PV) devices encounter challenges of rapid voltage fluctuations and potential violations. Due to the fluctuation and intermittency of PV generation, the state gap, arising from time-inconsistent states and exacerbated by imprecisely known system delays, significantly impacts the accuracy of voltage control. This paper addresses this challenge by introducing a framework for delay-adaptive Volt/Var control (VVC) in the presence of imprecisely known system delays to regulate the reactive power of PV inverters. The proposed approach formulates the voltage control, based on predicted system operation states, as a robust VVC problem. It employs sample selection from the state prediction interval to promptly identify the worst-performing system operation state. Furthermore, we leverage the decentralized partially observable Markov decision process (Dec-POMDP) to reformulate the robust VVC problem, and we design a Multiple Policy Networks and Reward Shaping-based Multi-Agent Twin Delayed Deep Deterministic Policy Gradient (MPNRS-MATD3) algorithm to efficiently solve the Dec-POMDP model-based problem. Simulation results show the delay-adaptation capability of the proposed framework, and MPNRS-MATD3 outperforms other multi-agent reinforcement learning algorithms in robust voltage control.
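    The decentralized, partially observable structure of the problem can be sketched with a much simpler stand-in than the paper's method: a proportional Volt/Var droop rule, NOT the MPNRS-MATD3 algorithm. Each agent observes only its own bus voltage (the partial observability in the Dec-POMDP formulation) and adjusts its inverter's reactive power toward the 1.0 p.u. setpoint. The voltage sensitivity below is a made-up illustration, not network data.

    ```python
    NOMINAL = 1.0        # per-unit voltage setpoint
    SENSITIVITY = 0.05   # assumed dV per unit of reactive power (hypothetical)

    def agent_action(local_voltage, gain=4.0, q_max=1.0):
        """One agent's policy: proportional reactive-power response, clipped
        to the inverter's capability limit."""
        q = gain * (NOMINAL - local_voltage)
        return max(-q_max, min(q_max, q))

    def simulate(voltages, steps=20):
        """Fully decentralized rollout: each bus reacts only to its own
        measurement, with no communication between agents."""
        v = list(voltages)
        for _ in range(steps):
            v = [vi + SENSITIVITY * agent_action(vi) for vi in v]
        return v

    final = simulate([0.93, 1.06, 0.98])   # undervoltage, overvoltage, near-nominal
    ```

    MPNRS-MATD3 replaces this fixed proportional rule with learned per-agent policy networks, trained with centralized critics and reward shaping to remain robust to the imprecisely known delays.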