Deep Reinforcement Learning for Control of Microgrids: A Review
A microgrid is widely accepted as a prominent solution for enhancing resilience and performance in distributed power systems. Microgrids are flexible in adding distributed energy resources (DERs) to the ecosystem of electrical networks. Control techniques are used to synchronize DERs because of their turbulent nature. DERs including alternating-current, direct-current, and hybrid loads with storage systems are used in microgrids so frequently that controlling the flow of energy with traditional control approaches has become a complex task. Distributed as well as central control algorithms are well-known methods for regulating frequency and voltage in microgrids. Recently, techniques based on artificial intelligence have been applied to problems arising in the operation and control of the latest generation of microgrids and smart grids. Such techniques fall broadly into machine learning and deep learning. The objective of this research is to survey the latest microgrid control strategies that use the deep reinforcement learning (DRL) approach. Other artificial intelligence techniques have already been reviewed extensively, but the use of DRL has grown in the past couple of years. To bridge this gap for researchers, this survey focuses exclusively on DRL techniques for microgrid voltage control and frequency regulation, covering distributed, cooperative, and multi-agent approaches.
Stability Constrained Reinforcement Learning for Real-Time Voltage Control in Distribution Systems
Deep Reinforcement Learning (RL) has been recognized as a promising tool to
address the challenges in real-time control of power systems. However, its
deployment in real-world power systems has been hindered by a lack of explicit
stability and safety guarantees. In this paper, we propose a stability
constrained reinforcement learning method for real-time voltage control in both
single-phase and three-phase distribution grids and we prove that the proposed
approach provides a voltage stability guarantee. The key idea underlying our
approach is an explicitly constructed Lyapunov function that certifies
stability. We demonstrate the effectiveness of our approach with both
single-phase and three-phase IEEE test feeders, where the proposed method can
reduce the transient control cost by more than 25% and shorten the response
time by 21.5% on average compared to the widely used linear policy, while
always achieving voltage stability. In contrast, standard RL methods often fail
to achieve voltage stability.
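The structural idea in the abstract, a policy whose closed loop admits an explicit Lyapunov function, can be illustrated on a toy linearized single-bus model. This is a minimal sketch under assumed illustrative numbers (`X`, `k`, `v_ref` are not from the paper): a monotone decreasing feedback on the voltage deviation makes the squared deviation a Lyapunov function that shrinks every step.

```python
# Toy illustration of Lyapunov-stable voltage feedback (not the paper's policy).
# Linearized single-bus model: v_{t+1} = v_t + X * u_t, all values per unit.
X = 0.5      # assumed reactance-like sensitivity of voltage to control input
v_ref = 1.0  # reference voltage
k = 0.8      # feedback gain; the loop is stable whenever 0 < k * X < 2

def policy(v):
    # Monotone decreasing in the deviation, so V(v) = (v - v_ref)**2
    # decreases geometrically: deviation shrinks by (1 - k*X) each step.
    return -k * (v - v_ref)

v = 1.05  # initial over-voltage
for _ in range(50):
    v = v + X * policy(v)

print(abs(v - v_ref))  # deviation has decayed essentially to zero
```

The paper's contribution is certifying such a Lyapunov decrease for learned neural policies on realistic feeders; the sketch only shows why monotonicity in the deviation is the stabilizing ingredient.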
Advanced Optimization and Data-Driven Control in Smart Grid
Power grids have been evolving continuously over the past decades, bringing new challenges and opportunities at the same time. On one hand, the penetration of renewable generation and other distributed energy resources (DERs) is growing rapidly, and their distinct generation and control patterns can significantly impact daily operation. On the other hand, new communication, monitoring, and regulating devices are gradually being installed, enabling more control over generation, demand, and the grid, and making it feasible to deploy more sophisticated control schemes. To leverage these new techniques and overcome the new challenges in smart grids, different optimization and control problems need to be solved for different roles, including the system operator, demand-side resources, and financial traders.
For system operators, it is critical to maximize total social welfare while satisfying operational constraints. To better coordinate DERs and improve the efficiency of distribution systems, three-phase optimal power flow (OPF) algorithms are developed, including a DCOPF algorithm for robustness and an ACOPF algorithm for optimality. Moreover, deep reinforcement learning-based Volt-VAR control schemes are proposed to better maintain voltage stability and electricity service quality.
For demand-side resources, minimizing their energy bills while satisfying their energy needs is always the goal. Providing ancillary services by proactively adjusting their total demand is one potential choice. Through the provision of such services, demands not only receive incentives from the system operators but also help improve the reliability and stability of power grids. We develop control schemes specifically for data centers to provide phase-balancing service in the distribution system and frequency-regulation service in the transmission system.
For financial traders, the objective is to maximize total profit. A better trading strategy built on a more accurate forecast model can increase the traders' gain and further improve price convergence in the electricity market. Our machine learning-based trading framework outperforms the existing approach and lays the foundation for market-efficiency evaluation across markets.
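The DCOPF formulation mentioned above is, at its core, a linear program: minimize generation cost subject to power balance and capacity limits. A minimal sketch with `scipy.optimize.linprog` and illustrative numbers (the costs, demand, and limits below are invented, not from the dissertation):

```python
from scipy.optimize import linprog

# Toy DC-OPF-style economic dispatch: two generators serve a 150 MW
# demand at minimum cost, subject to their capacity limits.
c = [20.0, 30.0]               # marginal costs in $/MWh
A_eq = [[1.0, 1.0]]            # power balance: g1 + g2 = demand
b_eq = [150.0]                 # demand in MW
bounds = [(0, 100), (0, 100)]  # generator output limits in MW

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # cheapest dispatch: the 20 $/MWh unit runs at its 100 MW limit
```

A full DCOPF additionally includes line-flow limits expressed through power transfer distribution factors, which enter as extra inequality rows in the same LP.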
Reinforcement Learning Based Robust Volt/Var Control in Active Distribution Networks With Imprecisely Known Delay
Active distribution networks (ADNs) incorporating massive photovoltaic (PV)
devices encounter challenges of rapid voltage fluctuations and potential
violations. Due to the fluctuation and intermittency of PV generation, the
state gap, arising from time-inconsistent states and exacerbated by imprecisely
known system delays, significantly impacts the accuracy of voltage control.
This paper addresses this challenge by introducing a framework for delay
adaptive Volt/Var control (VVC) in the presence of imprecisely known system
delays to regulate the reactive power of PV inverters. The proposed approach
formulates the voltage control, based on predicted system operation states, as
a robust VVC problem. It employs sample selection from the state prediction
interval to promptly identify the worst-performing system operation state.
Furthermore, we leverage the decentralized partially observable Markov decision
process (Dec-POMDP) to reformulate the robust VVC problem, and design a
Multiple Policy Networks and Reward Shaping-based Multi-agent Twin Delayed Deep
Deterministic Policy Gradient (MPNRS-MATD3) algorithm to solve the
Dec-POMDP-based problem efficiently.
Simulation results demonstrate the delay-adaptation characteristic of the
proposed framework, and MPNRS-MATD3 outperforms other multi-agent
reinforcement learning algorithms in robust voltage control.
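The robust step described above, sampling from a state prediction interval and controlling against the worst-performing sample, can be sketched in a few lines. All numbers here (voltage band, interval center, interval width) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def voltage_violation(v, lo=0.95, hi=1.05):
    # Total per-unit voltage excursion outside the allowed band.
    return np.maximum(v - hi, 0.0).sum() + np.maximum(lo - v, 0.0).sum()

# The true delayed state is unknown; only a prediction interval is available.
# Sample candidate states from the interval and keep the worst performer,
# mirroring the worst-case selection in the robust VVC formulation.
center = np.array([1.00, 1.03, 0.97])  # predicted bus voltages (illustrative)
half_width = 0.04                      # assumed interval half-width
samples = center + rng.uniform(-half_width, half_width, size=(100, 3))
worst = samples[np.argmax([voltage_violation(s) for s in samples])]
print(voltage_violation(worst))  # the controller would then act against `worst`
```

Because the violation measure is nonnegative, the selected worst-case sample is never easier than the nominal prediction, which is what makes the resulting control policy robust to the imprecisely known delay.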