
    Deep Reinforcement Learning for Control of Microgrids: A Review

    Get PDF
    A microgrid is widely accepted as a prominent solution for enhancing resilience and performance in distributed power systems. Microgrids offer flexibility for adding distributed energy resources (DERs) to electrical networks, and control techniques are needed to synchronize DERs because of their turbulent nature. AC, DC, and hybrid loads with storage systems are now used in microgrids quite frequently, which has made controlling the flow of energy a complex task for traditional control approaches. Both distributed and centralized control algorithms are well-known methods for regulating frequency and voltage in microgrids. Recently, techniques based on artificial intelligence, broadly categorized as machine learning and deep learning, have been applied to problems arising in the operation and control of the latest generation of microgrids and smart grids. The objective of this research is to survey the latest microgrid control strategies that use deep reinforcement learning (DRL). Other artificial intelligence techniques have already been reviewed extensively, but the use of DRL has grown in the past couple of years. To bridge this gap, this survey focuses solely on DRL techniques for microgrid voltage control and frequency regulation, covering distributed, cooperative, and multi-agent approaches.
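Not from the survey itself, but a toy illustration of the DRL idea it reviews: tabular Q-learning on a hypothetical one-dimensional frequency-regulation task. The "microgrid" here is an assumed scalar model in which the frequency deviation drifts under load noise and the agent picks a discrete power adjustment to drive it back to zero; all dynamics and constants are made up for illustration.

```python
import random

random.seed(0)

ACTIONS = [-0.5, 0.0, 0.5]   # per-step power adjustment (assumed units)
N_BINS = 11                  # discretised deviation states over [-1, 1] Hz

def to_state(dev):
    """Map a (clipped) frequency deviation to a discrete state index."""
    dev = max(-1.0, min(1.0, dev))
    return int(round((dev + 1.0) / 2.0 * (N_BINS - 1)))

def step(dev, action):
    """Simplified dynamics: deviation responds to control plus load noise."""
    new_dev = dev + 0.4 * action + random.uniform(-0.05, 0.05)
    reward = -abs(new_dev)   # penalise any deviation from nominal
    return new_dev, reward

Q = [[0.0] * len(ACTIONS) for _ in range(N_BINS)]
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(500):
    dev = random.uniform(-1.0, 1.0)
    for _ in range(50):
        s = to_state(dev)
        a = (random.randrange(len(ACTIONS)) if random.random() < eps
             else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
        dev, r = step(dev, ACTIONS[a])
        s2 = to_state(dev)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])

# After training, the greedy policy should push a large positive deviation down.
best = max(range(len(ACTIONS)), key=lambda i: Q[to_state(0.8)][i])
print(ACTIONS[best])
```

Real DRL controllers in the surveyed literature replace the table with a neural network and the toy dynamics with a power-system simulation, but the learning loop has this same shape.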

    A Multi-Agent Deep Reinforcement Learning Based Voltage Regulation Using Coordinated PV Inverters

    Get PDF

    Reinforcement Learning and Its Applications in Modern Power and Energy Systems: A Review

    Get PDF

    Deep Reinforcement Learning for Distribution Network Operation and Electricity Market

    Full text link
    The conventional operation of distribution networks and electricity markets has become challenging under complicated network operating conditions, due to emerging distributed electricity generation, coupled energy networks, and new market behaviours. These challenges include increasing dynamics and stochasticity, as well as vast problem dimensions: control points, measurements, multiple objectives, etc. Previously, the optimization models were often formulated as conventional programming problems and then solved mathematically, which can now become highly time-consuming or sometimes infeasible. On the other hand, with recent advances in artificial intelligence, deep reinforcement learning (DRL) algorithms have demonstrated excellent performance in various control and optimization fields, indicating a potential alternative for addressing these challenges. In this thesis, DRL-based solutions for distribution network operation and the electricity market are investigated and proposed. Firstly, a DRL-based methodology is proposed for Volt/Var Control (VVC) optimization in a large distribution network, to effectively control bus voltages and reduce network power losses. Further, the thesis proposes a multi-agent (MA)DRL-based methodology under a complex regional coordinated VVC framework that can address spatial and temporal uncertainties; the DRL algorithm is also improved to suit these applications. Then, an integrated energy and heating systems (IEHS) optimization problem, which conventionally could only be solved through simplifications or iterations, is solved by a MADRL-based methodology. Beyond the applications in distribution network operation, a new electricity market service pricing method based on a DRL algorithm is also proposed. This DRL-based method demonstrates good performance in a virtual storage rental service pricing problem, a bi-level problem that could hardly be solved directly because of its non-convex and non-continuous lower-level problem. The proposed methods demonstrate advantageous performance in comprehensive case studies, and numerical simulation results validate their effectiveness and high efficiency under different sophisticated operating conditions, their robustness against temporal and spatial uncertainties, and their optimality under large problem dimensions.
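To make the Volt/Var Control framing concrete, here is a minimal sketch (assumed names and numbers, not from the thesis) of how a VVC task can be posed as an RL environment: the state is the bus voltages, the action sets reactive-power injections, and the reward penalises voltage-limit violations plus a surrogate for network losses.

```python
class ToyVVCEnv:
    """Hypothetical 3-bus feeder with a linearised voltage response."""
    V_MIN, V_MAX = 0.95, 1.05   # per-unit voltage limits

    def __init__(self):
        self.voltages = [1.08, 0.97, 0.93]   # per-unit, assumed initial profile

    def step(self, q_injections):
        # Linearised surrogate: local reactive injection raises local voltage.
        self.voltages = [v + 0.02 * q
                         for v, q in zip(self.voltages, q_injections)]
        violation = sum(max(0.0, v - self.V_MAX) + max(0.0, self.V_MIN - v)
                        for v in self.voltages)
        losses = 0.01 * sum(q * q for q in q_injections)   # quadratic loss proxy
        reward = -(violation + losses)
        return self.voltages, reward

env = ToyVVCEnv()
# Absorb reactive power at the high bus, inject at the low bus.
voltages, r = env.step([-1.5, 0.0, 1.0])
print([round(v, 3) for v in voltages], round(r, 4))
```

A DRL agent such as the one in the thesis would learn the `q_injections` mapping from repeated interaction with a full power-flow model rather than this linear stand-in.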

    Fusion of Model-free Reinforcement Learning with Microgrid Control: Review and Vision

    Get PDF
    Challenges and opportunities coexist in microgrids as a result of emerging large-scale distributed energy resources (DERs) and advanced control techniques. In this paper, a comprehensive review of microgrid control and its fusion with model-free reinforcement learning (MFRL) is presented. A high-level research map of microgrid control is developed from six distinct perspectives, followed by bottom-level modularized control blocks illustrating the configurations of grid-following (GFL) and grid-forming (GFM) inverters. Then, mainstream MFRL algorithms are introduced with an explanation of how MFRL can be integrated into the existing control framework. Next, application guidelines for MFRL are summarized with a discussion of three approaches for fusing MFRL with the existing control framework, i.e., model identification and parameter tuning, supplementary signal generation, and controller substitution. Finally, the fundamental challenges associated with adopting MFRL in microgrid control, and corresponding insights for addressing these concerns, are fully discussed. Comment: 14 pages, 4 figures; published in IEEE Transactions on Smart Grid, 15 Nov. 2022.
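Of the three fusing approaches the paper names, "supplementary signal generation" is the easiest to sketch: a conventional controller stays in the loop and a learned policy only adds a bounded corrective term. The sketch below is an assumed illustration (the PI gains, clip limits, and the stub standing in for a trained policy are all hypothetical), not the paper's implementation.

```python
class PIController:
    """Conventional PI regulator kept as the baseline controller."""
    def __init__(self, kp, ki):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def control(self, error, dt):
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

def rl_supplement(state):
    # Placeholder for a trained policy network; the output is clipped so
    # the learned term can only nudge, never override, the baseline.
    raw = 0.05 * state          # stand-in for policy(state)
    return max(-0.1, min(0.1, raw))

pi = PIController(kp=1.2, ki=0.4)
v_ref, v_meas, dt = 1.0, 0.93, 0.01
error = v_ref - v_meas
u = pi.control(error, dt) + rl_supplement(error)   # combined command
print(round(u, 4))
```

Because the supplementary term is clipped, the system falls back to plain PI behaviour if the learned policy misbehaves, which is one reason this fusion style is attractive for safety-critical grid control.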

    Multiagent-Based Control for Plug-and-Play Batteries in DC Microgrids with Infrastructure Compensation

    Get PDF
    The influence of the DC infrastructure on the control of power-storage flow in micro- and smart grids has gained attention recently, particularly in dynamic vehicle-to-grid charging applications. Principal effects include the potential loss of charge–discharge synchronization and the subsequent impact on control stabilization, increased degradation of battery health and life, and resultant power- and energy-efficiency losses. This paper proposes and tests a candidate solution that compensates for the infrastructure effects in a DC microgrid with a varying number of heterogeneous battery storage systems, in the context of a multiagent neighbor-to-neighbor control scheme. Specifically, the scheme regulates the balance of the batteries' load-demand participation, with adaptive compensation for unknown and/or time-varying DC infrastructure influences. Simulation and hardware-in-the-loop studies in realistic conditions demonstrate improved precision of the charge–discharge synchronization and an enhanced balance of the output voltage under 24 h of continuous, highly variable load demand. In addition, immediate real-time compensation for the DC infrastructure influence can be attained with no need for initial estimates of key unknown parameters. The results provide both validation and verification of the proposals under real operational conditions and expectations, including dynamic switching of the heterogeneous batteries' connections (plug-and-play) and the variable infrastructure influences of different dynamically switched branches. Key observed metrics include a reduced average convergence time (0.66–13.366%), enhanced output-voltage balance (2.637–3.24%), reduced power consumption (3.569–4.93%), and enhanced power-flow balance (2.755–6.468%) for the proposed scheme over a baseline in the experiments in question.
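The neighbor-to-neighbor idea can be sketched with a standard consensus update, in which each battery agent adjusts its state toward its communication neighbors' states. This is a generic illustration of the coordination pattern, not the paper's controller; the line-graph topology, gain, and SoC values are assumptions.

```python
def consensus_step(soc, gain=0.3):
    """One synchronous neighbor-to-neighbor update on a line graph."""
    new = soc[:]
    for i in range(len(soc)):
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(soc)]
        new[i] = soc[i] + gain * sum(soc[j] - soc[i] for j in neighbors)
    return new

soc = [0.9, 0.6, 0.4, 0.7]          # heterogeneous initial SoCs (assumed)
for _ in range(100):
    soc = consensus_step(soc)

print([round(s, 3) for s in soc])   # all agents converge near the average
```

Each agent needs only its neighbors' values, no central coordinator, which is what makes the scheme tolerant of batteries joining and leaving (plug-and-play). The gain must stay below 2 divided by the largest Laplacian eigenvalue of the communication graph for the update to converge.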

    Management of Distributed Energy Storage Systems for Provisioning of Power Network Services

    Full text link
    For environmental reasons and thanks to advanced technological development, a significant number of renewable energy sources (RESs) have been integrated into existing power networks. The increasing penetration and uneven allocation of RESs and load demands can lead to power quality issues and system instability in the power networks. Moreover, high RES penetration can also cause low inertia due to the lack of rotational machines, leading to frequency instability. Consequently, the resilience, stability, and power quality of the power networks deteriorate. This thesis proposes and develops new strategies for energy storage (ES) systems distributed in power networks to compensate for unbalanced active powers and supply-demand mismatches and to improve power quality, while taking the constraints of the ES into consideration. The thesis is divided into two main parts. In the first part, unbalanced active powers and supply-demand mismatches, caused by the uneven allocation of rooftop PV units and load demands, are compensated by employing the distributed ES systems within novel frameworks based on distributed control systems and deep reinforcement learning. There have been limited studies using distributed battery ES systems to mitigate unbalanced active powers in three-phase four-wire and grounded power networks. Distributed control strategies are proposed to compensate for the unbalanced conditions. To group households in the same phase into the same cluster, algorithms based on feature states and labelled phase data are applied. Within each cluster, distributed dynamic active power balancing strategies are developed to control phase active powers to be close to the reference average phase power, so that the phase active powers become balanced. To alleviate the supply-demand mismatch caused by high PV generation, a distributed active power control system is developed.
The strategy consists of supply-demand mismatch and battery SoC balancing. Control parameters are designed using Hurwitz matrices and Lyapunov theory. The distributed ES systems can minimise the total mismatch between power generation and consumption, so that reverse power flowing back to the main grid is decreased; thus, voltage rise and voltage fluctuation are reduced. Furthermore, as a model-free approach, new frameworks based on Markov decision processes and Markov games are developed to compensate for unbalanced active powers. The frameworks require only a proper design of states, action and reward functions, along with training and testing on real data of PV generation and load demands; dynamic models and control parameter designs are no longer required. The developed frameworks are then solved using the DDPG and MADDPG algorithms. In the second part, the distributed ES systems are employed to improve frequency, inertia, voltage, and active power allocation in both islanded AC and DC microgrids through novel decentralized control strategies. In an islanded DC datacentre microgrid, a novel decentralized control of heterogeneous ES systems is proposed. High- and low-frequency components of datacentre loads are shared by ultracapacitors and batteries using virtual capacitive and virtual resistance droop controllers, respectively. A decentralized SoC balancing control is proposed to balance battery SoCs to a common value. The stability model ensures the ES devices operate within predefined limits. In an isolated AC microgrid, decentralized frequency control of distributed battery ES systems is proposed. The strategy includes adaptive frequency droop control based on current battery SoCs, virtual inertia control to improve the frequency nadir, and frequency restoration control to restore the system frequency to its nominal value without depending on communication infrastructure. A small-signal model of the proposed strategy is developed for calculating control parameters.
The proposed strategies in this thesis are verified using MATLAB/Simulink with the Reinforcement Learning and Deep Learning Toolboxes, and RTDS Technologies' real-time digital simulator with accurate power networks, switching-level models of power electronic converters, and a nonlinear battery model.
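The SoC-adaptive droop idea mentioned above can be sketched with a common weighting form in which a battery's power response to a frequency deviation is scaled by its state of charge, so fuller batteries discharge more and emptier ones less. The power-of-SoC weighting, exponent, and limits below are assumptions for illustration, not the thesis's exact control law.

```python
def adaptive_droop_power(freq_dev, soc, p_max=10.0, n=2):
    """SoC-weighted frequency droop response (illustrative form).

    freq_dev : frequency deviation from nominal, in Hz (negative = under-frequency)
    soc      : state of charge in [0, 1]
    Returns the power command in kW (positive = inject into the grid).
    """
    # Discharging (under-frequency): weight by soc**n, so full batteries lead.
    # Charging (over-frequency): weight by (1 - soc)**n, so empty batteries lead.
    weight = soc ** n if freq_dev < 0 else (1.0 - soc) ** n
    return -p_max * weight * freq_dev

# Under-frequency event of -0.2 Hz: with n = 2, the battery at 90% SoC
# contributes (0.9/0.3)**2 = 9x more power than the one at 30% SoC.
p_full = adaptive_droop_power(-0.2, soc=0.9)
p_low = adaptive_droop_power(-0.2, soc=0.3)
print(round(p_full, 3), round(p_low, 3))
```

Weighting the droop gain this way drives the batteries' SoCs toward each other over time, which is the same goal the decentralized SoC balancing control in the abstract pursues without any communication between units.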