
    Deep Reinforcement Learning for Distribution Network Operation and Electricity Market

    Conventional distribution network and electricity market operation has become challenging under complicated network operating conditions, due to emerging distributed generation, coupled energy networks, and new market behaviours. These challenges include increasing dynamics and stochasticity, and vast problem dimensions such as control points, measurements, and multiple objectives. Previously, the optimization models were often formulated as conventional programming problems and solved mathematically, which can now be highly time-consuming or even infeasible. On the other hand, with recent advances in artificial intelligence, deep reinforcement learning (DRL) algorithms have demonstrated excellent performance in various control and optimization fields, indicating a potential alternative for addressing these challenges. In this thesis, DRL-based solutions for distribution network operation and the electricity market are investigated and proposed. Firstly, a DRL-based methodology is proposed for Volt/Var Control (VVC) optimization in a large distribution network, to effectively control bus voltages and reduce network power losses. Further, this thesis proposes a multi-agent DRL (MADRL)-based methodology under a complex regional coordinated VVC framework, which can address spatial and temporal uncertainties; the DRL algorithm is also improved to suit these applications. Then, an integrated energy and heating systems (IEHS) optimization problem is solved by a MADRL-based methodology, where conventionally this could only be solved through simplifications or iterations. Beyond the applications in distribution network operation, a new electricity market service pricing method based on a DRL algorithm is also proposed. This DRL-based method demonstrates good performance on the virtual storage rental service pricing problem, a bi-level problem that can hardly be solved directly due to its non-convex and non-continuous lower-level problem. The proposed methods demonstrate advantageous performance in comprehensive case studies, and numerical simulation results validate their effectiveness and high efficiency under different sophisticated operating conditions, robustness against temporal and spatial uncertainties, and optimality under large problem dimensions.
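The agent-environment loop behind DRL-based Volt/Var Control can be illustrated with a minimal tabular Q-learning sketch. Everything here is invented for illustration (a one-bus toy with three tap positions and a voltage-deviation reward standing in for losses plus violation penalties); the thesis's actual networks, state spaces, and reward design are far richer.

```python
import numpy as np

# Hypothetical toy environment: one capacitor/tap controller regulating the
# voltage at a single bus, discretised into 5 voltage levels. All dynamics,
# dimensions, and the reward weighting are invented for illustration.
rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 5, 3            # discretised voltage levels x tap actions
Q = np.zeros((N_STATES, N_ACTIONS))   # tabular Q-function
alpha, gamma, eps = 0.1, 0.95, 0.1    # learning rate, discount, exploration

def step(state, action):
    # Each tap action nudges the bus voltage one level toward its target;
    # the reward penalises deviation from the nominal band (state 2), a
    # stand-in for power loss plus voltage-violation cost.
    target = action + 1                   # tap positions map to states 1..3
    next_state = int(np.clip(state + np.sign(target - state), 0, N_STATES - 1))
    reward = -abs(next_state - 2)         # 0 at nominal voltage
    return next_state, reward

state = 0
for _ in range(2000):
    # epsilon-greedy action selection
    action = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[state].argmax())
    nxt, r = step(state, action)
    # standard Q-learning temporal-difference update
    Q[state, action] += alpha * (r + gamma * Q[nxt].max() - Q[state, action])
    state = nxt
```

After training, the greedy policy at the nominal voltage state keeps the tap that holds the voltage there, which is the VVC behaviour the reward encodes.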

    Resilience-driven planning and operation of networked microgrids featuring decentralisation and flexibility

    High-impact, low-probability extreme events, including both man-made events and natural weather events, can cause severe damage to power systems. These events are typically rare but feature long durations and large scale. Many research efforts have addressed the resilience enhancement of modern power systems. In recent years, microgrids (MGs) with distributed energy resources (DERs), including both conventional generation resources and renewable energy sources, have provided a viable solution for the resilience enhancement of such multi-energy systems during extreme events. More specifically, several islanded MGs can be connected with each other as a cluster after extreme events, which has the advantage of significantly reducing load shedding through energy sharing among them. On the other hand, mobile power sources (MPSs) such as mobile energy storage systems (MESSs), electric vehicles (EVs), and mobile emergency generators (MEGs) have been gradually deployed in current energy systems for resilience enhancement due to their significant advantages in mobility and flexibility. In this context, a detailed literature review on resilience-driven planning and operation problems featuring MGs is presented, and research limitations are briefly summarised. This thesis then investigates how to develop appropriate planning and operation models for the resilience enhancement of networked MGs via different types of DERs (e.g., MGs, ESSs, EVs, MESSs, etc.). The research is conducted in the following application scenarios: 1. This thesis proposes novel operation strategies for hybrid AC/DC MGs and networked MGs towards resilience enhancement. Three modelling approaches, namely centralised control, hierarchical control, and distributed control, are applied to formulate the proposed operation problems.
A detailed non-linear AC OPF algorithm is employed to model each MG, capturing all network and technical constraints relating to stability (e.g., voltage limits, active and reactive power flow limits, and power losses), while uncertainties associated with renewable energy sources and load profiles are incorporated into the proposed models via stochastic programming. The impacts of limited generation resources, load distinction into critical and non-critical, and severe contingencies (e.g., multiple line outages) are appropriately captured to mimic realistic scenarios. 2. This thesis introduces MPSs (e.g., EVs and MESSs) into the suggested networked MGs against the severe contingencies caused by extreme events. Specifically, the time-coupled routing and scheduling characteristics of MPSs inside each MG are modelled to reduce load shedding when each MG suffers large damage during extreme events. Both transportation networks and power networks are considered in the proposed models, while the transport time of MPSs between different transportation nodes is also appropriately captured. 3. This thesis develops realistic planning models for the optimal sizing problem of networked MGs, capturing a trade-off between resilience and cost, while both internal uncertainties and external contingencies are considered in the suggested three-level planning model. Additionally, a resilience-driven planning model is developed to solve the coupled optimal sizing and pre-positioning problem of MESSs in the context of decentralised networked MGs. Internal uncertainties are captured in the model via stochastic programming, while external contingencies are included through the three-level structure. 4. This thesis investigates the application of artificial intelligence techniques to power system operations.
Specifically, a model-free multi-agent reinforcement learning (MARL) approach is proposed for the coordinated routing and scheduling problem of multiple MESSs towards resilience enhancement. The parameterised double deep Q-network (P-DDQN) method is employed to capture a hybrid policy including both discrete and continuous actions. A coupled power-transportation network featuring a linearised AC OPF algorithm is realised as the environment, while uncertainties associated with renewable energy sources, load profiles, line outages, and traffic volumes are incorporated into the proposed data-driven approach through the learning procedure.
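The hybrid discrete-continuous policy that a parameterised DQN captures can be sketched as follows. This is a deliberately minimal illustration of the action-selection idea only, using tiny linear "networks" in place of deep ones; the dimensions (three discrete MESS decisions, one continuous setpoint each) and weight matrices are invented, and no training is shown.

```python
import numpy as np

# Sketch of hybrid action selection in the spirit of P-DDQN: one network
# proposes a continuous parameter for every discrete action, and a Q-network
# scores the discrete actions given the state and all proposed parameters.
# All dimensions and the linear approximators are illustrative assumptions.
rng = np.random.default_rng(1)
state_dim, n_discrete, param_dim = 4, 3, 1   # e.g. 3 routing choices, 1 power setpoint each

W_param = rng.normal(size=(n_discrete * param_dim, state_dim))          # parameter "actor"
W_q = rng.normal(size=(n_discrete, state_dim + n_discrete * param_dim)) # Q "critic"

def select_action(state):
    # 1) propose a bounded continuous parameter for every discrete action
    params = np.tanh(W_param @ state)                 # setpoints in [-1, 1]
    # 2) score each discrete action given the state and all parameters
    q_values = W_q @ np.concatenate([state, params])
    k = int(q_values.argmax())                        # greedy discrete choice
    # return the chosen discrete action and its own continuous parameter
    return k, params[k * param_dim:(k + 1) * param_dim]

k, p = select_action(rng.normal(size=state_dim))
```

In a full P-DDQN both weight sets would be deep networks trained jointly, with the double-DQN target used to reduce overestimation.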

    Reinforcement Learning and Its Applications in Modern Power and Energy Systems: A Review


    A Review on Application of Artificial Intelligence Techniques in Microgrids

    A microgrid can be formed by integrating different components such as loads, renewable/conventional units, and energy storage systems in a local area. Microgrids, with the advantages of being flexible, environmentally friendly, and self-sufficient, can improve power system performance metrics such as resiliency and reliability. However, the design and implementation of microgrids always face different challenges, considering the uncertainties associated with loads and renewable energy resources (RERs), sudden load variations, the energy management of several energy resources, etc. It is therefore necessary to employ rapid and accurate methods, such as artificial intelligence (AI) techniques, to address these challenges and improve the microgrid's efficiency, stability, security, and reliability. Utilizing AI helps to develop systems as intelligent as humans that can learn, decide, and solve problems. This paper presents a review of different applications of AI-based techniques in microgrids, such as energy management, load and generation forecasting, protection, power electronics control, and cyber security. Different AI tasks in microgrids, such as regression and classification, are discussed using methods including machine learning, artificial neural networks, fuzzy logic, support vector machines, etc. The advantages, limitations, and future trends of AI applications in microgrids are discussed.

    Applications of Probabilistic Forecasting in Smart Grids : A Review

    This paper reviews recent studies dealing with probabilistic forecasting models and their applications in smart grids. Based on these studies, it introduces a roadmap towards decision-making under uncertainty in a smart grid environment. It first discusses the common methods employed to predict the distribution of variables, and then reviews how the recent literature has used these forecasting methods and for which uncertain parameters distributions were sought. Unlike existing reviews, this paper assesses several uncertain parameters for which probabilistic forecasting models have been developed. Next, it provides an overview of how scenarios of uncertain parameters are generated from their distributions, and how these scenarios are adopted for optimal decision-making. In this regard, the paper discusses three types of optimization problems aiming to capture uncertainties and reviews the related papers. Finally, we propose some future applications of probabilistic forecasting based on the flexibility challenges of power systems in the near future.
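The scenario-generation step the review describes, sampling from a probabilistic forecast and reducing to a tractable scenario set for stochastic optimization, can be sketched briefly. The per-hour normal forecast, its means and standard deviations, and the naive distance-based reduction rule are all illustrative assumptions, not a method from the paper.

```python
import numpy as np

# Sketch: sample hourly load scenarios from a per-hour normal forecast
# distribution, then keep a small reduced set for a stochastic optimisation
# model. Forecast means/uncertainties are invented for illustration.
rng = np.random.default_rng(42)

hours = 24
mu = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, hours))   # forecast mean (MW)
sigma = 0.1 * mu                                          # forecast std (MW)

scenarios = rng.normal(mu, sigma, size=(1000, hours))     # 1000 sampled paths

# Naive scenario reduction: keep the n_keep paths closest to the mean path.
# Real studies often use probability-distance-based reduction instead.
n_keep = 10
dist = np.linalg.norm(scenarios - scenarios.mean(axis=0), axis=1)
reduced = scenarios[np.argsort(dist)[:n_keep]]
```

The reduced set would then enter a scenario-based (e.g. two-stage stochastic) optimization problem, each scenario weighted by its probability.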

    Energy Harvesting and Energy Storage Systems

    This book discusses recent developments in energy harvesting and energy storage systems. Sustainable development rests on three pillars: economic development, environmental stewardship, and social equity. One of the guiding principles for balancing these pillars is to limit the use of non-renewable energy sources.

    Optimal energy management for a grid-tied solar PV-battery microgrid: A reinforcement learning approach

    There has been a shift towards energy sustainability in recent years, and this shift should continue. The steady growth of energy demand due to population growth, heightened concern about anthropogenic gases released into the atmosphere, and the deployment of advanced grid technologies have spurred the penetration of renewable energy resources (RERs) at different locations and scales in the power grid. As a result, the energy system is moving away from the centralized paradigm of large, controllable power plants and toward a decentralized network based on renewables. Microgrids, either grid-connected or islanded, provide a key solution for integrating RERs, load demand flexibility, and energy storage systems within this framework. Nonetheless, renewable energy resources such as solar and wind energy can be extremely stochastic, as they are weather dependent. These resources, coupled with load demand uncertainties, lead to random variations on both the generation and load sides, thus challenging optimal energy management. This thesis develops an optimal energy management system (EMS) for a grid-tied solar PV-battery microgrid. The goal of the EMS is to minimize operational costs (the cost of power exchange with the utility and battery wear cost) while respecting network constraints, which ensure grid violations are avoided. A reinforcement learning (RL) approach is proposed to minimize the operational cost of the microgrid under this stochastic setting. RL is a reward-motivated optimization technique inspired by how animals learn to optimize their behaviour in new environments. Unlike conventional model-based optimization approaches, RL does not need an explicit model of the system to obtain optimal solutions. The EMS is modelled as a Markov Decision Process (MDP), defined by its state, action, and reward function.
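One step of the kind of MDP described above can be sketched as a transition function. The tariff structure, battery capacity, wear-cost coefficient, and sign convention below are invented for illustration; the thesis's exact state, action, and reward definitions may differ.

```python
import numpy as np

# Illustrative one-step MDP transition for a grid-tied PV-battery EMS.
# State: battery state of charge (soc, fraction), plus exogenous PV, load,
# and price. Action: battery power setpoint. Reward: negative operating cost
# (utility purchase cost plus a simple battery wear cost). All parameters
# are assumptions for this sketch.
def ems_step(soc, pv_kw, load_kw, price, action_kw, dt=1.0,
             capacity_kwh=100.0, wear_cost=0.02):
    """action_kw > 0 charges the battery, < 0 discharges it."""
    # clip the action so the state of charge stays within [0, 1]
    max_chg = (1.0 - soc) * capacity_kwh / dt
    max_dis = -soc * capacity_kwh / dt
    action_kw = float(np.clip(action_kw, max_dis, max_chg))

    soc_next = soc + action_kw * dt / capacity_kwh
    grid_kw = load_kw - pv_kw + action_kw        # power drawn from the utility
    # pay for imported energy only (non-trading case), plus wear on throughput
    cost = max(grid_kw, 0.0) * price * dt + abs(action_kw) * wear_cost * dt
    return soc_next, -cost                       # reward = negative cost
```

A trading variant would also credit `min(grid_kw, 0.0)` exports at a feed-in price, which is exactly the distinction the two algorithm variants below make.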
Two RL algorithms, the conventional Q-learning algorithm and the deep Q-network algorithm, are developed, and their efficacy in performing optimal energy management for the designed system is evaluated in this thesis. First, the energy management problem is expressed as a sequential decision-making process, after which two variants, a trading and a non-trading algorithm, are developed. In the trading case, the microgrid's excess energy can be sold back to the utility to increase revenue, while in the non-trading case constraining rules are embedded in the designed EMS to ensure that no excess energy is sold back to the utility. A Q-learning algorithm is then developed to minimize the operational cost of the microgrid under unknown future information. Finally, to evaluate the performance of the proposed EMS, a comparison between the trading and non-trading EMS models is performed using a typical commercial load curve and PV generation profile over a 24-hour horizon. Numerical simulation results indicate that the algorithm learns to select an optimized energy schedule that minimizes energy cost (the cost of power purchased from the utility under the time-varying tariff, plus battery wear cost) in both summer and winter case studies. Comparing operational costs, the trading EMS model decreased cost relative to the non-trading model by 4.033% in the summer season and 2.199% in the winter season. Secondly, a deep Q-network (DQN) method that uses recent learning enhancements, including experience replay and a target network, is developed to learn the system uncertainties, including load demand, grid prices, and volatile renewable power supply, and to solve the optimal energy management problem.
Unlike the Q-learning method, which updates the Q-function using a lookup table (limiting its scalability and overall performance in stochastic optimization), the DQN method uses a deep neural network that approximates the Q-function via statistical regression. The performance of the proposed method is evaluated with differently fluctuating load profiles, i.e., slow, medium, and fast. Simulation results substantiate the efficacy of the proposed method: the algorithm learns from experience to raise the battery state of charge and optimally shift loads from one time instance to another, thus supporting the utility grid by reducing aggregate peak load. Furthermore, the performance of the proposed DQN approach is compared to the conventional Q-learning algorithm in terms of achieving the minimum global cost. Simulation results show that the DQN algorithm outperforms the conventional Q-learning approach, reducing system operational costs by 15%, 24%, and 26% for the slow, medium, and fast fluctuating load profiles in the studied cases.
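The two DQN stabilisers named above, experience replay and a target network, can be sketched compactly. To stay self-contained this sketch uses a linear Q-approximator in place of a deep network; the dimensions, hyperparameters, and toy transition are all assumptions, not the thesis's configuration.

```python
import numpy as np

# Minimal sketch of experience replay + target network, the two DQN
# enhancements mentioned above, with a linear Q-approximator standing in
# for the deep network. All dimensions and hyperparameters are illustrative.
rng = np.random.default_rng(0)
state_dim, n_actions = 4, 3
W = np.zeros((n_actions, state_dim))         # online Q-network weights
W_target = np.zeros((n_actions, state_dim))  # frozen target-network weights
buffer, alpha, gamma = [], 0.01, 0.95

def store(s, a, r, s2):
    # bounded replay memory: old transitions are discarded first
    buffer.append((s, a, r, s2))
    if len(buffer) > 10_000:
        buffer.pop(0)

def train_step(batch_size=32):
    # sample a random minibatch, breaking correlation between transitions
    batch = [buffer[i] for i in rng.integers(len(buffer), size=batch_size)]
    for s, a, r, s2 in batch:
        # bootstrap the TD target from the *frozen* network for stability
        target = r + gamma * (W_target @ s2).max()
        W[a] += alpha * (target - W[a] @ s) * s   # SGD on the TD error

# periodically the target network is synced: W_target[:] = W
```

The periodic sync (and, in practice, gradient-based updates of a deep network) is what separates this from plain incremental Q-learning.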