
    Modeling and Controlling a Hybrid Multi-Agent based Microgrid in Presence of Different Physical and Cyber Components

    This dissertation starts with the modeling of two different and important parts of distribution power systems: the distribution line and photovoltaic (PV) systems. First, it studies different approximation methods and develops a new approach for simplifying Carson's equations to model distribution lines for unbalanced power flow and short-circuit analysis. The results of applying the proposed method to a three-phase unbalanced distribution system are compared with different existing methods as well as with actual impedance values obtained by numerical integration. Steady-state modeling and optimal placement of multiple PV systems are then investigated in order to reduce the total loss in the system. The results show the effectiveness of the proposed method in minimizing the total loss in a distribution power system.

    The dissertation then turns to microgrid modeling and control by implementing a novel frequency control approach. This study is carried out step by step, modeling different parts of the power system and proposing different algorithms. First, the application of Renewable Energy Sources (RES) together with Energy Storage Systems (ESS) in a hybrid system is studied in the presence of Distributed Generation (DG) resources for the Load Frequency Control (LFC) problem of a microgrid with significant penetration of wind speed disturbances. The next step investigates the effect of PHEVs on modeling and controlling the microgrid; to this end, systems with different penetrations and different stochastic behaviors of PHEVs are modeled. Different control approaches, including conventional PI control and the proposed optimal LQR and dynamic programming methods, are utilized and their results compared. A Multi-Agent System (MAS) is then utilized as a control solution that addresses the cyber aspects of the microgrid. The modeled microgrid, along with dynamic models of its components, is implemented in a centralized multi-agent structure. The robustness of the proposed controller is tested against different frequency changes, including cyber attacks of different timing and severity. A new learning-based attack detection method is also proposed and tested. The results show an improved frequency response of the microgrid using the proposed control method and defense strategy against cyber attacks.

    Finally, a new multi-agent based control method, together with an advanced secondary voltage and frequency control using Particle Swarm Optimization (PSO) and Adaptive Dynamic Programming (ADP), is proposed and tested on the modeled microgrid, considering nonlinear heterogeneous dynamic models of DGs. The results are compared with conventional control approaches and different multi-agent structures; the new multi-agent structure and secondary control method improve on both.

    In summary, the contributions of this dissertation center on three main topics. First, new, accurate methods for modeling distribution line impedance and PV systems are developed. Second, an advanced control and defense strategy for frequency regulation against cyber intrusions and load changes in a microgrid is proposed. Finally, a new hierarchical multi-agent based control algorithm is designed for secondary voltage and frequency control of the microgrid. (Abstract shortened by ProQuest.)
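    The abstract names optimal LQR control for the load frequency control problem. As a minimal sketch of that idea, the following computes an LQR gain for a simplified single-area LFC model; the state-space matrices, parameter values, and weights are illustrative assumptions, not the dissertation's actual model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Simplified single-area LFC model (illustrative values, not from the dissertation).
# States x = [delta_f, delta_Pm, delta_Pv]: frequency deviation,
# mechanical power deviation, governor valve position deviation.
M, D = 10.0, 1.0      # inertia and damping
Tt, Tg = 0.3, 0.1     # turbine and governor time constants
R = 0.05              # speed droop

A = np.array([[-D / M,           1.0 / M,  0.0],
              [0.0,             -1.0 / Tt, 1.0 / Tt],
              [-1.0 / (R * Tg),  0.0,     -1.0 / Tg]])
B = np.array([[0.0], [0.0], [1.0 / Tg]])   # supplementary control input

Q = np.diag([100.0, 1.0, 1.0])  # penalize frequency deviation most heavily
Rw = np.array([[1.0]])          # control effort weight

# Solve the continuous-time algebraic Riccati equation and form the LQR gain.
P = solve_continuous_are(A, B, Q, Rw)
K = np.linalg.solve(Rw, B.T @ P)  # state feedback u = -K x

print("LQR gain:", K)
```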

    The Modeling and Advanced Controller Design of Wind, PV and Battery Inverters

    Renewable energies such as wind power and solar energy have become alternatives to fossil energy due to improved energy security and sustainability. This trend has led to the rapid growth of wind and photovoltaic (PV) farm installations worldwide. Power electronic equipment is commonly employed to interface renewable energy generation with the grid. The intermittent nature of renewables and the large-scale use of power electronic devices bring numerous challenges to system operation and design. This Ph.D. dissertation proposes methods for studying and improving the grid interconnection of renewable energy sources such as wind and PV.

    A multi-objective controller is proposed for the PV inverter to perform voltage flicker suppression, harmonic reduction, and unbalance compensation. A novel supervisory control scheme is designed to coordinate PV and battery inverters to provide high-quality power to the grid. This control scheme provides a comprehensive solution to both the active and reactive power issues caused by the intermittency of PV energy. A novel real-time experimental method for connecting a physical PV panel and battery storage is proposed, and the coordinated controller is tested on a Hardware-in-the-Loop (HIL) experimental platform based on a Real Time Digital Simulator (RTDS).

    This work also explores the operation and controller design of a microgrid consisting of a direct-drive wind generator and a battery storage system. A Model Predictive Control (MPC) strategy for the AC-DC-AC converter of the wind system is derived and implemented to capture the maximum wind energy as well as provide the desired reactive power. The MPC increases the accuracy of maximum wind energy capture and minimizes the power oscillations caused by varying wind speed. An advanced supervisory controller is presented and employed to ensure power balance while regulating the PCC bus voltage within an acceptable range in both grid-connected and islanded operation.

    The high variability and uncertainty of renewable energies introduce unexpectedly fast power variations, so operating conditions change continuously in distribution networks. A three-layer advanced optimization and intelligent control algorithm for a microgrid with multiple renewable resources is proposed. A Dual Heuristic Programming (DHP) based system control layer ensures the dynamic reliability and voltage stability of the entire microgrid as the system operating condition changes. A local layer maximizes the capability of the photovoltaic (PV) and wind power generators and battery systems, and a Model Predictive Control (MPC) based device layer increases the tracking accuracy of the converter control. The detailed design of the proposed SWAPSC scheme is presented and tested on an IEEE 13-node feeder with a PV farm, a wind farm, and two battery-based energy storage systems.
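    The abstract describes MPC for converter-level control. Below is a minimal receding-horizon sketch for a generic discrete-time linear plant; the model matrices, horizon, and the unconstrained finite-horizon solution via backward Riccati recursion are illustrative assumptions, not the converter model used in the dissertation.

```python
import numpy as np

def mpc_gain(A, B, Q, R, horizon):
    """Unconstrained finite-horizon MPC: backward Riccati recursion
    returning the first-step feedback gain (receding-horizon use)."""
    P = Q.copy()
    K = None
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K  # apply u = -K x, then re-solve at the next step

# Illustrative 2-state discrete-time plant (not the actual converter model).
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

x = np.array([1.0, 0.0])          # initial deviation from the reference
for step in range(5):
    K = mpc_gain(A, B, Q, R, horizon=20)
    u = -K @ x                    # first move of the optimal input sequence
    x = A @ x + B @ u             # plant update
    print(step, x, u)
```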

    Reinforcement Learning and Its Applications in Modern Power and Energy Systems: A Review


    Deep Reinforcement Learning for Control of Microgrids: A Review

    A microgrid is widely accepted as a prominent solution for enhancing resilience and performance in distributed power systems. Microgrids offer flexibility for adding distributed energy resources (DERs) to the ecosystem of electrical networks. Control techniques are used to synchronize DERs because of their turbulent nature. DERs, including AC, DC, and hybrid loads with storage systems, are used in microgrids so frequently that controlling the flow of energy with traditional control approaches has become a complex task. Both distributed and centralized control architectures are well-known methods for regulating frequency and voltage in microgrids. Recently, techniques based on artificial intelligence, broadly categorized as machine learning and deep learning, have been applied to problems arising in the operation and control of modern microgrids and smart grids. The objective of this research is to survey the latest microgrid control strategies that use deep reinforcement learning (DRL). Other artificial intelligence techniques have already been reviewed extensively, but the use of DRL has grown over the past couple of years. To bridge this gap, this survey focuses solely on DRL techniques for microgrid control; approaches to voltage control and frequency regulation with distributed, cooperative, and multi-agent structures are presented.
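    As an illustration of how frequency regulation is typically cast as a DRL problem in the surveyed literature, here is a minimal environment sketch; the state, action, reward, and dynamics definitions are generic assumptions for illustration, not taken from any specific paper in the review.

```python
import numpy as np

class FrequencyRegEnv:
    """Toy single-bus frequency-regulation environment (illustrative only).
    State: frequency deviation delta_f; action: controllable power adjustment."""

    def __init__(self, M=10.0, D=1.0, dt=0.1):
        self.M, self.D, self.dt = M, D, dt
        self.delta_f = 0.0

    def reset(self):
        self.delta_f = np.random.uniform(-0.5, 0.5)
        return np.array([self.delta_f])

    def step(self, action):
        # Swing-equation-style update with a random load disturbance.
        disturbance = np.random.normal(0.0, 0.05)
        d_f = (action - self.D * self.delta_f + disturbance) / self.M
        self.delta_f += self.dt * d_f
        reward = -abs(self.delta_f)          # penalize frequency deviation
        done = abs(self.delta_f) > 1.0       # episode ends on large excursions
        return np.array([self.delta_f]), reward, done, {}
```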

    Fusion of Model-free Reinforcement Learning with Microgrid Control: Review and Vision

    Challenges and opportunities coexist in microgrids as a result of emerging large-scale distributed energy resources (DERs) and advanced control techniques. In this paper, a comprehensive review of microgrid control is presented together with its fusion with model-free reinforcement learning (MFRL). A high-level research map of microgrid control is developed from six distinct perspectives, followed by bottom-level modularized control blocks illustrating the configurations of grid-following (GFL) and grid-forming (GFM) inverters. Then, mainstream MFRL algorithms are introduced with an explanation of how MFRL can be integrated into the existing control framework. Next, application guidelines for MFRL are summarized with a discussion of three approaches for fusing it with the existing control framework: model identification and parameter tuning, supplementary signal generation, and controller substitution. Finally, the fundamental challenges associated with adopting MFRL in microgrid control, and insights for addressing them, are fully discussed.

    Comment: 14 pages, 4 figures, published in IEEE Transactions on Smart Grid, 15 Nov 2022. See: https://ieeexplore-ieee-org.utk.idm.oclc.org/stamp/stamp.jsp?arnumber=995140
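    Of the three fusing approaches the paper lists, supplementary signal generation is the simplest to illustrate: an RL policy adds a corrective term on top of a conventional controller's output. The sketch below is a generic illustration under assumed interfaces (the `policy` callable and PI structure are hypothetical), not the paper's implementation.

```python
class PIController:
    """Conventional PI loop; the RL policy only augments its output."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def output(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def fused_control(pi, policy, error, state):
    """Supplementary-signal fusion: u = u_PI + u_RL.
    `policy` is any trained MFRL policy mapping state -> scalar correction."""
    u_pi = pi.output(error)
    u_rl = policy(state)          # learned supplementary signal
    return u_pi + u_rl
```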

    Optimal energy management for a grid-tied solar PV-battery microgrid: A reinforcement learning approach

    There has been a shift towards energy sustainability in recent years, and this shift should continue. The steady growth of energy demand due to population growth, heightened concern about the quantity of anthropogenic gases released into the atmosphere, and the deployment of advanced grid technologies have spurred the penetration of renewable energy resources (RERs) at different locations and scales in the power grid. As a result, the energy system is moving away from the centralized paradigm of large, controllable power plants and toward a decentralized network based on renewables. Microgrids, either grid-connected or islanded, provide a key solution for integrating RERs, load demand flexibility, and energy storage systems within this framework. Nonetheless, renewable energy resources such as solar and wind can be extremely stochastic, as they are weather dependent. These resources, coupled with load demand uncertainties, lead to random variations on both the generation and load sides, challenging optimal energy management. This thesis develops an optimal energy management system (EMS) for a grid-tied solar PV-battery microgrid. The goal of the EMS is to minimize operational costs (the cost of power exchange with the utility plus battery wear cost) while respecting network constraints, which ensure grid violations are avoided. A reinforcement learning (RL) approach is proposed to minimize the operational cost of the microgrid in this stochastic setting. RL is a reward-motivated optimization technique derived from how animals learn to optimize their behaviour in new environments. Unlike conventional model-based optimization approaches, RL does not need an explicit model of the optimization system to obtain optimal solutions. The EMS is modelled as a Markov Decision Process (MDP) with an appropriate state, action, and reward function. Two RL algorithms, a conventional Q-learning algorithm and a deep Q network algorithm, are developed, and their efficacy in performing optimal energy management for the designed system is evaluated in this thesis. First, the energy management problem is expressed as a sequential decision-making process, after which two variants, a trading and a non-trading algorithm, are developed. In the trading case, the microgrid's excess energy can be sold back to the utility to increase revenue, while in the non-trading case constraining rules are embedded in the EMS to ensure that no excess energy is sold back to the utility. A Q-learning algorithm is then developed to minimize the operational cost of the microgrid under unknown future information. Finally, to evaluate the performance of the proposed EMS, the trading and non-trading EMS models are compared using a typical commercial load curve and PV generation profile over a 24-hour horizon. Numerical simulation results indicate that the algorithm learns to select an optimized energy schedule that minimizes energy cost (the cost of power purchased from the utility under the time-varying tariff plus battery wear cost) in both summer and winter case studies. Comparing operational costs, the trading EMS model decreased cost relative to the non-trading model by 4.033% in the summer season and 2.199% in the winter season.
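    A minimal sketch of the tabular Q-learning update the thesis builds on, applied to a battery-scheduling state-action space, follows; the state/action discretization, hyperparameters, and the `env.step` interface are illustrative assumptions, not the thesis's actual design.

```python
import numpy as np

# Illustrative discretization: battery state-of-charge bins x time-of-day
# bins for the state, and {charge, idle, discharge} for the action.
n_soc, n_hours, n_actions = 10, 24, 3
Q = np.zeros((n_soc, n_hours, n_actions))

alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration

def q_learning_step(state, env):
    soc, hour = state
    # Epsilon-greedy action selection.
    if np.random.rand() < epsilon:
        action = np.random.randint(n_actions)
    else:
        action = int(np.argmax(Q[soc, hour]))
    # `env.step` returns the next state and the negative operating cost
    # (utility purchase + battery wear) as the reward -- assumed interface.
    (soc2, hour2), reward = env.step(state, action)
    # Standard Q-learning temporal-difference update.
    td_target = reward + gamma * np.max(Q[soc2, hour2])
    Q[soc, hour, action] += alpha * (td_target - Q[soc, hour, action])
    return (soc2, hour2)
```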
    Secondly, a deep Q network (DQN) method that uses recent learning enhancements, including experience replay and a target network, is developed to learn the system uncertainties, including load demand, grid prices, and the volatile power supply from renewables, and to solve the optimal energy management problem. Unlike the Q-learning method, which updates the Q-function using a lookup table (limiting its scalability and overall performance in stochastic optimization), the DQN method uses a deep neural network that approximates the Q-function via statistical regression. The performance of the proposed method is evaluated with differently fluctuating load profiles, i.e., slow, medium, and fast. Simulation results substantiate the efficacy of the proposed method: the algorithm learns from experience to raise the battery state of charge and optimally shift loads from one time instance to another, thus supporting the utility grid by reducing the aggregate peak load. Furthermore, the performance of the proposed DQN approach is compared to the conventional Q-learning algorithm in terms of achieving a minimum global cost. Simulation results show that the DQN algorithm outperforms the conventional Q-learning approach, reducing system operational costs by 15%, 24%, and 26% for the slow, medium, and fast fluctuating load profiles in the studied cases.
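    The two enhancements named here, experience replay and a target network, look roughly like the following minimal PyTorch sketch; the network size, hyperparameters, and transition format are illustrative assumptions, not the thesis's implementation.

```python
import random
from collections import deque

import torch
import torch.nn as nn

n_state, n_actions = 4, 3                 # illustrative dimensions

q_net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())   # target starts as a copy

optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                    # experience replay buffer
gamma, batch_size = 0.99, 64

def train_step():
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)    # sampling breaks temporal correlation
    s, a, r, s2, done = map(torch.as_tensor, zip(*batch))
    q = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():                        # bootstrap from the frozen target net
        q_next = target_net(s2.float()).max(dim=1).values
        target = r.float() + gamma * q_next * (1 - done.float())
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Periodically (e.g., every few hundred steps), refresh the target network:
# target_net.load_state_dict(q_net.state_dict())
```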

    Reinforcement learning for power scheduling in a grid-tied PV-battery electric vehicles charging station

    Grid-tied renewable energy source (RES) based electric vehicle (EV) charging stations are an example of a distributed-generator behind-the-meter system (DGBMS), which characterizes much of modern power infrastructure. To perform power scheduling in such a DGBMS, stochastic variables such as the load profile of the charging station, the output profile of the RES, and the tariff profile of the utility must be considered at every decision step. The stochasticity of this kind of optimization environment makes power scheduling a challenging task that deserves substantial research attention. This dissertation investigates the application of reinforcement learning (RL) techniques to the power scheduling problem in a grid-tied PV-powered EV charging station incorporating a battery energy storage system. RL is a reward-motivated optimization technique derived from the way animals learn to optimize their behavior in a new environment. Unlike other optimization methods such as numerical and soft computing techniques, RL does not require an accurate model of the optimization environment in order to arrive at an optimal solution. This study developed two RL algorithms, an asynchronous Q-learning algorithm and an advantage actor-critic (A2C) algorithm, and evaluated their feasibility for power scheduling in the EV charging station under static conditions. To assess the performance of the proposed algorithms, the conventional Q-learning and actor-critic algorithms were implemented for comparison of global cost convergence and learning characteristics. First, the power scheduling problem was expressed as a sequential decision-making process, and an asynchronous Q-learning algorithm was developed to solve it. An advantage actor-critic (A2C) algorithm was then developed and applied to the same problem. The two algorithms were tested using 24-hour load, generation, and utility grid tariff profiles under static optimization conditions. The performance of the asynchronous Q-learning algorithm was compared with that of the conventional Q-learning method in terms of global cost, stability, and scalability; likewise, the A2C algorithm was compared with the conventional actor-critic method in terms of stability, scalability, and convergence. Simulation results showed that both developed algorithms converged to lower global costs and displayed more stable learning characteristics than their conventional counterparts. This research established that properly restricting the action space of a Q-learning algorithm improves its stability and convergence, although such restriction may come at the cost of computational speed and scalability. Of the four algorithms analyzed, the A2C produced the power schedule with the lowest global cost and the best usage of the battery energy storage system.
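    The finding that restricting the action space stabilizes Q-learning can be illustrated with a simple feasibility mask applied before action selection; the masking rule, action encoding, and interfaces below are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def feasible_actions(soc, soc_min=0.1, soc_max=0.9):
    """Restrict {0: charge, 1: idle, 2: discharge} by battery limits:
    forbid charging at full SOC and discharging at empty SOC."""
    actions = [1]                      # idle is always feasible
    if soc < soc_max:
        actions.append(0)              # room left to charge
    if soc > soc_min:
        actions.append(2)              # energy available to discharge
    return actions

def select_action(Q_row, soc, epsilon=0.1):
    """Epsilon-greedy selection restricted to the feasible action set."""
    allowed = feasible_actions(soc)
    if np.random.rand() < epsilon:
        return int(np.random.choice(allowed))
    # Greedy over allowed actions only: infeasible actions are never picked,
    # which shrinks the search space and avoids penalty-driven instability.
    return max(allowed, key=lambda a: Q_row[a])
```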