Multi-Microgrid Collaborative Optimization Scheduling Using an Improved Multi-Agent Soft Actor-Critic Algorithm
A multi-microgrid (MMG) system with multiple renewable energy sources facilitates electricity trading. To tackle the energy management problem of an MMG system consisting of multiple renewable energy microgrids belonging to different operating entities, this paper proposes an MMG collaborative optimization scheduling model based on a multi-agent centralized-training, distributed-execution framework. To enhance generalization in the face of various uncertainties, we also propose an improved multi-agent soft actor-critic (MASAC) algorithm, which facilitates energy transactions between the agents in the MMG and employs automated machine learning (AutoML) to optimize the MASAC hyperparameters, further improving the generalization of deep reinforcement learning (DRL). The test results demonstrate that the proposed method successfully achieves power complementarity between different entities and reduces the MMG system operating cost. Additionally, the proposal significantly outperforms other state-of-the-art reinforcement learning algorithms, with better economy and higher computational efficiency.
Comment: Accepted by Energie
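At the core of any soft actor-critic variant, including multi-agent ones like MASAC, is the entropy-regularized Bellman target. The sketch below is a generic illustration under a centralized-critic setup, not the paper's implementation; the reward, critic, and log-probability values are made up:

```python
import numpy as np

def soft_q_target(rewards, next_q_joint, next_log_probs, gamma=0.99, alpha=0.2):
    """Entropy-regularized target: y_i = r_i + gamma * (Q_i' - alpha * log pi_i).

    rewards        -- (n_agents,) per-agent rewards for one joint transition
    next_q_joint   -- (n_agents,) centralized-critic values Q_i(s', a'_1..a'_n)
    next_log_probs -- (n_agents,) log pi_i(a'_i | o'_i) from each local actor
    """
    return rewards + gamma * (next_q_joint - alpha * next_log_probs)

# Hypothetical two-agent transition
y = soft_q_target(np.array([1.0, 0.5]),
                  np.array([2.0, 1.0]),
                  np.array([-0.7, -0.2]))
```

The alpha term rewards policies that stay stochastic, which is what the "soft" in soft actor-critic refers to; AutoML tuning as described in the abstract would search over hyperparameters such as gamma and alpha.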
Attributes of Big Data Analytics for Data-Driven Decision Making in Cyber-Physical Power Systems
Big data analytics is a virtually new term in power system terminology. This concept delves into the way a massive volume of data is acquired, processed, analyzed to extract insight from available data. In particular, big data analytics alludes to applications of artificial intelligence, machine learning techniques, data mining techniques, time-series forecasting methods. Decision-makers in power systems have been long plagued by incapability and weakness of classical methods in dealing with large-scale real practical cases due to the existence of thousands or millions of variables, being time-consuming, the requirement of a high computation burden, divergence of results, unjustifiable errors, and poor accuracy of the model. Big data analytics is an ongoing topic, which pinpoints how to extract insights from these large data sets. The extant article has enumerated the applications of big data analytics in future power systems through several layers from grid-scale to local-scale. Big data analytics has many applications in the areas of smart grid implementation, electricity markets, execution of collaborative operation schemes, enhancement of microgrid operation autonomy, management of electric vehicle operations in smart grids, active distribution network control, district hub system management, multi-agent energy systems, electricity theft detection, stability and security assessment by PMUs, and better exploitation of renewable energy sources. The employment of big data analytics entails some prerequisites, such as the proliferation of IoT-enabled devices, easily-accessible cloud space, blockchain, etc. This paper has comprehensively conducted an extensive review of the applications of big data analytics along with the prevailing challenges and solutions
Contingency-constrained economic dispatch with safe reinforcement learning
Future power systems will rely heavily on micro grids with a high share of
decentralised renewable energy sources and energy storage systems. The high
complexity and uncertainty in this context might make conventional power
dispatch strategies infeasible. Reinforcement learning (RL) based controllers
can address this challenge; however, they cannot themselves provide safety
guarantees, which prevents their deployment in practice. To overcome this
limitation, we propose a formally validated RL controller for economic
dispatch. We extend conventional constraints by a time-dependent constraint
encoding the islanding contingency. The contingency constraint is computed
using set-based backwards reachability analysis and actions of the RL agent are
verified through a safety layer. Unsafe actions are projected into the safe
action space while leveraging constrained zonotope set representations for
computational efficiency. The developed approach is demonstrated on a
residential use case using real-world measurements.
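A safety layer of this kind replaces an unsafe action with its projection onto the safe action set. The paper uses constrained zonotope representations; the sketch below uses an axis-aligned box (the simplest zonotope), for which the Euclidean projection reduces to element-wise clipping. This is an illustrative simplification, not the authors' method, and the dispatch values are made up:

```python
import numpy as np

def safe_action(action, lower, upper):
    """Project an RL action onto a box-shaped safe set.

    For an axis-aligned box, the Euclidean projection is element-wise
    clipping; an already-safe action passes through unchanged.
    """
    return np.clip(action, lower, upper)

# Hypothetical two-unit dispatch action (kW) vs. box limits [-2, 2]
a = safe_action(np.array([5.0, -3.0]), lower=-2.0, upper=2.0)
```

For general constrained zonotopes the projection is a small quadratic program rather than a clip, but the interface is the same: the RL agent proposes, the safety layer verifies and, if necessary, projects.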
Correlated Deep Q-learning based Microgrid Energy Management
Microgrid (MG) energy management is an important part of MG operation.
Various entities are generally involved in the energy management of an MG,
e.g., energy storage system (ESS), renewable energy resources (RER) and the
load of users, and it is crucial to coordinate these entities. Considering the
significant potential of machine learning techniques, this paper proposes a
correlated deep Q-learning (CDQN) based technique for MG energy management.
Each electrical entity is modeled as an agent which has a neural network to
predict its own Q-values, after which the correlated Q-equilibrium is used to
coordinate the operation among agents. In this paper, a Long Short-Term
Memory (LSTM) based deep Q-learning algorithm is introduced and a
correlated equilibrium is proposed to coordinate agents. The simulation results
show 40.9% and 9.62% higher profit for the ESS agent and photovoltaic (PV)
agent, respectively.
Comment: Accepted by the 2020 IEEE 25th International Workshop on CAMAD
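The coordination step in correlated-Q approaches selects a joint action from the agents' Q-values rather than letting each agent maximize independently. A minimal sketch of one common selection criterion, the utilitarian rule (maximize the summed Q-values); this is a generic illustration, not the paper's exact equilibrium computation, and the toy Q-tables are made up:

```python
import numpy as np

def utilitarian_joint_action(q_tables):
    """Select the joint action maximizing the agents' summed Q-values.

    q_tables: one array per agent over the *joint* action space, e.g.
    shape (n_actions_agent1, n_actions_agent2). The utilitarian rule is
    one common criterion for correlated-Q coordination.
    """
    total = sum(q_tables)                        # social welfare per joint action
    return np.unravel_index(np.argmax(total), total.shape)

# Toy 2-agent, 2-action example (hypothetical ESS and PV agents)
q_ess = np.array([[1.0, 0.0], [0.0, 2.0]])
q_pv  = np.array([[0.0, 1.0], [3.0, 0.0]])
joint = utilitarian_joint_action([q_ess, q_pv])
```

A full correlated equilibrium is a probability distribution over joint actions found by linear programming; the deterministic argmax above is the degenerate special case that suffices to show how agents' Q-values are combined.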
A multi-agent privacy-preserving energy management framework for renewable networked microgrids
This paper proposes a fully distributed scheme to solve the day-ahead optimal power scheduling of networked microgrids in the presence of different renewable energy resources, such as photovoltaics and wind turbines, while considering energy storage systems. The proposed method optimizes the power scheduling problem through local computation by agents in the system and private communication between them, without any centralized scheduling unit. A cloud-fog-based framework is also introduced as a fast and economical infrastructure for the proposed distributed method. The suggested framework provides an area to regulate and update policies, detect misbehaving elements, and execute punishments centrally, while the general power scheduling problem is optimized in a distributed manner using the proposed method. The cloud-fog-based approach eliminates the need to invest in local databases and computing systems. The scheme is examined on a small-scale microgrid and on a larger test networked microgrid comprising 4 microgrids and 15 areas over a 24-h period, to illustrate the scalability, convergence, and accuracy of the framework. The simulation results substantiate the fast and precise performance of the proposed framework for networked microgrids compared with existing centralized and distributed methods.
© 2023 The Authors. IET Generation, Transmission & Distribution published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
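Fully distributed schedulers of this kind rely on agents agreeing on shared quantities through neighbour-to-neighbour communication only. A minimal consensus-averaging sketch of that primitive (standard distributed averaging, not the paper's exact algorithm; the communication graph and local values are made up):

```python
import numpy as np

def consensus_average(values, mixing, n_iters=50):
    """Distributed averaging: each agent repeatedly mixes its local value
    with its neighbours' via a doubly stochastic matrix. On a connected
    graph every agent converges to the network-wide average, with no
    central coordinator ever seeing the raw local data."""
    x = np.asarray(values, dtype=float)
    for _ in range(n_iters):
        x = mixing @ x          # one round of neighbour communication
    return x

# 3 agents on a path graph with Metropolis weights (doubly stochastic)
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
x = consensus_average([3.0, 0.0, 6.0], W)    # network average is 3.0
```

Day-ahead scheduling adds optimization on top of this primitive (e.g. consensus-based ADMM), but the privacy argument is the same: only aggregated values, not private cost data, cross microgrid boundaries.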
Modelling and Simulation Approaches for Local Energy Community Integrated Distribution Networks
Due to the absence of studies of local energy communities (LECs) in which the grid is represented, it is difficult to infer the implications of increased LEC integration for the distribution grid and for wider society. This paper therefore investigates holistic modelling and simulation approaches for LECs. To conduct a quantifiable assessment of different control architectures, LEC types, and market frameworks, a flexible and comprehensive LEC modelling and simulation approach is needed. Modelling LECs and the environment they operate in requires a holistic approach consisting of different layers: market, controller, and grid. The controller layer is relevant both for the overall energy management system of the LEC and for the controllers of single components in an LEC. In this paper, the different LEC modelling approaches in the reviewed literature are presented, several multilayered concepts for LECs are proposed, and a case study illustrates a holistic simulation in which the different layers interact.
Optimal energy management for a grid-tied solar PV-battery microgrid: A reinforcement learning approach
There has been a shift towards energy sustainability in recent years, and this shift should continue. The steady growth of energy demand driven by population growth, heightened concerns about the quantity of anthropogenic gases released into the atmosphere, and the deployment of advanced grid technologies have spurred the penetration of renewable energy resources (RERs) at different locations and scales in the power grid. As a result, the energy system is moving away from the centralized paradigm of large, controllable power plants and toward a decentralized network based on renewables. Microgrids, either grid-connected or islanded, provide a key solution for integrating RERs, load demand flexibility, and energy storage systems within this framework. Nonetheless, renewable energy resources such as solar and wind can be highly stochastic, as they are weather dependent. Coupled with load demand uncertainties, these resources lead to random variations on both the generation and load sides, challenging optimal energy management. This thesis develops an optimal energy management system (EMS) for a grid-tied solar PV-battery microgrid. The goal of the EMS is to minimize operational costs (the cost of power exchange with the utility plus battery wear cost) while respecting network constraints, ensuring that grid violations are avoided. A reinforcement learning (RL) approach is proposed to minimize the operational cost of the microgrid in this stochastic setting. RL is a reward-motivated optimization technique inspired by how animals learn to optimize their behaviour in new environments. Unlike conventional model-based optimization approaches, RL does not need an explicit model of the system to obtain optimal solutions. The EMS is modelled as a Markov Decision Process (MDP), defined by its states, actions, and reward function.
Two RL algorithms, a conventional Q-learning algorithm and a deep Q network algorithm, are developed, and their efficacy in performing optimal energy management for the designed system is evaluated in this thesis. First, the energy management problem is expressed as a sequential decision-making process, after which two variants, a trading and a non-trading algorithm, are developed. In the trading case, the microgrid's excess energy can be sold back to the utility to increase revenue, while in the non-trading case constraining rules embedded in the designed EMS ensure that no excess energy is sold back to the utility. A Q-learning algorithm is then developed to minimize the operational cost of the microgrid under unknown future information. Finally, to evaluate the performance of the proposed EMS, a comparison between the trading and non-trading EMS models is performed using a typical commercial load curve and PV generation profile over a 24-hour horizon. Numerical simulation results indicate that the algorithm learns to select an optimized energy schedule that minimizes energy cost (the cost of power purchased from the utility under a time-varying tariff, plus battery wear cost) in both summer and winter case studies. Compared with the non-trading EMS, the trading EMS model decreased cost by 4.033% in the summer season and 2.199% in the winter season. Secondly, a deep Q network (DQN) method that uses recent learning enhancements, including experience replay and a target network, is developed to learn the system uncertainties, including load demand, grid prices, and volatile renewable supply, and to solve the optimal energy management problem.
Unlike the Q-learning method, which updates the Q-function using a lookup table (limiting its scalability and overall performance in stochastic optimization), the DQN method uses a deep neural network that approximates the Q-function via statistical regression. The performance of the proposed method is evaluated with load profiles of different volatility: slow, medium, and fast fluctuating. Simulation results substantiate the efficacy of the proposed method, as the algorithm learns from experience to raise the battery state of charge and optimally shift loads in time, thus helping the utility grid reduce aggregate peak load. Furthermore, the proposed DQN approach was compared with the conventional Q-learning algorithm in terms of achieving a minimum global cost. Simulation results show that the DQN algorithm outperformed conventional Q-learning, reducing system operational costs by 15%, 24%, and 26% for the slow, medium, and fast fluctuating load profiles, respectively.
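The tabular update underlying the thesis's first algorithm is the standard Q-learning rule. A minimal sketch on a toy EMS state space; the states, actions, and reward value are hypothetical, and a real EMS would use discretized battery state-of-charge, tariff, and demand levels:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Toy EMS: 2 states (e.g. battery low/high), 2 actions (charge/discharge)
Q = np.zeros((2, 2))
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)   # reward value is made up
```

The DQN variant described above replaces the `Q` table with a neural network trained by regression on the same temporal-difference target, which is what lets it scale beyond small discretized state spaces.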