36 research outputs found
Deep Reinforcement Learning for the Control of Energy Storage in Grid-Scale and Microgrid Applications
The European and worldwide directives and targets for renewable energy integration, motivated by the imminent need to decarbonize the electricity sector, are imposing severe changes to the conventional electrical power system. The inherent unpredictability of the instantaneous energy production from variable renewable energy sources (VRES) is expected to make the reliable and secure operation of the system a challenging task. Flexibility, and in particular energy storage, is expected to assume a key role in the integration of large shares of VRES in the power system, and thus in the transition towards a carbon-free electricity sector. One of the main storage mechanisms that can facilitate the integration of VRES is energy arbitrage, i.e. the transfer of electrical energy from a period of low demand to another period of high demand. In this thesis, we investigate and develop novel operating strategies for maximizing the value of energy arbitrage from storage units at different scales (i.e. grid-scale or distributed) and in different settings (i.e. interconnected or off-grid). The decision-making process of an operator optimizing the energy arbitrage value of storage is an inherently complex problem, mainly due to uncertainties induced by: i) the stochasticity of market prices and ii) the variability of renewable generation. In view of the great successes of deep reinforcement learning (DRL) in solving challenging tasks, the goal of this thesis is to investigate its potential in solving problems related to the control of storage in modern energy systems.
Firstly, we address the energy arbitrage problem of a storage unit that participates in the European Continuous Intraday (CID) market, and we develop an operational strategy in order to maximize its arbitrage value. A novel modeling framework for the strategic participation of energy storage in the European CID market is proposed, where exchanges occur through a process similar to the stock market. A detailed description of the market mechanism and the storage system management is provided. A set of necessary simplifications that renders the problem tractable is described. The resulting problem is solved using a state-of-the-art DRL algorithm. The outcome of the proposed method is compared with state-of-the-art industrial practice, and the resulting policy is found to outperform this benchmark.
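The storage arbitrage problem described above can be illustrated as a small Markov Decision Process. The following is a minimal, hypothetical sketch, not the thesis' market model: the class name, parameter values, and simplifications (a single exogenous price signal, a three-valued action, no bid/ask mechanics) are our own illustrative assumptions.

```python
# Toy MDP for energy arbitrage: buy (charge) at low prices, sell (discharge)
# at high prices. Illustrative only; the thesis models the continuous intraday
# market mechanism in far more detail.

class StorageArbitrageEnv:
    def __init__(self, prices, capacity=1.0, power=0.5, efficiency=0.9):
        self.prices = prices      # exogenous price trajectory (e.g. EUR/MWh)
        self.capacity = capacity  # energy capacity (MWh)
        self.power = power        # max energy moved per step (MWh)
        self.eff = efficiency     # efficiency loss applied on charging
        self.reset()

    def reset(self):
        self.t = 0
        self.soc = 0.0            # state of charge
        return (self.soc, self.prices[0])

    def step(self, action):
        """action in {-1, 0, +1}: discharge, idle, charge.
        Returns (state, reward, done)."""
        price = self.prices[self.t]
        if action == 1:           # charge: buy energy at the market price
            e = min(self.power, self.capacity - self.soc)
            self.soc += e * self.eff
            reward = -price * e
        elif action == -1:        # discharge: sell energy at the market price
            e = min(self.power, self.soc)
            self.soc -= e
            reward = price * e
        else:
            reward = 0.0
        self.t += 1
        done = self.t >= len(self.prices)
        state = (self.soc, self.prices[self.t] if not done else price)
        return state, reward, done
```

A DRL agent would learn a mapping from the state (state of charge, price information) to the action; even a simple price-threshold policy earns a positive profit on a low-then-high price series in this toy setting.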
Secondly, we address the energy arbitrage problem faced by an off-grid microgrid operator in the context of rural electrification. In particular, we propose a novel model-based reinforcement learning algorithm that is able to control the storage device in order to accommodate the different changes that might occur over the microgrid lifetime. The algorithm demonstrates generalisation properties, transfer capabilities and improved robustness to fast-changing system dynamics. The proposed algorithm is compared against two benchmarks, namely a rule-based controller and a model predictive controller (MPC). The results show that the trained agent is able to outperform both benchmarks in the lifelong setting where the system dynamics are changing over time.
In the context of an off-grid microgrid, the optimal size of the components (i.e. the capacity of the photovoltaic (PV) panels and of the storage) depends heavily on the control policy applied. In this thesis, we propose a new methodology, based on reinforcement learning, for jointly sizing a system and designing its control law. The objective of the optimization problem is to jointly find a control policy and an environment over the joint hypothesis space of parameters such that the sum of the initial investment and the operational cost is minimized. The optimization problem is then addressed by generalizing the direct policy search algorithms to an algorithm we call Direct Environment Search with (projected stochastic) Gradient Ascent (DESGA). We illustrate the performance of DESGA on two benchmarks. First, we consider a parametrized space of Mass-Spring-Damper (MSD) environments and control policies. Then, we use our algorithm for optimizing the size of the components and the operation of a small-scale autonomous energy system, i.e. a solar off-grid microgrid composed of PV panels and batteries. On both benchmarks, we show that DESGA results in a set of parameters for which the expected return is nearly equal to its theoretical upper bound.
Finally, we provide the general conclusions and remarks of this thesis, and we propose a list of future research directions that emerge as an outcome of this work.
Lifelong Control of Off-grid Microgrid with Model Based Reinforcement Learning
The lifelong control problem of an off-grid microgrid is composed of two tasks, namely estimation of the condition of the microgrid devices and operational planning that accounts for uncertainty by forecasting the future consumption and the renewable production. The main challenge for effective control arises from the various changes that take place over time. In this paper, we present an open-source reinforcement learning framework for the modeling of an off-grid microgrid for rural electrification. The lifelong control problem of an isolated microgrid is formulated as a Markov Decision Process (MDP). We categorize the set of changes that can occur into progressive and abrupt changes. We propose a novel model-based reinforcement learning algorithm that is able to address both types of changes. In particular, the proposed algorithm demonstrates generalisation properties, transfer capabilities and improved robustness to fast-changing system dynamics. The proposed algorithm is compared against a rule-based policy and a model predictive controller with look-ahead. The results show that the trained agent is able to outperform both benchmarks in the lifelong setting where the system dynamics are changing over time.
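The model-based control loop underlying this kind of approach can be sketched in a few lines. This is an illustrative sketch under our own assumptions, not the paper's algorithm: a learned one-step dynamics model is rolled out over candidate action sequences, and because the model can keep being refit online, the controller can in principle track both progressive drift and abrupt changes in the system dynamics.

```python
# Hedged sketch of a model-based planning step: score candidate action
# sequences with a learned dynamics/reward model and pick the best one.
# The paper's actual algorithm differs; this only shows the general shape.

def plan(model, state, candidate_sequences):
    """Return the action sequence with the best predicted cumulative reward."""
    best_seq, best_ret = None, float("-inf")
    for seq in candidate_sequences:
        s, ret = state, 0.0
        for a in seq:
            s, r = model(s, a)  # model predicts next state and reward
            ret += r
        if ret > best_ret:
            best_seq, best_ret = seq, ret
    return best_seq
```

With a toy model such as s' = s + a and reward -|s'|, the planner correctly steers the state towards zero; in the microgrid setting the model would instead predict battery, load and production dynamics.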
Learning optimal environments using projected stochastic gradient ascent
In this work, we propose a new methodology for jointly sizing a dynamical
system and designing its control law. First, the problem is formalized by
considering parametrized reinforcement learning environments and parametrized
policies. The objective of the optimization problem is to jointly find a
control policy and an environment over the joint hypothesis space of parameters
such that the sum of rewards gathered by the policy in this environment is
maximal. The optimization problem is then addressed by generalizing the direct
policy search algorithms to an algorithm we call Direct Environment Search with
(projected stochastic) Gradient Ascent (DESGA). We illustrate the performance
of DESGA on two benchmarks. First, we consider a parametrized space of
Mass-Spring-Damper (MSD) environments and control policies. Then, we use our
algorithm for optimizing the size of the components and the operation of a
small-scale autonomous energy system, i.e. a solar off-grid microgrid, composed
of photovoltaic panels, batteries, etc. On both benchmarks, we compare the
results of the execution of DESGA with a theoretical upper-bound on the
expected return. Furthermore, the performance of DESGA is compared to an
alternative algorithm. The latter performs a grid discretization of the
environment's hypothesis space and applies the REINFORCE algorithm to identify
pairs of environments and policies resulting in a high expected return. The
choice of this algorithm is also discussed and motivated. On both benchmarks,
we show that DESGA and the alternative algorithm result in a set of parameters
for which the expected return is nearly equal to its theoretical upper-bound.
Nevertheless, the execution of DESGA is much less computationally costly.
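The core update of such a joint search can be sketched as a single projected stochastic gradient-ascent step. This is an illustrative sketch under our own assumptions (list-based parameters, a box-constrained environment space, a user-supplied gradient estimator), not the authors' implementation of DESGA.

```python
# Hedged sketch of the DESGA idea: ascend an estimate of the expected return
# jointly in the policy parameters and the environment (sizing) parameters,
# projecting the environment parameters back onto a feasible box afterwards.

def desga_step(theta_pi, theta_env, grad_estimate, lr, env_bounds):
    """One projected stochastic gradient-ascent step on (policy, environment)."""
    g_pi, g_env = grad_estimate(theta_pi, theta_env)
    theta_pi = [p + lr * g for p, g in zip(theta_pi, g_pi)]
    theta_env = [e + lr * g for e, g in zip(theta_env, g_env)]
    # projection: clip each environment parameter to its feasible range
    theta_env = [min(max(e, lo), hi) for e, (lo, hi) in zip(theta_env, env_bounds)]
    return theta_pi, theta_env
```

Iterating this step on a concave surrogate return drives the policy parameters to their optimum while the environment parameters converge to the best feasible point, which may sit on the boundary of the box, hence the projection.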
Lifelong control of off-grid microgrid with model-based reinforcement learning
peer reviewed

Off-grid microgrids are receiving a growing interest for rural electrification purposes in developing countries due to their ability to ensure affordable, sustainable and reliable energy services. Off-grid microgrids rely on renewable energy sources (RES) coupled with storage systems to meet the electrical demand. The inherent uncertainty introduced by RES as well as the stochastic nature of the electrical demand in rural contexts pose significant challenges to the efficient control of off-grid microgrids throughout their entire life span. In this paper, we address the lifelong control problem of an isolated microgrid. We categorize the set of changes that may occur over its life span into progressive and abrupt changes. We propose a novel model-based reinforcement learning algorithm that is able to address both types of changes. In particular, the proposed algorithm demonstrates generalisation properties, transfer capabilities and improved robustness to fast-changing system dynamics. The proposed algorithm is compared against a rule-based policy and a model predictive controller with look-ahead. The results show that the trained agent is able to outperform both benchmarks in the lifelong setting where the system dynamics are changing over time. © 2021 Elsevier Ltd
Allocation of locally generated electricity in renewable energy communities
This paper introduces a methodology to perform an ex-post allocation of
locally generated electricity among the members of a renewable energy
community. Such an ex-post allocation takes place in a settlement phase where
the financial exchanges of the community are based on the production and
consumption profiles of each member. The proposed methodology consists of an
optimisation framework which (i) minimises the sum of individual electricity
costs of the community members, and (ii) can enforce minimum self-sufficiency
rates (the proportion of electricity consumption covered by local production) on each member, thereby enhancing the economic gains of some members. The latter
capability aims to ensure that members receive enough incentives to participate
in the renewable energy community. This framework is designed so as to provide
a practical approach that is ready to use by community managers, which is
compliant with current legislation on renewable energy communities. It computes
a set of optimal repartition keys, which represent the percentage of total
local production given to each member -- one key per metering period per
member. These keys are computed based on an initial set of keys provided in the
simulation, which are typically contractual, i.e., agreed upon between each member and the manager of the renewable energy community. This methodology is tested in a broad range of scenarios, illustrating its ability to optimise the operational costs of a renewable energy community.

Comment: 8 pages, 6 figures, 4 tables, submitted to IEEE Transactions on Power Systems.
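The notion of a repartition key can be illustrated with a deliberately simplified allocation rule. This sketch rests on our own assumptions (a single metering period, allocation to the members facing the highest retail price first, capped at each member's consumption); the paper instead solves a full optimisation over all periods and can additionally enforce the self-sufficiency constraints.

```python
# Hedged sketch: compute repartition keys (fraction of local production
# assigned to each member) for one metering period, by serving the members
# with the highest retail price first. Illustrative only.

def repartition_keys(production, consumption, retail_price):
    """Return one key per member; keys are in [0, 1] and sum to at most 1."""
    order = sorted(range(len(consumption)), key=lambda i: -retail_price[i])
    keys = [0.0] * len(consumption)
    remaining = production
    for i in order:
        share = min(consumption[i], remaining)  # never allocate above consumption
        keys[i] = share / production if production > 0 else 0.0
        remaining -= share
        if remaining <= 0:
            break
    return keys
```

Each key is the percentage of the period's total local production given to a member; in the paper one such key is computed per metering period and per member, starting from contractually agreed initial keys.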
Exploiting the flexibility potential of water distribution networks: A pilot project in Belgium
peer reviewed

Flexibility, and in particular energy storage, is expected to assume a key role in the efficient and secure operation of the power system, and thus in the transition towards a carbon-free electricity sector. In this paper, we propose a methodology for exploiting the flexibility existing in water distribution systems through water storage in reservoirs. The methodology relies first on a modelling approach, from which an optimization problem is
defined. The resolution of this optimization problem leads to an operating pattern for the pumps. The methodology assumes that all the electricity is bought on the day-ahead market, where the bids are placed by constructing and solving an optimization problem. The uncertain water consumption and the electricity market prices are predicted using machine learning techniques.
The methodology is tested on a real-life water distribution network in Belgium.
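The pump-scheduling idea can be sketched with a toy heuristic: with day-ahead prices and a demand forecast known, run the pumps in the cheapest hours while keeping the reservoir level feasible. The function below is an illustrative simplification under our own assumptions (a single pump and reservoir, a greedy fill of cheap hours, a feasibility check instead of constraints); the pilot project formulates and solves a proper optimization problem.

```python
# Hedged sketch: schedule pumping into the cheapest day-ahead hours, then
# verify the reservoir level stays within bounds. Illustrative only; a real
# solver would encode the level bounds as constraints of the optimization.

def pump_schedule(prices, demand, pump_rate, reservoir_max, level0=0.0):
    """Greedy schedule: pump total demand in the cheapest hours first."""
    hours = sorted(range(len(prices)), key=lambda h: prices[h])
    total_demand = sum(demand)
    schedule = [0.0] * len(prices)
    pumped = 0.0
    for h in hours:                      # fill cheapest hours first
        if pumped >= total_demand:
            break
        schedule[h] = min(pump_rate, total_demand - pumped)
        pumped += schedule[h]
    # verify the reservoir never runs dry or overflows under this schedule
    level = level0
    for h in range(len(prices)):
        level += schedule[h] - demand[h]
        if level < 0 or level > reservoir_max:
            return None                  # infeasible under this greedy fill
    return schedule
```

The day-ahead bids would then be placed for the hours in which the schedule pumps, with machine-learning forecasts supplying the uncertain water consumption and electricity prices.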
Real-Time Bidding Strategies from Micro-Grids Using Reinforcement Learning
peer reviewed

We address the problem faced by the operator of a microgrid participating in a continuous real-time market. The microgrid consists of distributed generation, flexible loads and a storage device. The goal of the microgrid operator is the maximization of the profits over the entire trading horizon, while taking into account operational constraints. The variability of the Renewable Energy Sources (RES) is considered and the energy trading is modeled as a Markov Decision Process. The problem is solved using reinforcement learning (RL). The resulting optimal real-time bidding strategy of a microgrid is discussed.