
    Learning optimal environments using projected stochastic gradient ascent

    In this work, we propose a new methodology for jointly sizing a dynamical system and designing its control law. First, the problem is formalized by considering parametrized reinforcement learning environments and parametrized policies. The objective of the optimization problem is to jointly find a control policy and an environment over the joint hypothesis space of parameters such that the sum of rewards gathered by the policy in this environment is maximized. The optimization problem is then addressed by generalizing direct policy search algorithms into an algorithm we call Direct Environment Search with (projected stochastic) Gradient Ascent (DESGA). We illustrate the performance of DESGA on two benchmarks. First, we consider a parametrized space of Mass-Spring-Damper (MSD) environments and control policies. Then, we use our algorithm to optimize the size of the components and the operation of a small-scale autonomous energy system, i.e. a solar off-grid microgrid composed of photovoltaic panels, batteries, etc. On both benchmarks, we compare the results of DESGA with a theoretical upper bound on the expected return. Furthermore, the performance of DESGA is compared to an alternative algorithm, which performs a grid discretization of the environment's hypothesis space and applies the REINFORCE algorithm to identify pairs of environments and policies yielding a high expected return. The choice of this alternative is also discussed and motivated. On both benchmarks, we show that DESGA and the alternative algorithm result in a set of parameters for which the expected return is nearly equal to its theoretical upper bound. Nevertheless, the execution of DESGA is much less computationally costly.
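
    As the abstract describes it, DESGA performs projected stochastic gradient ascent over the joint hypothesis space of environment and policy parameters. The sketch below illustrates that update scheme only; the gradient oracle, the box-shaped feasible sets, and all function names are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of projected stochastic gradient ascent over the joint
# (environment, policy) parameter space, in the spirit of DESGA.
# `grad_estimator` stands for any stochastic estimator of the gradient of the
# expected return J(theta_env, theta_pi), e.g. obtained from simulated
# rollouts; it is an assumed placeholder, not the paper's estimator.
import numpy as np

def project_box(x, lower, upper):
    """Euclidean projection onto the box [lower, upper] of feasible parameters."""
    return np.clip(x, lower, upper)

def desga_sketch(grad_estimator, theta_env, theta_pi, env_bounds, pi_bounds,
                 lr=1e-2, n_iters=1000):
    for _ in range(n_iters):
        # Stochastic estimate of the gradient of the expected return with
        # respect to both parameter vectors (e.g. from Monte Carlo rollouts).
        g_env, g_pi = grad_estimator(theta_env, theta_pi)

        # Gradient *ascent* step: the expected sum of rewards is maximized.
        theta_env = theta_env + lr * g_env
        theta_pi = theta_pi + lr * g_pi

        # Projection keeps both vectors inside their hypothesis spaces.
        theta_env = project_box(theta_env, *env_bounds)
        theta_pi = project_box(theta_pi, *pi_bounds)
    return theta_env, theta_pi
```

    In the paper's benchmarks the feasible sets would encode, for instance, admissible mass-spring-damper constants or admissible component sizes of the microgrid; the box projection above is simply the most common choice of projection operator and is an assumption here.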

    Deep Reinforcement Learning for the Control of Energy Storage in Grid-Scale and Microgrid Applications

    The European and worldwide directives and targets for renewable energy integration, motivated by the imminent need to decarbonize the electricity sector, are imposing severe changes on the conventional electrical power system. The inherent unpredictability of the instantaneous energy production from variable renewable energy sources (VRES) is expected to make the reliable and secure operation of the system a challenging task. Flexibility, and in particular energy storage, is expected to assume a key role in the integration of large shares of VRES in the power system, and thus in the transition towards a carbon-free electricity sector. One of the main storage mechanisms that can facilitate the integration of VRES is energy arbitrage, i.e. the transfer of electrical energy from a period of low demand to another period of high demand. In this thesis, we investigate and develop novel operating strategies for maximizing the value of energy arbitrage from storage units at different scales (i.e. grid-scale or distributed) and in different settings (i.e. interconnected or off-grid). The decision-making process of an operator optimizing the energy arbitrage value of storage is an inherently complex problem, mainly due to uncertainties induced by i) the stochasticity of market prices and ii) the variability of renewable generation. In view of the great successes of deep reinforcement learning (DRL) in solving challenging tasks, the goal of this thesis is to investigate its potential in solving problems related to the control of storage in modern energy systems.

    Firstly, we address the energy arbitrage problem of a storage unit that participates in the European Continuous Intraday (CID) market, and we develop an operational strategy for maximizing its arbitrage value. A novel modeling framework for the strategic participation of energy storage in the European CID market is proposed, where exchanges occur through a process similar to the stock market. A detailed description of the market mechanism and of the storage system management is provided, together with the set of necessary simplifications that renders the problem tractable. The resulting problem is solved using a state-of-the-art DRL algorithm. The outcome of the proposed method is compared with state-of-the-art industrial practices, and the resulting policy is found to outperform this benchmark.

    Secondly, we address the energy arbitrage problem faced by an off-grid microgrid operator in the context of rural electrification. In particular, we propose a novel model-based reinforcement learning algorithm that controls the storage device so as to accommodate the different changes that might occur over the microgrid lifetime. The algorithm demonstrates generalization properties, transfer capabilities, and improved robustness to fast-changing system dynamics. The proposed algorithm is compared against two benchmarks, namely a rule-based controller and a model predictive controller (MPC). The results show that the trained agent is able to outperform both benchmarks in the lifelong setting, where the system dynamics change over time.

    In the context of an off-grid microgrid, the optimal size of the components (i.e. the capacity of the photovoltaic (PV) panels and of the storage) depends heavily on the control policy applied. In this thesis, we therefore propose a new reinforcement-learning-based methodology for jointly sizing a system and designing its control law.
    The objective of the optimization problem is to jointly find a control policy and an environment over the joint hypothesis space of parameters such that the sum of the initial investment and the operational cost is minimized. The optimization problem is then addressed by generalizing direct policy search algorithms into an algorithm we call Direct Environment Search with (projected stochastic) Gradient Ascent (DESGA). We illustrate the performance of DESGA on two benchmarks. First, we consider a parametrized space of Mass-Spring-Damper (MSD) environments and control policies. Then, we use our algorithm to optimize the size of the components and the operation of a small-scale autonomous energy system, i.e. a solar off-grid microgrid composed of photovoltaic panels and batteries. On both benchmarks, we show that DESGA results in a set of parameters for which the expected return is nearly equal to its theoretical upper bound. Finally, we draw the general conclusions of this thesis and propose a list of future research directions that emerge from this work.
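
    To make the storage-control setting of the thesis concrete, the following is a minimal sketch of a battery energy-arbitrage environment of the kind a DRL or model-based agent would be trained on: the agent charges when prices are low and discharges when prices are high, subject to capacity and power limits. The class name, the toy price process, the efficiency model, and all numerical values are illustrative assumptions and do not reflect the thesis's actual CID-market or microgrid models.

```python
# Toy single-battery arbitrage environment (illustrative sketch only).
import numpy as np

class BatteryArbitrageEnv:
    def __init__(self, capacity=10.0, p_max=2.0, efficiency=0.9,
                 horizon=24, seed=0):
        self.capacity = capacity      # energy capacity [MWh] (assumed value)
        self.p_max = p_max            # charge/discharge power limit [MW]
        self.efficiency = efficiency  # simplified one-way efficiency
        self.horizon = horizon        # episode length [hours]
        self.rng = np.random.default_rng(seed)

    def _sample_price(self):
        # Toy price process: daily sinusoid plus noise (not real market data).
        base = 50.0 + 20.0 * np.sin(2.0 * np.pi * self.t / 24.0)
        return base + self.rng.normal(0.0, 5.0)

    def reset(self):
        self.t = 0
        self.soc = 0.5 * self.capacity   # state of charge [MWh]
        self.price = self._sample_price()
        return np.array([self.soc, self.price])

    def step(self, power):
        # Action: power set-point over a one-hour step
        # (negative = charge and buy, positive = discharge and sell).
        power = float(np.clip(power, -self.p_max, self.p_max))
        if power >= 0.0:                       # discharging
            energy = min(power, self.soc)      # cannot sell more than stored
            self.soc -= energy
            reward = self.price * energy * self.efficiency
        else:                                  # charging
            energy = min(-power, self.capacity - self.soc)
            self.soc += energy * self.efficiency
            reward = -self.price * energy      # pay the current price
        self.t += 1
        self.price = self._sample_price()
        done = self.t >= self.horizon
        return np.array([self.soc, self.price]), reward, done

if __name__ == "__main__":
    # Naive threshold policy, standing in for a rule-based benchmark:
    # sell above 50, buy below 50 (placeholder rule, not the thesis's).
    env = BatteryArbitrageEnv()
    state, done, total = env.reset(), False, 0.0
    while not done:
        action = env.p_max if state[1] > 50.0 else -env.p_max
        state, reward, done = env.step(action)
        total += reward
    print(f"episode revenue: {total:.1f}")
```

    In a DESGA-style joint sizing experiment, quantities such as the battery capacity (and, for the microgrid benchmark, the PV size) would become trainable environment parameters whose investment cost enters the objective, while the rule-based and MPC benchmarks would replace the learned policy inside the same simulation loop; this mapping is again a sketch under the stated assumptions.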