8 research outputs found

    Selection of features in reinforcement learning applied to energy consumption forecast in buildings according to different contexts

    The management of buildings responsible for energy storage and control can be optimized with the support of forecasting techniques, which are essential for identifying load consumption patterns and for deciding which forecasting technique yields the most accurate predictions in each context. This paper considers two forecasting methods, an artificial neural network and k-nearest neighbors, for predicting the consumption of a building equipped with consumption-recording devices and sensors. Forecasts are produced in five-minute periods, and the choice of forecasting technique is treated as an opportunity to improve prediction accuracy. The decision making uses the multi-armed bandit, in a reinforcement learning context, to find the most suitable algorithm in each five-minute period, thus improving forecast accuracy. The reinforcement learning has been tested with upper confidence bound and greedy algorithms under several exploration alternatives. In the case study, three contexts have been analyzed. The present work has been developed under the EUREKA - ITEA3 Project (ITEA-18008), Project TIoCPS (ANI|P2020 POCI-01-0247-FEDER-046182), and has received funding from the European Regional Development Fund through COMPETE 2020. The work has also been done in the scope of projects UIDB/00760/2020 and CEECIND/02887/2017, financed by FEDER Funds through the COMPETE program and National Funds through FCT, Portugal.
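The bandit-based forecaster selection described in this abstract can be sketched as follows. The forecaster names, error distributions, exploration constant, and reward design (negative absolute forecast error, so lower error means higher reward) are all illustrative assumptions, not the paper's implementation:

```python
import math
import random

def ucb1_select(counts, values, t, c=2.0):
    """Pick the arm with the highest upper confidence bound (UCB1)."""
    for arm, n in counts.items():
        if n == 0:
            return arm  # play every arm once before using the bound
    return max(counts, key=lambda a: values[a]
               + math.sqrt(c * math.log(t) / counts[a]))

arms = ["ann", "knn"]          # the two forecasting techniques
counts = {a: 0 for a in arms}  # times each forecaster was chosen
values = {a: 0.0 for a in arms}  # running mean reward per forecaster

random.seed(0)
for t in range(1, 201):  # 200 five-minute periods
    arm = ucb1_select(counts, values, t)
    # stand-in for the real forecast error of the chosen technique
    error = random.gauss(0.8 if arm == "ann" else 1.2, 0.3)
    reward = -abs(error)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(counts)  # the lower-error forecaster should be chosen more often
```

Swapping the UCB1 selection for an epsilon-greedy rule (choose a random arm with probability epsilon, else the best mean) gives the greedy variants the abstract also evaluates.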

    Deep Reinforcement Learning for Control of Microgrids: A Review

    A microgrid is widely accepted as a prominent solution to enhance resilience and performance in distributed power systems. Microgrids are flexible for adding distributed energy resources (DERs) to the ecosystem of electrical networks. Control techniques are used to synchronize DERs due to their turbulent nature. DERs with alternating-current, direct-current, and hybrid loads with storage systems have been used in microgrids quite frequently, and as a result controlling the flow of energy in microgrids has become a complex task for traditional control approaches. Distributed as well as central approaches to applying control algorithms are well-known methods to regulate frequency and voltage in microgrids. Recently, techniques based on artificial intelligence have been applied to the problems that arise in the operation and control of the latest generation of microgrids and smart grids. Such techniques are categorized, in broad terms, into machine learning and deep learning. The objective of this research is to survey the latest microgrid control strategies based on the deep reinforcement learning (DRL) approach. Other artificial intelligence techniques have already been reviewed extensively, but the use of DRL has increased in the past couple of years. To bridge this gap for researchers, this survey focuses exclusively on DRL techniques for microgrid voltage control and frequency regulation, covering distributed, cooperative, and multi-agent approaches.
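As a minimal illustration of the RL framing this survey covers, the sketch below casts frequency regulation as tabular Q-learning over a one-line toy plant. The dynamics, discretization, and hyperparameters are assumptions for illustration only; the DRL controllers the survey reviews replace the table with a deep network and a detailed microgrid model:

```python
import random

ACTIONS = [-0.1, 0.0, 0.1]  # change in DER active-power setpoint (p.u.)

def discretize(dev):
    # bucket the frequency deviation (Hz) into a small integer state
    return max(-5, min(5, round(dev * 10)))

random.seed(1)
q = {}  # (state, action index) -> estimated value
alpha, gamma, eps = 0.1, 0.9, 0.2
dev = 0.3  # initial frequency deviation in Hz

for _ in range(5000):
    s = discretize(dev)
    a = (random.randrange(3) if random.random() < eps  # explore
         else max(range(3), key=lambda i: q.get((s, i), 0.0)))  # exploit
    # toy plant: raising the setpoint pulls the deviation down, plus noise
    dev = max(-0.5, min(0.5, dev - ACTIONS[a] + random.gauss(0, 0.02)))
    r = -dev ** 2  # penalize any deviation from nominal frequency
    s2 = discretize(dev)
    best = max(q.get((s2, i), 0.0) for i in range(3))
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best
                                              - q.get((s, a), 0.0))
```

The multi-agent schemes the survey discusses extend this single-agent loop with one learner per DER and a shared or exchanged state.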

    Artificial Intelligence Supporting Energy Supply and Consumption in the New-Type Power System

    The energy revolution and the digital revolution are in full swing and are jointly driving the transformation and upgrading of China's energy and power system into a new-type power system. Artificial intelligence helps the new-type power system achieve accurate modeling, efficient analysis, and intelligent decision-making and control, and is a key supporting technology for its construction. This paper reviews the current state of core AI applications, including forecasting, modeling, analysis, and optimal control, across the key links of the power system: generation sources, the grid, loads, and storage. It then analyzes and looks ahead to technical developments in AI, such as meta-learning, unsupervised pre-training, interpretability, and human-machine hybrid augmented intelligence, and their application in the new-type power system, providing a reference for the deep integration of AI technology with China's new-type power system.

    Prioritized experience replay based deep distributional reinforcement learning for battery operation in microgrids

    This is the author accepted manuscript. The final version is available on open access from Elsevier via the DOI in this record. Data availability: data will be made available on request. Reinforcement Learning (RL) provides a pathway for efficiently utilizing the battery storage in a microgrid. However, traditional value-based RL algorithms used in battery management formulate policies based on the expectation of the reward rather than its probability distribution, so the scheduling strategy reflects only the expected reward. This paper focuses on a scheduling strategy based on the probability distribution of the rewards, which optimally reflects the uncertainties in the incoming dataset. Furthermore, prioritized experience replay of the training experience is used to enhance the quality of learning by reducing bias. The results are obtained with different variants of distributional RL algorithms: C51, Quantile Regression Deep Q-Network (QR-DQN), Fully parameterized Quantile Function (FQF), Implicit Quantile Networks (IQN), and Rainbow. Moreover, the results are compared with the traditional deep Q-learning algorithm with prioritized experience replay. The convergence results on the training dataset are further analyzed by varying the action spaces, using randomized experience replay, and omitting the tariff-based action while enforcing penalties for violating battery SoC limits. The best trained Q-network is tested with different load and PV profiles to obtain the battery operation and costs. The performance of the distributional RL algorithms is analyzed under different schemes of Time of Use (ToU) tariff. QR-DQN with prioritized experience replay has been found to be the best-performing algorithm in terms of convergence on the training dataset, with the least fluctuation on the validation dataset and in battery operations during the different tariff regimes of the day. European Regional Development Fund.
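The proportional prioritized experience replay used here can be sketched as follows. The class interface, the alpha/beta values, and the transition format are illustrative assumptions rather than the paper's implementation:

```python
import random

class PrioritizedReplay:
    """Proportional prioritized replay: transitions with larger TD error
    are sampled more often; importance weights correct the resulting bias."""

    def __init__(self, alpha=0.6):
        self.alpha = alpha          # how strongly priority shapes sampling
        self.data, self.prios = [], []

    def add(self, transition, td_error):
        self.data.append(transition)
        self.prios.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, k, beta=0.4):
        total = sum(self.prios)
        probs = [p / total for p in self.prios]
        idx = random.choices(range(len(self.data)), weights=probs, k=k)
        n = len(self.data)
        # importance-sampling weights, normalized so the largest is 1
        w = [(n * probs[i]) ** (-beta) for i in idx]
        wmax = max(w)
        return [self.data[i] for i in idx], [x / wmax for x in w], idx

random.seed(0)
buf = PrioritizedReplay()
for t in range(100):
    buf.add(("state", t % 3, 0.0), td_error=random.random())
batch, weights, idx = buf.sample(8)
```

In training, the sampled weights scale each transition's loss, and the priorities at `idx` are refreshed with the new TD errors after every update.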

    Online Scheduling of a Residential Microgrid via Monte-Carlo Tree Search and a Learned Model

    The uncertainty of distributed renewable energy brings significant challenges to the economic operation of microgrids. Conventional online optimization approaches require a forecast model; however, accurately forecasting renewable power generation is still a tough task. To achieve online scheduling of a residential microgrid (RM) without a forecast model for the future PV/wind and load power sequences, this article investigates a reinforcement learning (RL) approach to tackle this challenge. Specifically, building on a recent development in model-based reinforcement learning, MuZero (Schrittwieser et al., 2019), we investigate its application to the RM scheduling problem. To accommodate the characteristics of the RM scheduling application, an optimization framework that combines the model-based RL agent with mathematical optimization techniques is designed, and long short-term memory (LSTM) units are adopted to extract features from the past renewable generation and load sequences. At each time step, the optimal decision is obtained by conducting Monte-Carlo tree search (MCTS) with a learned model and solving an optimal power flow sub-problem. In this way, the approach can sequentially make operational decisions online without relying on a forecast model. The numerical simulation results demonstrate the effectiveness of the proposed algorithm.
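A minimal sketch of the UCT-style search at the heart of such an approach is below. A fixed toy tariff stands in for the learned model, the optimal-power-flow sub-problem is omitted, and all prices, actions, and battery limits are assumptions for illustration:

```python
import math
import random

PRICES = [0.30, 0.10, 0.25, 0.05]  # assumed tariff per step
ACTIONS = [-1, 0, 1]               # discharge / idle / charge (kWh)

def step(t, soc, a):
    a = max(-soc, min(a, 3 - soc))      # keep SoC within [0, 3] kWh
    return t + 1, soc + a, -PRICES[t] * a  # reward: sell high, buy low

def rollout(t, soc):
    # random playout to the end of the horizon estimates a state's value
    total = 0.0
    while t < len(PRICES):
        t, soc, r = step(t, soc, random.choice(ACTIONS))
        total += r
    return total

def mcts(t0, soc0, iters=2000, c=1.4):
    stats = {a: [0, 0.0] for a in ACTIONS}  # action -> [visits, value sum]
    for i in range(1, iters + 1):
        # one-level UCT: pick the root action with the best upper bound
        a = max(ACTIONS, key=lambda a: float("inf") if stats[a][0] == 0
                else stats[a][1] / stats[a][0]
                + c * math.sqrt(math.log(i) / stats[a][0]))
        t, soc, r = step(t0, soc0, a)
        stats[a][0] += 1
        stats[a][1] += r + rollout(t, soc)
    return max(ACTIONS, key=lambda a: stats[a][0])  # most-visited action

random.seed(0)
best = mcts(0, soc0=1)
print(best)  # with a high first price, discharging (-1) should dominate
```

The full method replaces the random rollouts with a learned value/dynamics model and grows a deeper tree, but the selection rule and visit-count decision are the same idea.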