62 research outputs found

    On-Line Building Energy Optimization Using Deep Reinforcement Learning

    Unprecedented volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit the planning and operation of future power systems and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using deep reinforcement learning, a hybrid class of methods that combines reinforcement learning with deep learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, deep Q-learning and deep policy gradient, both of which were extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This highly dimensional database includes information about photovoltaic power generation, electric vehicles, and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers to encourage more efficient use of electricity.
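The multi-action extension mentioned in the abstract can be sketched in a minimal form: one value head per appliance, each head selecting its own action at every step so that all devices are scheduled simultaneously. The appliance names, tabular Q-values, and hyperparameters below are illustrative assumptions, not the paper's actual deep architecture.

```python
import random

# Hypothetical appliances and on/off actions; the paper's real action
# space and network architecture are not reproduced here.
DEVICES = ["ev_charger", "hvac", "water_heater"]
ACTIONS = [0, 1]  # 0 = off, 1 = on
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# One Q-table per device head: Q[device][(state, action)] -> value.
Q = {d: {} for d in DEVICES}

def q(d, s, a):
    return Q[d].get((s, a), 0.0)

def select_actions(state, explore=True):
    """Pick one action per device simultaneously (one head per device)."""
    joint = {}
    for d in DEVICES:
        if explore and random.random() < EPSILON:
            joint[d] = random.choice(ACTIONS)
        else:
            joint[d] = max(ACTIONS, key=lambda a: q(d, state, a))
    return joint

def update(state, joint_action, reward, next_state):
    """Standard Q-learning update applied independently to every head."""
    for d, a in joint_action.items():
        best_next = max(q(d, next_state, b) for b in ACTIONS)
        Q[d][(state, a)] = q(d, state, a) + ALPHA * (
            reward + GAMMA * best_next - q(d, state, a))
```

In the deep variant, each tabular head would be replaced by an output head of a shared network, but the joint selection and per-head update structure stays the same.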

    Deep learning methods for on-line flexibility prediction and optimal resource allocation in smart buildings

    An unprecedented volume of data is becoming available with the growth of the advanced metering infrastructure. Because the built environment is the largest user of electricity, a deeper look at building energy consumption holds promise for helping to achieve overall optimization of the energy system. Yet the transfer of knowledge from the fusion of such extensive data is still under development. To overcome this limitation in the big data era, a growing range of machine learning methods appears suitable to automatically extract, predict, and optimize building electrical patterns by performing successive transformations of the data. More recently, there has been a revival of interest in deep learning methods as the most advanced on-line solutions for large-scale, real-world databases. Enabling real-time applications at the high level of aggregation in the smart grid will put end-users in a position to change their consumption patterns, offering useful benefits for the system as a whole.

    Resilient Load Restoration in Microgrids Considering Mobile Energy Storage Fleets: A Deep Reinforcement Learning Approach

    Mobile energy storage systems (MESSs) provide mobility and flexibility to enhance distribution system resilience. The paper proposes a Markov decision process (MDP) formulation for an integrated service restoration strategy that coordinates the scheduling of MESSs and the resource dispatch of microgrids. Uncertainties in load consumption are taken into account. A deep reinforcement learning (DRL) algorithm is utilized to solve the MDP for optimal scheduling. Specifically, twin delayed deep deterministic policy gradient (TD3) is applied to train the Q-networks and the policy network; the well-trained policy can then be deployed on-line to perform multiple actions simultaneously. The proposed model is demonstrated on an integrated test system with three microgrids connected by the Sioux Falls transportation network. The simulation results indicate that mobile and stationary energy resources can be well coordinated to improve system resilience. Comment: Submitted to the 2020 IEEE Power and Energy Society General Meeting.
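TD3, named in the abstract above, rests on three ingredients: target policy smoothing, clipped double-Q targets from twin critics, and delayed actor updates. A minimal sketch of the target computation follows; the toy one-dimensional action space and the callable critics are assumptions for illustration, not the paper's implementation.

```python
import random

def td3_target(reward, next_state, q1_target, q2_target, policy_target,
               gamma=0.99, noise_std=0.2, noise_clip=0.5,
               a_low=-1.0, a_high=1.0):
    """Compute the TD3 bootstrap target for one transition."""
    # 1) Target policy smoothing: perturb the target action with clipped noise.
    noise = max(-noise_clip, min(noise_clip, random.gauss(0.0, noise_std)))
    a_next = min(a_high, max(a_low, policy_target(next_state) + noise))
    # 2) Clipped double-Q: take the minimum of the twin target critics,
    #    which counteracts overestimation bias.
    q_min = min(q1_target(next_state, a_next), q2_target(next_state, a_next))
    return reward + gamma * q_min

def should_update_actor(step, policy_delay=2):
    """3) Delayed policy updates: refresh the actor only every few critic steps."""
    return step % policy_delay == 0
```

In a full agent, both critics would regress toward this target every step, while the actor and the target networks are updated only when `should_update_actor` is true.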

    Deep Reinforcement Learning for Power Trading

    The Dutch power market includes a day-ahead market and an auction-like intraday balancing market. The varying supply and demand of power, and their uncertainty, induce an imbalance that causes power prices to differ across these two markets and creates an opportunity for arbitrage. In this paper, we present collaborative dual-agent reinforcement learning (RL) for bi-level simulation and optimization of European power arbitrage trading. Moreover, we propose two novel practical implementations specifically addressing the electricity power market. Leveraging the concept of imitation learning, the RL agent's reward is reshaped by taking prior domain knowledge into account, which results in better convergence during training and improves and generalizes performance. In addition, tranching of orders improves the bidding success rate and significantly raises the P&L. We show that each method contributes significantly to the overall performance uplift, and the integrated methodology achieves about a three-fold improvement in cumulative P&L over the original agent, as well as outperforming the highest benchmark policy by around 50% while exhibiting efficient computational performance.
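The imitation-style reward reshaping described above can be sketched as blending the trading P&L with a small bonus for agreeing with a domain heuristic. The expert rule and the weighting below are assumptions for illustration; the paper's actual prior knowledge and reward formulation are not given in the abstract.

```python
def expert_action(day_ahead_price, intraday_price):
    """Hypothetical domain heuristic: buy when the intraday market is
    expected to clear higher than the day-ahead price, else sell."""
    return "buy" if day_ahead_price < intraday_price else "sell"

def shaped_reward(pnl, agent_action, day_ahead_price, intraday_price,
                  imitation_weight=0.1):
    """Blend realized P&L with an imitation bonus for matching the expert."""
    match = agent_action == expert_action(day_ahead_price, intraday_price)
    return pnl + (imitation_weight if match else -imitation_weight)
```

The bonus only nudges the gradient toward expert-consistent behavior early in training; as the P&L term dominates, the agent remains free to depart from the heuristic when arbitrage conditions warrant it.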

    Sparse Training Theory for Scalable and Efficient Agents

    A fundamental task for artificial intelligence is learning. Deep neural networks have proven to cope well with all learning paradigms, i.e. supervised, unsupervised, and reinforcement learning. Nevertheless, traditional deep learning approaches make use of cloud computing facilities and do not scale well to autonomous agents with low computational resources. Even in the cloud, they suffer from computational and memory limitations, and they cannot adequately model large physical worlds for agents that would require networks with billions of neurons. These issues have been addressed in the last few years by the emerging topic of sparse training, which trains sparse networks from scratch. This paper discusses the state of the art in sparse training, its challenges and limitations, while introducing a couple of new theoretical research directions that have the potential to alleviate sparse training's limitations and push deep learning scalability well beyond its current boundaries. Finally, the impact of these theoretical advancements in complex multi-agent settings is discussed from a real-world perspective, using the smart grid as a case study.
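The core loop of sparse training mentioned above is a periodic prune-and-regrow cycle over a fixed connection budget, in the spirit of sparse evolutionary training: drop the weakest active connections, then regrow the same number elsewhere. The flat-list mask representation and magnitude-based pruning criterion below are illustrative assumptions.

```python
import random

def prune_and_regrow(weights, mask, prune_fraction=0.3, rng=random):
    """Drop the smallest-magnitude active weights, regrow at random zeros,
    keeping the total number of active connections constant."""
    active = [i for i, m in enumerate(mask) if m]
    n_prune = int(len(active) * prune_fraction)
    # Prune: deactivate the n_prune active weights closest to zero.
    for i in sorted(active, key=lambda i: abs(weights[i]))[:n_prune]:
        mask[i] = 0
        weights[i] = 0.0
    # Regrow: activate the same number of currently inactive connections.
    inactive = [i for i, m in enumerate(mask) if not m]
    for i in rng.sample(inactive, n_prune):
        mask[i] = 1
        weights[i] = rng.uniform(-0.1, 0.1)  # fresh small initialization
    return weights, mask
```

Because the density never changes, memory and compute stay bounded throughout training, which is what makes the approach attractive for agents with low computational resources.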