5 research outputs found

    Deep Reinforcement Learning for Heat Pump Control

    Heating in private households is a major contributor to today's emissions. Heat pumps are a promising alternative for heat generation and a key technology for achieving the goals of the German energy transition and reducing dependence on fossil fuels. Today, the majority of heat pumps in the field are controlled by a simple heating curve, a naive mapping from the current outdoor temperature to a control action. A more advanced approach is model predictive control (MPC), which has been applied to heat pump control in several research works. However, MPC depends heavily on the building model, which has several disadvantages. Motivated by this and by recent breakthroughs in the field, this work applies deep reinforcement learning (DRL) to heat pump control in a simulated environment. A comparison with MPC shows that DRL can be applied in a model-free manner to achieve MPC-like performance. This work extends previous applications of DRL to building heating operation by performing an in-depth analysis of the learned control strategies and by giving a detailed comparison of the two state-of-the-art control methods.
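The heating-curve baseline the abstract refers to can be sketched in a few lines: a linear map from outdoor temperature to a supply-water setpoint, clipped to safe limits. The slope, offset, and limit values below are illustrative assumptions, not taken from the paper.

```python
def heating_curve_setpoint(outdoor_temp_c: float,
                           slope: float = -0.8,
                           offset: float = 36.0,
                           t_min: float = 25.0,
                           t_max: float = 55.0) -> float:
    """Naively map outdoor temperature (degC) to a supply-temperature
    setpoint (degC): colder outside -> higher supply temperature."""
    setpoint = offset + slope * outdoor_temp_c
    # Clip to the heat pump's operating envelope.
    return max(t_min, min(t_max, setpoint))

print(heating_curve_setpoint(-10.0))  # 44.0
print(heating_curve_setpoint(15.0))   # 25.0 (clipped to t_min)
```

A DRL or MPC controller replaces this static mapping with a policy that also accounts for building dynamics, forecasts, and comfort constraints.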

    Aggregation of Power Capabilities of Heterogeneous Resources for Real-Time Control of Power Grids

    Aggregation of electric resources is a fundamental function for the operation of power grids at different time scales. In the context of a recently proposed framework for the real-time control of microgrids with explicit power setpoints, we define and formally specify an aggregation method that explicitly accounts for delays and message asynchronism. The method makes it possible to abstract the details of resources using high-level concepts that are device- and grid-independent. We demonstrate the application of the method to a CIGRE benchmark with heterogeneous and low-inertia resources.
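The core idea of capability aggregation can be illustrated with a toy version: each resource advertises an active-power interval [p_min, p_max], and the aggregator exposes the interval sum as a single device-independent envelope. This is only a minimal sketch; the paper's method additionally handles reactive power, delays, and asynchronous messages, which are omitted here.

```python
def aggregate_capabilities(resources):
    """Aggregate per-resource active-power intervals (kW).

    resources: list of (p_min, p_max) tuples; negative values mean
    the resource can absorb power, positive that it can inject.
    """
    p_min = sum(lo for lo, _ in resources)
    p_max = sum(hi for _, hi in resources)
    return p_min, p_max

# A battery (absorb or inject), a curtailable PV plant (inject only),
# and a flexible load (absorb only):
fleet = [(-10.0, 10.0), (0.0, 25.0), (-8.0, 0.0)]
print(aggregate_capabilities(fleet))  # (-18.0, 35.0)
```

The grid controller can then dispatch any setpoint inside the aggregate interval without knowing the device types behind it.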

    A Novel Reinforcement Learning-Optimization Approach for Integrating Wind Energy to Power System with Vehicle-to-Grid Technology

    High integration of intermittent renewable energy sources (RES), specifically wind power, has created complexities in power system operations due to their limited controllability and predictability. In addition, large fleets of electric vehicles (EVs) are expected to have a large impact on electricity consumption, contributing to the volatility. In this dissertation, a well-coordinated smart charging approach is developed that exploits the flexibility of EV owners: EVs are used as distributed energy storage units and flexible loads to absorb fluctuations in wind power output in a vehicle-to-grid (V2G) setup. Barriers to participation in V2G, such as battery degradation and uncertainty about unexpected trips, are also addressed through an interactive mechanism in the smart grid. First, a static deterministic model is formulated as a multi-objective mixed-integer quadratic program (MIQP), assuming all parameters are known a day ahead. Subsequently, a real-time dynamic schedule is formulated using a rolling horizon with an expected-value approximation. Simulation experiments demonstrate a significant increase in wind utilization and reductions in charging cost and battery degradation compared to an uncontrolled charging scenario. Formulating the scheduling problem of the EV-wind integrated power system with conventional stochastic programming (SP) approaches is challenging due to the many uncertain parameters with unknown underlying distributions, such as wind, price, and the commuting patterns of EV owners.
    To alleviate this problem, a model-free reinforcement learning (RL) algorithm integrated with deterministic optimization is proposed that can be applied to many multi-stage stochastic problems while mitigating some of the challenges of conventional SP methods (e.g., large scenario trees, computational complexity) as well as the challenges of model-free RL (e.g., slow convergence, unstable learning in dynamic environments). Simulation results of applying the combined approach to the EV scheduling problem demonstrate the effectiveness of the RL-optimization method in solving the multi-stage EV charge/discharge scheduling problem. The proposed methods outperform standard RL approaches (e.g., DDQN) in convergence speed and in finding the global optimum. Moreover, to address the curse of dimensionality in RL with large state-action spaces, a heuristic EV fleet charging/discharging scheme is combined with the RL-optimization approach to solve the scheduling problem for large numbers of EVs.
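The rolling-horizon idea with an expected-value approximation can be sketched as follows: at each time step, uncertain future wind is replaced by its expected value over a short window, a deterministic decision is computed, and only the first-stage action is applied before the horizon rolls forward. The greedy charge/discharge rule below is a deliberately simplified stand-in for the dissertation's MIQP, and all numbers are illustrative assumptions.

```python
def rolling_horizon_schedule(wind_forecast, demand, horizon=3, ev_power=5.0):
    """Charge the EV fleet (positive kW) when expected wind exceeds demand,
    discharge (negative kW) otherwise; actions capped at +/- ev_power."""
    actions = []
    for t in range(len(demand)):
        # Expected-value approximation: mean forecast over the lookahead window.
        window = wind_forecast[t:t + horizon]
        expected_wind = sum(window) / len(window)
        surplus = expected_wind - demand[t]
        # Apply only the first-stage decision, then roll the horizon forward.
        actions.append(max(-ev_power, min(ev_power, surplus)))
    return actions

wind = [10, 12, 4, 3, 8]   # forecast wind output (kW)
load = [6, 6, 6, 6, 6]     # base demand (kW)
print(rolling_horizon_schedule(wind, load))
```

In the RL-optimization scheme the abstract describes, a learned policy would replace the fixed greedy rule inside each horizon while the deterministic solver handles the within-window scheduling.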