448 research outputs found

    Definition and evaluation of model-free coordination of electrical vehicle charging with reinforcement learning

    Demand response (DR) is becoming critical to manage the charging load of a growing electric vehicle (EV) deployment. Initial DR studies mainly adopt model predictive control, but models are highly uncertain in the EV scenario (e.g., customer behavior). Model-free approaches, based on reinforcement learning (RL), are an attractive alternative. We propose a new Markov decision process (MDP) formulation in the RL framework to jointly coordinate a set of charging stations. State-of-the-art algorithms either focus on a single EV or control an aggregate of EVs in multiple steps (e.g., 1) make aggregate load decisions and 2) translate the aggregate decision to individual EVs). In contrast, our RL approach jointly controls the whole set of EVs at once. We contribute a new MDP formulation with a scalable state representation that is independent of the number of charging stations. Using a batch RL algorithm, fitted Q-iteration, we learn an optimal charging policy. With simulations using real-world data, we: 1) differentiate settings in training the RL policy (e.g., the time span covered by training data); 2) compare its performance to an all-knowing oracle benchmark (providing an upper performance bound); 3) analyze performance fluctuations throughout a full year; and 4) demonstrate generalization capacity to larger sets of charging stations.
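The batch fitted Q-iteration loop named above can be sketched as follows. The toy two-step charging MDP, the prices, and the tabular (averaging) regressor are illustrative assumptions, not the paper's actual setup, which uses a scalable aggregate state representation.

```python
from collections import defaultdict

def fitted_q_iteration(transitions, actions, gamma=0.95, n_iters=50):
    """Batch fitted Q-iteration with a tabular (piecewise-constant) regressor.

    transitions: list of (state, action, reward, next_state) tuples
    actions: list of admissible actions
    """
    Q = defaultdict(float)
    for _ in range(n_iters):
        # 1) Build regression targets from the current Q estimate.
        targets = [
            (s, a, r + gamma * max(Q[(s2, a2)] for a2 in actions))
            for (s, a, r, s2) in transitions
        ]
        # 2) "Fit" the next Q: here, average the targets per (state, action).
        sums, counts = defaultdict(float), defaultdict(int)
        for s, a, y in targets:
            sums[(s, a)] += y
            counts[(s, a)] += 1
        Q = defaultdict(float, {k: sums[k] / counts[k] for k in sums})
    return Q

# Toy batch. State 0 = empty battery at an off-peak hour, state 1 = empty
# battery at a peak hour, state 2 = terminal. Action 1 = charge, 0 = wait;
# waiting past the peak hour incurs a missed-departure penalty.
batch = [
    (0, 1, -1.0, 2), (0, 0, 0.0, 1),
    (1, 1, -3.0, 2), (1, 0, -5.0, 2),
]
Q = fitted_q_iteration(batch, actions=[0, 1])
policy = {s: max([0, 1], key=lambda a: Q[(s, a)]) for s in [0, 1]}
```

In this toy batch, charging in the off-peak step (action 1 at state 0) ends up with a higher Q-value than deferring to the peak hour, so the greedy policy charges immediately.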

    Reinforcement learning for EV charging optimization: A holistic perspective for commercial vehicle fleets

    Recent years have seen an unprecedented uptake of electric vehicles, driven by the global push to reduce carbon emissions. At the same time, intermittent renewables are being deployed at an increasing rate. These developments are putting flexibility measures such as dynamic load management in the spotlight of the energy transition. Flexibility measures must consider EV charging, as it can introduce grid constraints: in Germany, the cumulative power of all EV onboard chargers amounts to ca. 120 GW, while the German peak load amounts to only 80 GW. Commercial operations have strong incentives to optimize charging and flatten peak loads in real time, given that the highest quarter-hour can determine the power-related energy bill, and that a blown fuse due to overloading can halt operations. Increasing research effort has therefore gone into real-time-capable optimization methods. Reinforcement Learning (RL) has gained particular attention due to its versatility, performance, and real-time capabilities. This thesis implements such an approach and introduces FleetRL as a realistic RL environment for EV charging, with a focus on commercial vehicle fleets. Through its implementation, it was found that RL saved up to 83% compared to static benchmarks, and that grid overloading was entirely avoided in some scenarios by sacrificing small portions of SOC, or by delaying the charging process. Linear optimization with one year of perfect knowledge outperformed RL, but reached its practical limits in one use case, where the solver could not find a feasible solution. Overall, this thesis makes a strong case for RL-based EV charging. It further provides a foundation that can be built upon: a modular, open-source software framework that integrates an MDP model, schedule generation, and non-linear battery degradation.
    The electrification of the transport sector is a necessary but challenging task. Combined with growing photovoltaic production and other renewable energy sources, it creates a dilemma for the power grid that calls for extensive flexibility measures. These measures must include EV charging, a phenomenon that has led to unprecedented load peaks. From a commercial perspective, the incentive is to optimize the charging process and ensure uptime. Research has focused on real-time optimization methods such as Deep Reinforcement Learning (DRL). This thesis introduces FleetRL as a new RL environment for EV charging of commercial fleets. By applying the framework, it was found that RL saved up to 83% compared to static benchmarks, and that grid overloading could be entirely avoided in most scenarios. Linear optimization outperformed RL but reached its limits in tightly constrained use cases. Having found a positive business case for each commercial use case, this thesis makes a strong argument for RL-based charging and provides a foundation for future work through practical insights and a modular, open-source software framework.
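A minimal Gymnasium-style environment sketch illustrates the kind of fleet-charging MDP described above. The interface, prices, fuse limit, charger rating, and 60 kWh pack size below are all illustrative assumptions and not FleetRL's actual API.

```python
class FleetChargingEnv:
    """Minimal EV-fleet charging environment sketch (not the FleetRL API).

    State:  hour of day and per-vehicle state of charge (SOC).
    Action: charging power fraction in [0, 1] for each vehicle.
    Reward: negative energy cost, plus a penalty when total load
            exceeds the site's fuse limit (the grid constraint above).
    """

    def __init__(self, n_evs=3, fuse_kw=22.0, charger_kw=11.0):
        self.n_evs, self.fuse_kw, self.charger_kw = n_evs, fuse_kw, charger_kw
        # Assumed time-of-use tariff in EUR/kWh, one entry per hour of the day.
        self.price = [0.20] * 8 + [0.40] * 12 + [0.20] * 4
        self.reset()

    def reset(self):
        self.hour = 0
        self.soc = [0.3] * self.n_evs          # normalized SOC per EV
        return (self.hour, tuple(self.soc))

    def step(self, action):                    # action: power fraction per EV
        load_kw = sum(a * self.charger_kw for a in action)
        energy_kwh = load_kw * 1.0             # one-hour timestep
        reward = -energy_kwh * self.price[self.hour]
        if load_kw > self.fuse_kw:             # blown-fuse penalty
            reward -= 100.0
        self.soc = [min(1.0, s + a * self.charger_kw / 60.0)  # 60 kWh packs
                    for s, a in zip(self.soc, action)]
        self.hour += 1
        done = self.hour >= 24
        return (self.hour, tuple(self.soc)), reward, done
```

Driving all three chargers at full power exceeds the 22 kW fuse limit and incurs the large penalty, which is precisely the behavior an RL agent learns to avoid by sacrificing SOC or delaying charging.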

    A Novel Reinforcement Learning-Optimization Approach for Integrating Wind Energy to Power System with Vehicle-to-Grid Technology

    High integration of intermittent renewable energy sources (RES), specifically wind power, has created complexities in power system operations due to their limited controllability and predictability. In addition, large fleets of Electric Vehicles (EVs) are expected to have a substantial impact on electricity consumption, contributing to the volatility. In this dissertation, a well-coordinated smart charging approach is developed that utilizes the flexibility of EV owners, whereby EVs are used as distributed energy storage units and flexible loads to absorb the fluctuations in wind power output in a vehicle-to-grid (V2G) setup. Challenges to owner participation in V2G, such as battery degradation and uncertainty about unexpected trips, are also addressed using an interactive mechanism in the smart grid. First, a static deterministic model is formulated using multi-objective mixed-integer quadratic programming (MIQP), assuming parameters are known a day ahead of time. Subsequently, a real-time dynamic scheduling formulation is provided using a rolling horizon with expected-value approximation. Simulation experiments demonstrate a significant increase in wind utilization and reductions in charging cost and battery degradation compared to an uncontrolled charging scenario. Formulating the scheduling problem of the EV-wind integrated power system using conventional stochastic programming (SP) approaches is challenging due to the presence of many uncertain parameters with unknown underlying distributions, such as wind, price, and the different commuting patterns of EV owners.
To alleviate the problem, a model-free Reinforcement Learning (RL) algorithm integrated with deterministic optimization is proposed that can be applied to many multi-stage stochastic problems while mitigating some of the challenges of conventional SP methods (e.g., large scenario trees, computational complexity) as well as the challenges of model-free RL (e.g., slow convergence, unstable learning in dynamic environments). The simulation results of applying the combined approach to the EV scheduling problem demonstrate the effectiveness of the RL-optimization method in solving the multi-stage EV charge/discharge scheduling problem. The proposed methods perform better than standard RL approaches (e.g., DDQN) in terms of convergence speed and finding the global optimum. Moreover, to address the curse-of-dimensionality issue in RL with large action-state spaces, a heuristic EV fleet charging/discharging scheme is combined with the RL-optimization approach to solve the EV scheduling problem for a large number of EVs.
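The heuristic fleet scheme mentioned above, which translates an aggregate power decision into per-vehicle setpoints, can be sketched as follows. The lowest-SOC-first rule and the 11 kW per-charger limit are illustrative assumptions, not the dissertation's actual scheme.

```python
def dispatch(total_kw, socs, charger_kw=11.0):
    """Split an aggregate charging-power decision across individual EVs,
    serving the neediest (lowest-SOC) vehicles first. This stands in for
    the deterministic stage of the combined RL-optimization approach,
    which keeps the RL action space small (one aggregate decision)
    regardless of fleet size."""
    order = sorted(range(len(socs)), key=lambda i: socs[i])
    alloc = [0.0] * len(socs)
    remaining = total_kw
    for i in order:
        alloc[i] = min(charger_kw, remaining)  # respect per-charger limit
        remaining -= alloc[i]
        if remaining <= 0:
            break
    return alloc
```

For example, `dispatch(15.0, [0.9, 0.2, 0.5])` gives the emptiest vehicle its full 11 kW and the next-emptiest the remaining 4 kW, leaving the nearly full vehicle idle.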