
    Reinforcement Learning for the Unit Commitment Problem

    In this work we solve the day-ahead unit commitment (UC) problem by formulating it as a Markov decision process (MDP) and finding a low-cost policy for generation scheduling. We present two reinforcement learning algorithms and devise a third. We compare our results to previous work that uses simulated annealing (SA) and show a 27% improvement in operating costs, with a running time of 2.5 minutes (compared to 2.5 hours for the existing state of the art). Comment: Accepted and presented at IEEE PES PowerTech, Eindhoven 2015, paper ID 46273.
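
    As a rough illustration of the MDP framing described above (state = time period, action = a binary on/off commitment vector, reward = negative operating cost), here is a minimal, self-contained Python sketch. The toy generator data, load-shedding penalty and one-step greedy rollout are illustrative assumptions, not the paper's actual model or algorithms.

        # Toy MDP view of day-ahead unit commitment (illustrative only).
        import itertools
        import numpy as np

        NUM_GENS = 3
        GEN_CAPACITY = np.array([100.0, 80.0, 50.0])    # MW
        MARGINAL_COST = np.array([20.0, 35.0, 60.0])    # $/MWh
        DEMAND = np.array([120.0, 180.0, 150.0, 90.0])  # MW per period
        SHED_PENALTY = 1000.0                           # $/MWh of unmet demand

        def step(period, commitment):
            """One MDP transition: commit a subset of units for this period.
            State: period index (demand is known day-ahead).
            Action: binary on/off vector over generators.
            Reward: negative cost (cheapest-first dispatch plus shedding penalty)."""
            on = np.asarray(commitment, dtype=bool)
            served = min(GEN_CAPACITY[on].sum(), DEMAND[period])
            cost, remaining = 0.0, served
            for cap, mc in sorted(zip(GEN_CAPACITY[on], MARGINAL_COST[on]),
                                  key=lambda x: x[1]):
                take = min(cap, remaining)
                cost += take * mc
                remaining -= take
            cost += SHED_PENALTY * (DEMAND[period] - served)
            return period + 1, -cost, period + 1 == len(DEMAND)

        # Greedy rollout: pick the cheapest commitment each period (a stand-in
        # for the learned policies the paper trains).
        total_cost, period = 0.0, 0
        while True:
            best = max(itertools.product([0, 1], repeat=NUM_GENS),
                       key=lambda a: step(period, a)[1])
            period, reward, done = step(period, best)
            total_cost -= reward
            if done:
                break
        print(f"Greedy day-ahead cost: {total_cost:.0f} $")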

    Reinforcement learning and A* search for the unit commitment problem

    Previous research has combined model-free reinforcement learning with model-based tree search methods to solve the unit commitment problem with stochastic demand and renewables generation. This approach was limited to shallow search depths and suffered from significant variability in run time across problem instances of varying complexity. To mitigate these issues, we extend this methodology to more advanced search algorithms based on A* search. First, we develop a problem-specific heuristic based on priority list unit commitment methods and apply it in Guided A* search, reducing run time by up to 94% with negligible impact on operating costs. In addition, we address the run time variability issue by employing a novel anytime algorithm, Guided IDA*, replacing the fixed search depth parameter with a time budget constraint. We show that Guided IDA* mitigates the run time variability of previous guided tree search algorithms and enables further operating cost reductions of up to 1%.
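
    To make the Guided IDA* idea above concrete, the following is a generic sketch of an anytime iterative-deepening A* loop in which a wall-clock time budget replaces the fixed search depth. The node, successor and heuristic interfaces are assumptions for illustration; the authors' implementation and their priority-list heuristic are not reproduced here.

        import time

        def guided_idastar(root, successors, heuristic, is_goal, time_budget_s=1.0):
            """Run successive depth-first searches bounded by an f-cost threshold,
            raising the threshold each pass, until a goal is found or the time
            budget is exhausted. Anytime: returns the best (cost, path) so far."""
            best = (float("inf"), None)
            bound = heuristic(root)
            deadline = time.monotonic() + time_budget_s

            def dfs(node, g, bound, path):
                nonlocal best
                f = g + heuristic(node)
                if f > bound or time.monotonic() > deadline:
                    return f
                if is_goal(node):
                    if g < best[0]:
                        best = (g, list(path))
                    return g
                next_bound = float("inf")
                for child, step_cost in successors(node):
                    path.append(child)
                    next_bound = min(next_bound, dfs(child, g + step_cost, bound, path))
                    path.pop()
                return next_bound

            while time.monotonic() < deadline:
                t = dfs(root, 0.0, bound, [root])
                if best[1] is not None or t == float("inf"):
                    break
                bound = t  # raise the cost bound for the next deepening pass
            return best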

    Reinforcement Learning and Tree Search Methods for the Unit Commitment Problem

    The unit commitment (UC) problem, which determines operating schedules of generation units to meet demand, is a fundamental task in power systems operation. Existing UC methods using mixed-integer programming are not well-suited to highly stochastic systems. Approaches which more rigorously account for uncertainty could yield large reductions in operating costs by reducing spinning reserve requirements; operating power stations at higher efficiencies; and integrating greater volumes of variable renewables. A promising approach to solving the UC problem is reinforcement learning (RL), a methodology for optimal decision-making which has been used to conquer long-standing grand challenges in artificial intelligence. This thesis explores the application of RL to the UC problem and addresses challenges including robustness under uncertainty; generalisability across multiple problem instances; and scaling to larger power systems than previously studied. To tackle these issues, we develop guided tree search, a novel methodology combining model-free RL and model-based planning. The UC problem is formalised as a Markov decision process and we develop an open-source environment based on real data from Great Britain's power system to train RL agents. In problems of up to 100 generators, guided tree search is shown to be competitive with deterministic UC methods, reducing operating costs by up to 1.4%. An advantage of RL is that the framework can be easily extended to incorporate considerations important to power systems operators such as robustness to generator failure, wind curtailment or carbon prices. When generator outages are considered, guided tree search saves over 2% in operating costs as compared with methods using conventional N-x reserve criteria.
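
    A toy sketch of the "guided" part of guided tree search described above: a learned policy prunes the branching factor of a model-based, depth-limited lookahead, and the planner returns the first action on the cheapest path it finds. The policy, model and parameters here are placeholders rather than the thesis implementation.

        import heapq

        def guided_lookahead(state, model, policy, depth=4, top_k=2):
            """Best-first expansion of a depth-limited tree. At every node only the
            top_k actions ranked by the RL policy are expanded; the cheapest path to
            the depth limit decides which action to commit to now."""
            counter = 0  # tie-breaker so heapq never compares states directly
            frontier = [(0.0, counter, state, None, depth)]
            best_action, best_cost = None, float("inf")
            while frontier:
                cost, _, s, first, d = heapq.heappop(frontier)
                if d == 0:
                    if cost < best_cost:
                        best_cost, best_action = cost, first
                    continue
                ranked = sorted(policy(s).items(), key=lambda kv: -kv[1])[:top_k]
                for action, _prob in ranked:
                    next_s, step_cost = model(s, action)  # model-based transition
                    counter += 1
                    heapq.heappush(frontier, (cost + step_cost, counter, next_s,
                                              action if first is None else first, d - 1))
            return best_action, best_cost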

    Real-time scheduling of renewable power systems through planning-based reinforcement learning

    The growth of renewable energy sources has posed significant challenges to traditional power scheduling. It is difficult for operators to obtain accurate day-ahead forecasts of renewable generation, so future scheduling systems will need to make real-time scheduling decisions that align with ultra-short-term forecasts. Restricted by their computation speed, traditional optimization-based methods cannot solve this problem. Recent developments in reinforcement learning (RL) have demonstrated the potential to meet this challenge. However, existing RL methods are inadequate in terms of constraint complexity, algorithm performance, and environment fidelity. We are the first to propose a systematic solution based on a state-of-the-art reinforcement learning algorithm and a real power grid environment. The proposed approach enables planning and finer-time-resolution adjustments of power generators, including unit commitment and economic dispatch, thus increasing the grid's ability to admit more renewable energy. The well-trained scheduling agent significantly reduces renewable curtailment and load shedding, which arise from traditional scheduling's reliance on inaccurate day-ahead forecasts. High-frequency control decisions exploit the existing units' flexibility, reducing the power grid's dependence on hardware upgrades and saving investment and operating costs, as demonstrated in experimental results. This research exhibits the potential of reinforcement learning in promoting low-carbon and intelligent power systems and represents a solid step toward sustainable electricity generation. Comment: 12 pages, 7 figures.
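
    The abstract above describes a receding-horizon control loop: at every interval the agent sees an ultra-short-term forecast and adjusts generator set-points, rather than committing to a day-ahead plan. The sketch below shows that loop in miniature; the forecast model, the hand-written "policy" and all numbers are illustrative placeholders, not the paper's trained agent or grid environment.

        import numpy as np

        rng = np.random.default_rng(0)
        HORIZON = 96  # 15-minute intervals in one day
        WINDOW = 4    # ultra-short-term lookahead (1 hour)

        def forecast_net_load(t):
            """Placeholder ultra-short-term forecast of net load (MW)."""
            base = 300 + 100 * np.sin(2 * np.pi * np.arange(t, t + WINDOW) / HORIZON)
            return base + rng.normal(0, 5, WINDOW)

        def policy(setpoints, forecast):
            """Stand-in for a trained RL policy: nudge total output toward the next
            forecast interval, subject to a crude ramp limit."""
            delta = np.clip(forecast[0] - setpoints.sum(), -30.0, 30.0)
            return setpoints + delta / len(setpoints)

        setpoints = np.array([150.0, 100.0, 50.0])  # current MW output of three units
        curtailment = shedding = 0.0
        for t in range(HORIZON):
            fc = forecast_net_load(t)
            setpoints = policy(setpoints, fc)
            net_load = fc[0] + rng.normal(0, 3)  # realised net load differs from forecast
            imbalance = setpoints.sum() - net_load
            curtailment += max(imbalance, 0.0)   # surplus energy (would be curtailed)
            shedding += max(-imbalance, 0.0)     # unmet demand (would be shed)
        print(f"curtailment={curtailment:.0f} MW-intervals, shedding={shedding:.0f} MW-intervals")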

    Why Training Doesn't Stick: Who is to Blame?

    This article, "Why Training Doesn't Stick," presupposes that it does not, and that, as a matter of course, it is a waste of precious dollars to send someone to a workshop or a seminar for training. The assumption is that, soon after training, the trainee will be back to doing things the old way. While acknowledging that training does at least sometimes stick, the author has come to understand that the conditions under which training is successful are so specific and so rarely met that successful training is the exception rather than the rule. "Who is to blame?" The author answers that question by explaining how we can turn the tables and make "training that sticks" the rule rather than the exception. Published or submitted for publication.