
    Active network management for electrical distribution systems: problem formulation, benchmark, and approximate solution

    With the increasing share of renewable and distributed generation in electrical distribution systems, Active Network Management (ANM) becomes a valuable option for a distribution system operator to operate its system in a secure and cost-effective way without relying solely on network reinforcement. ANM strategies are short-term policies that control the power injected by generators and/or taken off by loads in order to avoid congestion or voltage issues. Advanced ANM strategies require the system operator to solve large-scale optimal sequential decision-making problems under uncertainty: decisions taken at a given moment constrain the future decisions that can be taken, and uncertainty must be explicitly accounted for because neither demand nor generation can be accurately forecast. We first formulate the ANM problem, which, in addition to being sequential and uncertain, has a nonlinear nature stemming from the power flow equations and a discrete nature arising from the activation of power modulation signals. This ANM problem is then cast as a stochastic mixed-integer nonlinear program, as well as second-order cone and linear counterparts, for which we provide quantitative results using state-of-the-art solvers and perform a sensitivity analysis over the size of the system, the amount of available flexibility, and the number of scenarios considered in the deterministic equivalent of the stochastic program. To foster further research on this problem, we make available at http://www.montefiore.ulg.ac.be/~anm/ three test beds based on distribution networks of 5, 33, and 77 buses. These test beds contain a simulator of the distribution system, with stochastic models for the generation and consumption devices, and callbacks to implement and test various ANM strategies.
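    To illustrate the scenario-based deterministic equivalent mentioned in the abstract, the sketch below sets up a toy two-stage stochastic linear program: a first-stage decision contracts curtailable capacity y, and per-scenario recourse curtails generation x_s so that net injection respects a line limit. All data (g, p, C, c_y, c_x) are hypothetical and unrelated to the paper's 5-, 33-, and 77-bus test beds; this is a minimal sketch of the linear counterpart, not the authors' formulation.

    # A minimal two-stage stochastic LP sketch (hypothetical data; not the
    # paper's benchmark). First stage: contract y MW of curtailable capacity;
    # second stage: curtail x_s MW in scenario s so that the net injection
    # g_s - x_s stays within the line capacity C.
    import numpy as np
    from scipy.optimize import linprog

    g = np.array([8.0, 12.0, 15.0])   # renewable output per scenario (MW), assumed
    p = np.array([0.5, 0.3, 0.2])     # scenario probabilities, assumed
    C = 10.0                          # line capacity (MW), assumed
    c_y, c_x = 1.0, 5.0               # contracting and activation costs, assumed

    S = len(g)
    # Decision vector: [y, x_1, ..., x_S]
    c = np.concatenate(([c_y], p * c_x))          # expected-cost objective
    A_ub = np.zeros((2 * S, 1 + S))
    b_ub = np.zeros(2 * S)
    for s in range(S):
        A_ub[s, 1 + s], A_ub[s, 0] = 1.0, -1.0    # x_s - y <= 0
        A_ub[S + s, 1 + s] = -1.0                 # -x_s <= C - g_s
        b_ub[S + s] = C - g[s]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + S))
    print(res.x)   # optimal [y, x_1, ..., x_S]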

    Reinforcement Learning for the Unit Commitment Problem

    In this work we solve the day-ahead unit commitment (UC) problem by formulating it as a Markov decision process (MDP) and finding a low-cost policy for generation scheduling. We present two reinforcement learning algorithms and devise a third one. We compare our results to previous work that uses simulated annealing (SA) and show a 27% improvement in operation costs, with a running time of 2.5 minutes (compared to 2.5 hours for the existing state of the art).
    Comment: Accepted and presented at IEEE PES PowerTech, Eindhoven 2015, paper ID 46273
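    As a rough illustration of the UC-as-MDP idea, the sketch below runs generic tabular Q-learning on a toy two-unit system with merit-order dispatch and a lost-load penalty. The demand profile, cost model, and algorithm are assumptions made for illustration; they are not the paper's algorithms or test system.

    # Tabular Q-learning on a toy unit commitment MDP (illustrative only).
    # State: (hour, current on/off status); action: next on/off status;
    # cost: start-up + merit-order dispatch + unserved-energy penalty.
    import random

    demand  = [60, 100, 140, 100]          # hourly demand (MW), assumed
    cap     = [80, 80]                     # unit capacities (MW), assumed
    fuel    = [20, 30]                     # fuel costs (EUR/MWh), assumed
    startup = [500, 300]                   # start-up costs (EUR), assumed
    VOLL    = 1000                         # value of lost load (EUR/MWh)
    ACTIONS = [(0, 0), (0, 1), (1, 0), (1, 1)]

    def stage_cost(prev, act, d):
        # Start-up cost, then dispatch committed units cheapest-first
        cost = sum(startup[i] for i in range(2) if act[i] and not prev[i])
        for i in sorted(range(2), key=lambda i: fuel[i]):
            if act[i]:
                g = min(cap[i], d)
                cost += fuel[i] * g
                d -= g
        return cost + VOLL * d             # penalise any unserved demand

    Q = {}                                 # Q[((hour, prev), action)]
    alpha, eps = 0.1, 0.2
    for episode in range(20000):
        prev = (0, 0)
        for t, d in enumerate(demand):
            s = (t, prev)
            a = random.choice(ACTIONS) if random.random() < eps else \
                min(ACTIONS, key=lambda a: Q.get((s, a), 0.0))
            c = stage_cost(prev, a, d)
            nxt = (t + 1, a)
            future = 0.0 if t + 1 == len(demand) else \
                min(Q.get((nxt, b), 0.0) for b in ACTIONS)
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (c + future - q)   # cost-minimising update
            prev = a

    # Extract the greedy commitment schedule from the learned Q-table
    prev, plan = (0, 0), []
    for t in range(len(demand)):
        a = min(ACTIONS, key=lambda a: Q.get(((t, prev), a), 0.0))
        plan.append(a)
        prev = a
    print(plan)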

    Mixed-Integer Convex Nonlinear Optimization with Gradient-Boosted Trees Embedded

    Decision trees usefully represent sparse, high-dimensional, and noisy data. Having learned a function from such data, we may thereafter want to integrate the function into a larger decision-making problem, e.g., for picking the best chemical process catalyst. We study a large-scale, industrially relevant mixed-integer nonlinear nonconvex optimization problem involving both gradient-boosted trees and penalty functions mitigating risk. This mixed-integer optimization problem with convex penalty terms applies broadly to optimizing pre-trained regression tree models. Decision makers may wish to optimize discrete models to repurpose legacy predictive models, or they may wish to optimize a discrete model that particularly well represents a data set. We develop several heuristic methods to find feasible solutions, and an exact branch-and-bound algorithm leveraging structural properties of the gradient-boosted trees and penalty functions. We computationally test our methods on a concrete mixture design instance and a chemical catalysis industrial instance.
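    The sketch below illustrates the idea of optimizing over a pre-trained gradient-boosted tree ensemble with a convex penalty that discourages straying from the training data. It uses a simple multi-start coordinate-search heuristic rather than the paper's exact branch-and-bound algorithm; the data, penalty weight, and search box are hypothetical.

    # Heuristic optimization over a pre-trained gradient-boosted tree model
    # plus a convex quadratic penalty (illustrative assumptions throughout).
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(500, 3))                       # toy inputs
    y = (X ** 2).sum(axis=1) + 0.1 * rng.standard_normal(500)   # toy response
    model = GradientBoostingRegressor(n_estimators=100).fit(X, y)

    mu = X.mean(axis=0)
    lam = 0.5                              # penalty weight, assumed

    def objective(x):
        # Tree-ensemble prediction plus a convex "distance to data" penalty
        return model.predict(x.reshape(1, -1))[0] + lam * np.sum((x - mu) ** 2)

    # Random multi-start plus greedy coordinate search over the box [-2, 2]^3
    best_x, best_f = None, np.inf
    for _ in range(20):
        x = rng.uniform(-2, 2, size=3)
        for _ in range(50):
            for i in range(3):
                for step in (-0.1, 0.1):
                    cand = x.copy()
                    cand[i] = np.clip(cand[i] + step, -2, 2)
                    if objective(cand) < objective(x):
                        x = cand
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    print(best_x, best_f)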