
    Noisy Optimization: Convergence with a Fixed Number of Resamplings

    It is known that evolution strategies in continuous domains might not converge in the presence of noise. It is also known that, under mild assumptions, and using an increasing number of resamplings, one can mitigate the effect of additive noise and recover convergence. We show new sufficient conditions for the convergence of an evolutionary algorithm with a constant number of resamplings; in particular, we get fast rates (log-linear convergence) provided that the variance decreases around the optimum slightly faster than in the so-called multiplicative noise model.
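
    A minimal sketch of the resampling idea (illustrative only, not the paper's algorithm; the sphere objective, the resampling count k, and the step-size rule below are assumptions): a (1+1) evolution strategy in which every point is evaluated as the average of a fixed number of noisy samples.

```python
import random

def noisy_f(x, noise_std=1.0):
    """Sphere function plus additive Gaussian noise (toy objective)."""
    return sum(xi * xi for xi in x) + random.gauss(0.0, noise_std)

def averaged(f, x, k):
    """Fixed number of resamplings: average k noisy evaluations of x."""
    return sum(f(x) for _ in range(k)) / k

def one_plus_one_es(dim=5, k=10, sigma0=1.0, iters=2000):
    x = [random.uniform(-1, 1) for _ in range(dim)]
    sigma = sigma0
    fx = averaged(noisy_f, x, k)
    for _ in range(iters):
        y = [xi + sigma * random.gauss(0, 1) for xi in x]
        fy = averaged(noisy_f, y, k)
        if fy <= fx:                   # accepted: move and expand step size
            x, fx, sigma = y, fy, sigma * 1.1
        else:                          # rejected: shrink (1/5th-style rule)
            sigma *= 0.9
    return x

print(one_plus_one_es())
```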

    Algorithm Portfolios for Noisy Optimization

    Noisy optimization is the optimization of objective functions corrupted by noise. A portfolio of solvers is a set of solvers equipped with an algorithm selection tool for distributing the computational power among them. Portfolios are widely and successfully used in combinatorial optimization. In this work, we study portfolios of noisy optimization solvers. We obtain mathematically proven performance guarantees (in the sense that the portfolio performs nearly as well as the best of its solvers) with an ad hoc portfolio algorithm dedicated to noisy optimization. A somewhat surprising result is that it is better to compare solvers with some lag, i.e., to base the current recommendation of the best solver on performance earlier in the run. An additional finding is a principled method for distributing the computational power among the solvers in the portfolio. (In Annals of Mathematics and Artificial Intelligence, Springer Verlag.)
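
    A hedged sketch of lag-based portfolio selection (the uniform budget split, the random-search solvers, and the re-evaluation with fresh noisy samples are assumptions for illustration, not the paper's construction):

```python
import random

def noisy_sphere(x):
    return sum(v * v for v in x) + random.gauss(0, 1)

class RandomSearchSolver:
    """Trivial noisy-optimization solver, used only to make the sketch run."""
    def __init__(self, dim, scale):
        self.dim, self.scale = dim, scale
        self.best_x, self.best_f = [0.0] * dim, float("inf")

    def step(self):
        x = [random.gauss(0, self.scale) for _ in range(self.dim)]
        f = noisy_sphere(x)
        if f < self.best_f:
            self.best_x, self.best_f = x, f

    def recommend(self):
        return self.best_x

def run_portfolio(solvers, budget, lag=50):
    history = [[] for _ in solvers]
    for t in range(budget):
        i = t % len(solvers)            # equal split of the evaluation budget
        solvers[i].step()
        history[i].append(list(solvers[i].recommend()))
    # Lagged comparison: judge each solver by the recommendation it made
    # `lag` steps ago, re-evaluated with fresh noisy samples.
    def score(i):
        x = history[i][max(0, len(history[i]) - lag)]
        return sum(noisy_sphere(x) for _ in range(20)) / 20
    best = min(range(len(solvers)), key=score)
    return solvers[best].recommend()

solvers = [RandomSearchSolver(dim=3, scale=s) for s in (0.1, 1.0, 10.0)]
print(run_portfolio(solvers, budget=3000))
```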

    Multivariate bias reduction in capacity expansion planning

    The optimization of capacities in large-scale power systems is a stochastic problem, because the need for storage and connections (i.e. exchange capacities) varies a lot from one week to another (e.g. power generation is subject to the vagaries of wind) and from one winter to another (e.g. water inflows due to snow melting). It is usually tackled through sample average approximation, i.e. assuming that the system which is optimal on average over the last 40 years (corrected for climate change) is also approximately optimal in general. However, in many cases the data are high-dimensional; the sample complexity, i.e. the amount of data necessary for a relevant optimization of capacities, increases linearly with the number of parameters, and data can be scarce at the relevant scale. This leads to an underestimation of capacities. We suggest the use of bias correction in capacity estimation. The present paper investigates the importance of the bias phenomenon and the efficiency of bias correction tools (jackknife, bootstrap, possibly combined with penalized cross-validation), including new ones (dimension reduction tools, the margin method).
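
    A minimal sketch of one of the cited bias-correction tools, the jackknife, applied to a generic estimator (the toy variance estimator below is an assumption, not the paper's capacity-planning pipeline):

```python
import random

def jackknife_bias_corrected(data, estimator):
    """Jackknife bias correction: theta - (n-1) * (mean of leave-one-out - theta)."""
    n = len(data)
    theta_full = estimator(data)
    loo = [estimator(data[:i] + data[i + 1:]) for i in range(n)]  # leave-one-out
    bias = (n - 1) * (sum(loo) / n - theta_full)
    return theta_full - bias

# Toy example: the plug-in variance estimator is biased downward by a factor
# (n-1)/n, which the jackknife removes; the abstract's point is that an
# analogous downward bias affects capacity estimates.
sample = [random.gauss(0, 1) for _ in range(30)]
def plugin_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)
print(plugin_var(sample), jackknife_bias_corrected(sample, plugin_var))
```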

    Direct model predictive control: A theoretical and numerical analysis

    This paper focuses on online control policies applied to power systems management. In this study, the power system problem is formulated as a stochastic decision process with a large constrained action space, high stochasticity, and dozens of state variables. Direct Model Predictive Control has previously been proposed to encompass a large class of stochastic decision-making problems. It is a hybrid model which merges the properties of two different dynamic optimization methods, Model Predictive Control and Stochastic Dual Dynamic Programming. In this paper, we prove that Direct Model Predictive Control reaches an optimal policy for a wider class of decision processes than those solved by Model Predictive Control (suboptimal by nature), Stochastic Dynamic Programming (which needs a moderate size of state space) or Stochastic Dual Dynamic Programming (which requires convexity of Bellman values and a moderate complexity of the random value state). The algorithm is tested on a multiple-battery management problem and two hydroelectric problems. Direct Model Predictive Control clearly outperforms Model Predictive Control on the tested problems. (Power Systems Computation Conference, 2018.)
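
    A hedged toy sketch of the "direct" idea as described above: wrap an MPC-style one-step rule in a parameter and choose that parameter by direct simulation of the whole process, rather than by solving the model exactly. The storage dynamics, costs, and names below are toy assumptions, not the paper's model.

```python
import random

def simulate(theta, horizon=100, capacity=10.0, seed=0):
    """Simulate a toy storage problem under a one-step rule parameterized by
    theta, interpreted as a marginal value placed on stored energy."""
    rng = random.Random(seed)
    stock, cost = capacity / 2, 0.0
    for _ in range(horizon):
        demand = rng.uniform(0.0, 2.0)
        price = rng.uniform(0.5, 1.5)
        # MPC-like rule: discharge when the market price exceeds theta,
        # otherwise buy from the market and recharge the storage.
        if price > theta and stock >= demand:
            stock -= demand
        else:
            cost += price * demand
            stock = min(capacity, stock + 1.0)
    return cost

# "Direct" optimization of theta by crude grid search over full simulations.
best_theta = min((t / 10 for t in range(5, 16)),
                 key=lambda th: sum(simulate(th, seed=s) for s in range(20)))
print(best_theta)
```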

    Depth, balancing, and limits of the Elo model

    Much work has been devoted to the computational complexity of games. However, such measures are not necessarily relevant for estimating complexity in human terms. Therefore, human-centered measures have been proposed, e.g. the depth. This paper discusses the depth of various games and extends it to a continuous measure. We provide new depth results and present tools (given-first-move, pie rule, size extension) for increasing depth. We also use these measures for analyzing games and opening moves in Y, NoGo, and Killall Go, and for studying the effect of pie rules.
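
    For concreteness, one common way to make "depth" quantitative with the Elo model: count how many Elo classes separate a beginner from the best player, where one class is the rating gap at which the stronger side wins with some fixed probability. The ratings and the threshold p = 0.6 below are made-up example values, not figures from the paper.

```python
import math

def elo_gap(p):
    """Elo difference at which the stronger player wins with probability p,
    inverting the Elo win expectancy 1 / (1 + 10**(-gap/400))."""
    return 400 * math.log10(p / (1 - p))

def depth(elo_beginner, elo_best, p=0.6):
    """Number of skill classes between a beginner and the best player."""
    return (elo_best - elo_beginner) / elo_gap(p)

print(depth(elo_beginner=800, elo_best=2800))   # ~28 classes with these numbers
```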

    Parallel Evolutionary Algorithms Performing Pairwise Comparisons

    We study mathematically and experimentally the convergence rate of differential evolution and particle swarm optimization for simple unimodal functions. Due to parallelization concerns, the focus is on lower bounds on the runtime, i.e. upper bounds on the speed-up, as a function of the population size. Two cases are particularly relevant: a population size of the same order of magnitude as the dimension, and larger population sizes. We use the branching factor as a tool for proving bounds and obtain, as upper bounds, a linear speed-up for a population size similar to the dimension, and a logarithmic speed-up for larger population sizes. We then propose parametrizations for differential evolution and particle swarm optimization that reach these bounds.
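
    A hedged sketch of why only a logarithmic speed-up survives for large populations, using standard (1,λ)-ES progress-rate reasoning on the sphere (our notation, not necessarily the paper's branching-factor proof):

```latex
% Standard progress-rate heuristic on the sphere: a (1,\lambda)-ES with
% optimal step-size achieves a normalized per-generation progress of only
\[
  \varphi(\lambda) \;=\; \Theta(\log \lambda) \qquad (\lambda \to \infty),
\]
% while a sequential (1+1)-ES already achieves \Theta(1) normalized progress
% per single evaluation. If the \lambda offspring evaluations of one
% generation run in parallel in one unit of wall-clock time, the speed-up is
\[
  \frac{\varphi(\lambda)}{\Theta(1)} \;=\; O(\log \lambda),
\]
% matching the abstract's logarithmic regime for populations much larger
% than the dimension.
```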

    Traitement de l'incertitude en optimisation (Uncertainty handling in optimization)

    This research is motivated by the need to find new methods to optimize a power system. In this field, traditional management and investment methods are limited when facing the highly stochastic problems that arise when renewable energies are introduced at a large scale. After introducing the various facets of power system optimization, we discuss the continuous black-box noisy optimization problem and then some noisy cases with extra features.

    Regarding the contribution to continuous black-box noisy optimization, we are interested in finding lower and upper bounds on the rate of convergence of various families of algorithms. We study the convergence of comparison-based algorithms, including Evolution Strategies, under different noise strengths (small, moderate, and large). We also extend the convergence results to the case of value-based algorithms when dealing with small noise. Finally, we propose a selection tool to choose, among several noisy optimization algorithms, the best one for a given problem.

    Regarding the contribution to noisy cases with additional constraints, the delicate cases, we introduce concepts from reinforcement learning, decision theory, and statistics. We aim to propose optimization methods that are closer to reality (in terms of modelling) and more robust. We also look for less conservative power system reliability criteria.
