
    Evolutionary algorithms for dynamic optimization problems: workshop preface

    Copyright © 2005 ACM

    Reliability-based optimization for multiple constraints with evolutionary algorithms

    In this paper, we combine reliability-based optimization with a multi-objective evolutionary algorithm to handle uncertainty in decision variables and parameters. This work extends a previous study by the second author and his research group to compute a multi-constraint reliability more accurately. This means that the overall reliability of a solution with respect to all constraints is examined, instead of computing the reliability of only one critical constraint. First, we present a brief introduction to the so-called 'structural reliability' aspects. Thereafter, we introduce a method for identifying inactive constraints according to the reliability evaluation. With this method, we show that an identical solution can be achieved with fewer constraint evaluations. Furthermore, we apply our approach to a number of problems, including a real-world car side impact design problem, to illustrate our method.
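
    A minimal sketch of the multi-constraint reliability idea, assuming Gaussian uncertainty on the decision variables and Monte Carlo estimation; the constraint functions, noise level, and sample sizes below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def reliability(x, constraints, sigma=0.05, n_samples=10_000, rng=None):
    """Estimate P(all g_i(x + noise) <= 0) under Gaussian noise on x."""
    rng = rng or np.random.default_rng(0)
    samples = x + sigma * rng.standard_normal((n_samples, len(x)))
    feasible = np.ones(n_samples, dtype=bool)
    for g in constraints:
        feasible &= np.apply_along_axis(g, 1, samples) <= 0.0
    return feasible.mean()

# Two illustrative constraints g_i(x) <= 0; the overall reliability accounts
# for both simultaneously rather than only the single most critical one.
g1 = lambda x: x[0] + x[1] - 1.0
g2 = lambda x: 0.8 - x[0]
print(reliability(np.array([0.85, 0.10]), [g1, g2]))
```

    Since the joint feasibility event is contained in each individual feasibility event, the overall reliability is never larger than the reliability of the single most critical constraint, which is why examining all constraints matters.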

    Genetic algorithms with elitism-based immigrants for changing optimization problems

    Copyright © Springer-Verlag Berlin Heidelberg 2007.
    Addressing dynamic optimization problems has been a challenging task for the genetic algorithm community. Over the years, several approaches have been developed for genetic algorithms to enhance their performance in dynamic environments. One major approach is to maintain the diversity of the population, e.g., via random immigrants. This paper proposes an elitism-based immigrants scheme for genetic algorithms in dynamic environments. In the scheme, the elite from the previous generation is used as the base to create immigrants via mutation, which replace the worst individuals in the current population. This way, the introduced immigrants are more adapted to the changing environment. This paper also proposes a hybrid scheme that combines the elitism-based immigrants scheme with the traditional random immigrants scheme to deal with significant changes. The experimental results show that the proposed elitism-based and hybrid immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
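
    A minimal sketch of the elitism-based immigrants step described above, assuming a binary encoding, fitness maximization, and a NumPy population array; the replacement ratio and mutation rate are illustrative, not the paper's exact setup:

```python
import numpy as np

def elitism_based_immigrants(pop, fitness, elite, ratio=0.2, pm=0.01, rng=None):
    """Replace the worst `ratio` of `pop` with bit-flip mutants of `elite`."""
    rng = rng or np.random.default_rng()
    n_imm = int(ratio * len(pop))
    immigrants = np.tile(elite, (n_imm, 1))          # copies of the elite
    flips = rng.random(immigrants.shape) < pm        # bit-flip mutation mask
    immigrants = np.where(flips, 1 - immigrants, immigrants)
    worst = np.argsort(fitness)[:n_imm]              # lowest fitness (maximization)
    pop[worst] = immigrants
    return pop
```

    In the hybrid scheme, a second batch of purely random immigrants would be injected alongside these elite-derived mutants to cope with significant environmental changes.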

    Multi-objective worst case optimization by means of evolutionary algorithms

    Many real-world optimization problems are subject to uncertainty. A possible goal is then to find a solution which is robust in the sense that it has the best worst-case performance over all possible scenarios. However, if the problem also involves multiple objectives, which scenario is “best” or “worst” depends on the user’s weighting of the different criteria, which is generally difficult to specify before alternatives are known. Evolutionary multi-objective optimization avoids this problem by searching for the whole front of Pareto optimal solutions. This paper extends the concept of Pareto dominance to worst case optimization problems and demonstrates how evolutionary algorithms can be used for worst case optimization in a multi-objective setting.
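
    One natural reading of the extended dominance relation, sketched under the assumptions of minimization and a finite scenario set; the paper's exact definition may differ:

```python
import numpy as np

def worst_case_objectives(f_values):
    """f_values: (n_scenarios, n_objectives) outcomes of one solution."""
    return f_values.max(axis=0)  # component-wise worst case under minimization

def dominates(a, b):
    """Standard Pareto dominance on worst-case vectors (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

# Solution A dominates solution B in the worst case iff
# dominates(worst_case_objectives(F_A), worst_case_objectives(F_B)).
```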

    Associative memory scheme for genetic algorithms in dynamic environments

    Copyright © Springer-Verlag Berlin Heidelberg 2006.
    In recent years, dynamic optimization problems have attracted growing interest from the genetic algorithm community, and several approaches have been developed to address these problems, of which the memory scheme is a major one. In this paper, an associative memory scheme is proposed for genetic algorithms to enhance their performance in dynamic environments. In this memory scheme, the environmental information is also stored and associated with the current best individual of the population in the memory. When the environment changes, the stored environmental information that is associated with the best re-evaluated memory solution is extracted to create new individuals for the population. Based on a series of systematically constructed dynamic test environments, experiments are carried out to validate the proposed associative memory scheme. The experimental results show the efficiency of the associative memory scheme for genetic algorithms in dynamic environments.
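
    A minimal sketch of the associative memory mechanism, assuming fitness maximization; the storage policy and the form of the 'environmental information' are illustrative simplifications of the scheme described above:

```python
memory = []  # list of (env_info, best_individual) pairs

def update_memory(env_info, best_individual, capacity=10):
    """Store the current best individual together with its environment."""
    if len(memory) >= capacity:
        memory.pop(0)  # simple FIFO replacement; the paper's policy may differ
    memory.append((env_info, best_individual))

def retrieve_on_change(evaluate):
    """After a detected change, re-evaluate the stored individuals and return
    the environmental information associated with the best one; this is then
    used to create new individuals for the population."""
    if not memory:
        return None
    env_info, _ = max(memory, key=lambda pair: evaluate(pair[1]))
    return env_info
```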

    Bayesian simulation optimization with input uncertainty

    We consider simulation optimization in the presence of input uncertainty. In particular, we assume that the input distribution can be described by some continuous parameters, and that we have some prior knowledge defining the probability distribution for these parameters. We then seek the simulation design that has the best expected performance over the possible parameters of the input distributions. Assuming correlation of performance between solutions and also between input distributions, we propose modifications of two well-known simulation optimization algorithms, Efficient Global Optimization and Knowledge Gradient with Continuous Parameters, so that they work efficiently under input uncertainty.
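
    A minimal sketch of the quantity being optimized, i.e. a design's expected performance over the prior on the input distribution's parameters, estimated here by nested Monte Carlo; `simulate` and `theta_prior_sample` are hypothetical stand-ins, and the paper's algorithms would use a Gaussian process model rather than this brute-force averaging:

```python
import numpy as np

def expected_performance(design, simulate, theta_prior_sample,
                         n_theta=100, n_reps=10, rng=None):
    """Average noisy simulation output over theta ~ prior and replications."""
    rng = rng or np.random.default_rng(0)
    thetas = theta_prior_sample(n_theta, rng)     # outer loop: input parameters
    return np.mean([simulate(design, theta, rng)  # inner loop: replications
                    for theta in thetas
                    for _ in range(n_reps)])
```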

    Continuous multi-task Bayesian optimisation with correlation

    This paper considers the problem of simultaneously identifying the optima for a (continuous or discrete) set of correlated tasks, where the performance of a particular input parameter on a particular task can only be estimated from (potentially noisy) samples. This has many applications, for example, identifying a stochastic algorithm’s optimal parameter settings for various tasks described by continuous feature values. We adapt the framework of Bayesian Optimisation to this problem. We propose a general multi-task optimisation framework and two myopic sampling procedures that determine task and parameter values for sampling, in order to efficiently find the best parameter setting for all tasks simultaneously. We show experimentally that our methods are much more efficient than collecting information randomly, and also more efficient than two other Bayesian multi-task optimisation algorithms from the literature.
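
    A minimal sketch of the myopic sampling loop described above; the `model`, `acquisition`, and `observe` objects are assumptions of this illustration, standing in for the paper's Gaussian process model and sampling procedures:

```python
def myopic_multi_task_loop(candidates, acquisition, observe, model, budget):
    """candidates: list of (task, x) pairs; observe returns a noisy sample."""
    for _ in range(budget):
        # Score every candidate (task, parameter) pair and pick the most
        # informative one to sample next.
        task, x = max(candidates, key=lambda c: acquisition(model, c[0], c[1]))
        y = observe(task, x)               # run the (noisy) experiment
        model = model.update(task, x, y)   # correlation between tasks lets one
                                           # sample inform all of them
    return model
```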

    Triggered memory-based swarm optimization in dynamic environments

    This is a post-print version of this article - Copyright © 2007 Springer-Verlag.
    In recent years, there has been increasing interest from the evolutionary computation community in dynamic optimization problems, since many real-world optimization problems are time-varying. In this paper, a triggered memory scheme is introduced into particle swarm optimization to deal with dynamic environments. The triggered memory scheme enhances the traditional memory scheme with a triggered memory generator. An experimental study on a benchmark dynamic problem shows that the triggered memory-based particle swarm optimization algorithm has stronger robustness and adaptability than traditional particle swarm optimization algorithms, both with and without the traditional memory scheme, for dynamic optimization problems.
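
    The abstract does not spell out how the trigger works, so the following is purely an illustration of one common pattern in memory-based dynamic optimisers: a fixed sentinel solution is re-evaluated every iteration, and a change in its fitness triggers retrieval of the best stored memory point into the swarm (maximization assumed):

```python
def detect_change(sentinel, evaluate, last_fitness, tol=1e-9):
    """Re-evaluate a fixed sentinel solution; a changed fitness signals
    that the environment has moved."""
    f = evaluate(sentinel)
    return abs(f - last_fitness) > tol, f

def retrieve_memory(swarm, fitness, memory, evaluate):
    """On a detected change, replace the swarm's worst particle with the
    best re-evaluated memory point."""
    if memory:
        best = max(memory, key=evaluate)
        worst = min(range(len(swarm)), key=lambda i: fitness[i])
        swarm[worst] = best
    return swarm
```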

    Efficient information collection on portfolios

    This paper tackles the problem of efficiently collecting data to learn a classifier, or mapping, from each task to the best performing tool, where tasks are described by continuous features and there is a portfolio of tools to choose from. A typical example is selecting an optimization algorithm from a portfolio of algorithms, based on some features of the problem instance to be solved. Information is collected by testing a tool on a task and observing its (possibly stochastic) performance. The goal is to minimize the opportunity cost of the constructed mapping, where opportunity cost is the difference between the performance of the true best tool for each task, and the performance of the tool chosen by the constructed mapping, summed over all tasks. We propose several fully sequential information collection policies based on Bayesian statistics and Gaussian Process models. In each step, they myopically sample the (task, tool) pair that promises the highest value of the information collected. We prove optimality under certain conditions and empirically demonstrate that our methods significantly outperform standard approaches on a set of synthetic benchmark problems.
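
    A minimal sketch of the opportunity cost being minimized; `perf[t][k]` denotes the true mean performance of tool `k` on task `t` (unknown in practice) and `mapping` is the learned task-to-tool classifier. All names are illustrative:

```python
def opportunity_cost(tasks, tools, perf, mapping):
    """Performance gap between the true best tool and the chosen tool,
    summed over all tasks (performance is maximized)."""
    return sum(max(perf[t][k] for k in tools) - perf[t][mapping(t)]
               for t in tasks)
```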