
    Triggered memory-based swarm optimization in dynamic environments

    In recent years, dynamic optimization problems have attracted growing interest from the evolutionary computation community, since many real-world optimization problems are time-varying. In this paper, a triggered memory scheme is introduced into particle swarm optimization to deal with dynamic environments. The triggered memory scheme enhances the traditional memory scheme with a triggered memory generator. An experimental study on a benchmark dynamic problem shows that the triggered memory-based particle swarm optimization algorithm has stronger robustness and adaptability for dynamic optimization problems than traditional particle swarm optimization algorithms, both with and without the traditional memory scheme.
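
    As a rough illustration of the memory idea (a hedged sketch, not the paper's exact triggered memory generator; the change detector, memory policy, and all names below are hypothetical), a memory-based PSO keeps a bounded archive of elite solutions and re-injects them into the swarm when an environment change is detected:

```python
def detect_change(f, sentinels, cached_values, tol=1e-9):
    # Re-evaluate fixed sentinel points; a shifted fitness value
    # signals that the environment has changed.
    return any(abs(f(x) - c) > tol for x, c in zip(sentinels, cached_values))

def update_memory(memory, candidate, capacity=10):
    # Bounded memory of elite solutions; evict the oldest entry when full.
    memory.append(list(candidate))
    if len(memory) > capacity:
        memory.pop(0)

def reinject(swarm, memory, f):
    # On a detected change, overwrite the worst particles (assuming
    # maximization of f) with stored memory points.
    swarm.sort(key=f)  # ascending fitness: worst particles first
    for i, point in enumerate(memory[: len(swarm)]):
        swarm[i] = list(point)
```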

    Explicit memory schemes for evolutionary algorithms in dynamic environments

    Problem optimization in dynamic environments has attracted growing interest from the evolutionary computation community in recent years due to its importance in real-world optimization problems. Several approaches have been developed to enhance the performance of evolutionary algorithms for dynamic optimization problems, of which the memory scheme is a major one. This chapter investigates the application of explicit memory schemes for evolutionary algorithms in dynamic environments. Two kinds of explicit memory schemes, direct memory and associative memory, are studied within two classes of evolutionary algorithms, genetic algorithms and univariate marginal distribution algorithms, for dynamic optimization problems. Based on a series of systematically constructed dynamic test environments, experiments are carried out to investigate these explicit memory schemes, and the performance of the direct and associative memory schemes is compared and analysed. The experimental results show the efficiency of the memory schemes for evolutionary algorithms in dynamic environments, especially when the environment changes cyclically. The results also indicate that the effect of the memory schemes depends not only on the dynamic problems and dynamic environments but also on the evolutionary algorithm used.
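
    One schematic way to contrast the two schemes (a sketch under assumed representations; the chapter's actual environmental encoding differs) is shown below: direct memory stores only elite solutions, while associative memory pairs each solution with a compact model of the environment that produced it, such as an allele-distribution vector, and retrieves by environment similarity.

```python
import numpy as np

# Direct memory: a plain list of elite solution vectors that are
# re-inserted into the population after an environment change.
direct_memory = []

# Associative memory: (environment_model, solution) pairs, where the
# model is e.g. an allele-distribution vector estimated at storage time.
associative_memory = []

def retrieve_associative(current_model, memory):
    # Return the stored solution whose environment model is closest to
    # the current one (Euclidean distance as a stand-in similarity).
    model, solution = min(
        memory, key=lambda pair: np.linalg.norm(pair[0] - current_model))
    return solution
```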

    Population-based incremental learning with memory scheme for changing environments

    In recent years there has been a growing interest in studying evolutionary algorithms for dynamic optimization problems due to their importance in real-world applications. Several approaches have been developed, such as the memory scheme. This paper investigates the application of the memory scheme for population-based incremental learning (PBIL) algorithms, a class of evolutionary algorithms, for dynamic optimization problems. A PBIL-specific memory scheme is proposed to improve its adaptability in dynamic environments: the working probability vector is stored in the memory together with the best sample it creates, and is used to reactivate old environments when a change occurs. An experimental study based on a series of dynamic environments shows the efficiency of the memory scheme for PBILs in dynamic environments. The paper also investigates the relationship between the memory scheme and the multi-population scheme for PBILs in dynamic environments. The experimental results indicate that the multi-population scheme interacts negatively with the memory scheme for PBILs in the dynamic test environments.
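
    A minimal sketch of this pairing, assuming bit-string PBIL with maximization (the change detector, learning rate, and memory policy below are placeholders, not the paper's settings):

```python
import random

def pbil_step(p, f, pop_size=50, lr=0.05):
    # Standard PBIL: sample from the probability vector, keep the best
    # sample, and shift the vector toward it.
    pop = [[1 if random.random() < pi else 0 for pi in p]
           for _ in range(pop_size)]
    best = max(pop, key=f)
    p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]
    return p, best

def reactivate(p, best, memory, f):
    # Store the working vector with the best sample it created; when a
    # change is detected, reactivate the stored vector whose paired
    # sample re-evaluates best under the new fitness function.
    memory.append((list(p), list(best)))
    stored_p, _ = max(memory, key=lambda pair: f(pair[1]))
    return list(stored_p)
```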

    Online Optimization with Memory and Competitive Control

    This paper presents competitive algorithms for a novel class of online optimization problems with memory. We consider a setting where the learner seeks to minimize the sum of a hitting cost and a switching cost that depends on the previous p decisions. This setting generalizes Smoothed Online Convex Optimization. The proposed approach, Optimistic Regularized Online Balanced Descent, achieves a constant, dimension-free competitive ratio. Further, we show a connection between online optimization with memory and online control with adversarial disturbances. This connection, in turn, leads to a new constant-competitive policy for a rich class of online control problems.
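
    The cost structure is concrete enough to write down; the sketch below (illustrative only, not the paper's algorithm) evaluates a decision sequence whose switching cost depends on a window of the previous p decisions, the structure that reduces to Smoothed Online Convex Optimization when p = 1.

```python
def total_cost(xs, hit, switch, p):
    """xs: decision sequence; hit(t, x): hitting cost of x at step t;
    switch(window): switching cost over the last p+1 decisions."""
    cost = 0.0
    for t, x in enumerate(xs):
        cost += hit(t, x)
        cost += switch(xs[max(0, t - p): t + 1])
    return cost

# Example: quadratic hitting costs; the switching cost penalizes the
# distance of the current decision from the mean of the previous ones.
p = 2
hit = lambda t, x: (x - t) ** 2
switch = lambda w: (w[-1] - sum(w[:-1]) / (len(w) - 1)) ** 2 if len(w) > 1 else 0.0
print(total_cost([0.0, 0.5, 1.4, 2.3], hit, switch, p))
```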

    Memory-enhanced univariate marginal distribution algorithms for dynamic optimization problems

    Several approaches have been developed for evolutionary algorithms to deal with dynamic optimization problems, of which memory and random immigrants are two major schemes. This paper investigates the application of a direct memory scheme for univariate marginal distribution algorithms (UMDAs), a class of evolutionary algorithms, for dynamic optimization problems. The interaction between memory and random immigrants for UMDAs in dynamic environments is also investigated. An experimental study shows that the memory scheme is efficient for UMDAs in dynamic environments and that the interactive effect between memory and random immigrants for UMDAs depends on the dynamic environments.
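
    A bare-bones sketch of one UMDA generation with random immigrants (binary encoding and maximization assumed; rates and structure are illustrative, and memory handling would follow the direct-memory pattern sketched above):

```python
import random

def umda_step(pop, f, truncation=0.5, immigrant_rate=0.1):
    # Select the best individuals, estimate per-bit marginal
    # probabilities from them, resample a new population, then replace
    # a fraction with random immigrants to maintain diversity.
    pop = sorted(pop, key=f, reverse=True)
    selected = pop[: int(len(pop) * truncation)]
    n = len(pop[0])
    marginals = [sum(ind[i] for ind in selected) / len(selected)
                 for i in range(n)]
    new_pop = [[1 if random.random() < m else 0 for m in marginals]
               for _ in range(len(pop))]
    for i in range(int(len(pop) * immigrant_rate)):
        new_pop[i] = [random.randint(0, 1) for _ in range(n)]
    return new_pop
```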

    A Computationally Efficient Limited Memory CMA-ES for Large Scale Optimization

    We propose a computationally efficient limited memory Covariance Matrix Adaptation Evolution Strategy for large scale optimization, which we call the LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex problems in the continuous domain. Inspired by the limited memory BFGS method of Liu and Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors reduces the time and memory complexity of the sampling to O(mn), where n is the number of decision variables. When n is large (e.g., n > 1000), even relatively small values of m (e.g., m = 20, 30) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time.
    Comment: Genetic and Evolutionary Computation Conference (GECCO'2014), 2014.
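
    The memory saving can be sketched as follows (schematic; the paper defines the stored vectors and the coefficients of each transform precisely, whereas the names below are illustrative): rather than holding an n-by-n covariance matrix, the sampler applies m stored rank-one Cholesky-factor transforms to a standard normal vector, costing O(mn) time and memory.

```python
import numpy as np

def lm_sample(mean, sigma, dirs, coeffs, rng):
    """Draw one candidate without ever forming the covariance matrix.

    dirs: m stored n-dimensional direction vectors.
    coeffs: m (a, b) pairs parameterizing the rank-one transforms
            (placeholders for the paper's exact update coefficients).
    """
    z = rng.standard_normal(len(mean))
    for v, (a, b) in zip(dirs, coeffs):
        z = a * z + b * v * (v @ z)  # one rank-one transform, O(n)
    return mean + sigma * z          # total sampling cost O(m * n)
```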

    Benchmarking five global optimization approaches for nano-optical shape optimization and parameter reconstruction

    Numerical optimization is an important tool in computational physics in general and in nano-optics in particular. It has attracted attention with the increasing complexity of structures that can be realized with today's nano-fabrication technologies, for which a rational design is no longer feasible. Numerical resources are also available to enable computational photonic material design and to identify structures that meet predefined optical properties for specific applications. However, the optimization objective function is in general non-convex and remains computationally demanding to evaluate, so the right choice of optimization method is crucial for obtaining excellent results. Here, we benchmark five global optimization methods on three typical nano-optical optimization problems: particle swarm optimization, differential evolution, and Bayesian optimization, as well as multi-start versions of downhill simplex optimization and the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. In the examples shown, from the fields of shape optimization and parameter reconstruction, Bayesian optimization, mainly known from machine learning applications, obtains significantly better results in a fraction of the run times of the other optimization methods.
    Comment: 11 pages, 4 figures.
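
    To reproduce the flavor of such a comparison, the sketch below pits a multi-start local method against differential evolution using SciPy's stock implementations as stand-ins (the paper's solvers, test problems, and budgets differ, and Bayesian optimization is omitted since it requires an additional library):

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def objective(x):
    # Stand-in non-convex test function (Rastrigin), not a nano-optics model.
    x = np.asarray(x)
    return 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

bounds = [(-5.12, 5.12)] * 4
rng = np.random.default_rng(0)

# Multi-start L-BFGS-B: restart a local optimizer from random points and
# keep the best result, mimicking the multi-start variants benchmarked.
starts = rng.uniform(-5.12, 5.12, size=(10, 4))
results = [minimize(objective, x0, method="L-BFGS-B", bounds=bounds)
           for x0 in starts]
best_local = min(results, key=lambda r: r.fun)

# Global method: differential evolution over the same bounds.
best_de = differential_evolution(objective, bounds, seed=0)
print(best_local.fun, best_de.fun)
```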