Fast multi-swarm optimization for dynamic optimization problems
This article is posted here with permission of IEEE - Copyright @ 2008 IEEE. In the real world, many applications are non-stationary optimization problems. This requires that optimization algorithms not only find the global optimal solution but also track the trajectory of the changing global best solution in a dynamic environment. To achieve this, this paper proposes a multi-swarm algorithm based on fast particle swarm optimization for dynamic optimization problems. The algorithm employs a mechanism that tracks multiple peaks by preventing overcrowding at any one peak, and uses a fast particle swarm optimization algorithm as a local search method to find near-optimal solutions in promising local regions of the search space. The moving peaks benchmark function is used to test the performance of the proposed algorithm. The numerical experimental results show the efficiency of the proposed algorithm for dynamic optimization problems.
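The abstract describes two ingredients that can be illustrated in code: several sub-swarms searching in parallel, and an anti-overcrowding rule that keeps them on different peaks. The following is a minimal sketch of that idea, assuming an exclusion-radius rule (the weaker of two colliding swarms is re-initialised); the objective, parameter names, and constants are illustrative, not taken from the paper.

```python
import numpy as np

def sphere(x):
    return -np.sum(x ** 2)          # maximise the negative sphere (single peak at the origin)

class SubSwarm:
    def __init__(self, dim, n_particles, bounds, rng):
        self.rng = rng
        lo, hi = bounds
        self.x = rng.uniform(lo, hi, (n_particles, dim))
        self.v = np.zeros((n_particles, dim))
        self.pbest = self.x.copy()
        self.pbest_f = np.array([sphere(p) for p in self.x])
        self.gbest = self.pbest[np.argmax(self.pbest_f)].copy()
        self.gbest_f = self.pbest_f.max()

    def step(self, w=0.729, c1=1.49, c2=1.49):
        # standard constriction-style PSO update for this sub-swarm
        r1, r2 = self.rng.random(self.x.shape), self.rng.random(self.x.shape)
        self.v = w * self.v + c1 * r1 * (self.pbest - self.x) + c2 * r2 * (self.gbest - self.x)
        self.x += self.v
        f = np.array([sphere(p) for p in self.x])
        improved = f > self.pbest_f
        self.pbest[improved], self.pbest_f[improved] = self.x[improved], f[improved]
        if self.pbest_f.max() > self.gbest_f:
            self.gbest_f = self.pbest_f.max()
            self.gbest = self.pbest[np.argmax(self.pbest_f)].copy()

def run(n_swarms=5, dim=2, iters=100, exclusion_radius=1.0, bounds=(-10, 10), seed=0):
    rng = np.random.default_rng(seed)
    swarms = [SubSwarm(dim, 10, bounds, rng) for _ in range(n_swarms)]
    for _ in range(iters):
        for s in swarms:
            s.step()
        # exclusion: if two swarm attractors overlap, restart the weaker swarm
        # so that no two sub-swarms crowd the same peak
        for i in range(n_swarms):
            for j in range(i + 1, n_swarms):
                if np.linalg.norm(swarms[i].gbest - swarms[j].gbest) < exclusion_radius:
                    worse = i if swarms[i].gbest_f < swarms[j].gbest_f else j
                    swarms[worse] = SubSwarm(dim, 10, bounds, rng)
    return max(s.gbest_f for s in swarms)

if __name__ == "__main__":
    print("best fitness found:", run())
```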
Triggered memory-based swarm optimization in dynamic environments
This is a post-print version of this article - Copyright @ 2007 Springer-Verlag. In recent years, there has been increasing interest from the evolutionary computation community in dynamic optimization problems, since many real-world optimization problems are time-varying. In this paper, a triggered memory scheme is introduced into particle swarm optimization to deal with dynamic environments. The triggered memory scheme enhances the traditional memory scheme with a triggered memory generator. An experimental study on a benchmark dynamic problem shows that the triggered memory-based particle swarm optimization algorithm has stronger robustness and adaptability for dynamic optimization problems than traditional particle swarm optimization algorithms, both with and without the traditional memory scheme.
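As a rough illustration of the memory component described above, the sketch below stores the swarm's best solution both when a change is detected (the traditional update) and whenever a simple trigger condition fires. The trigger rule, class name, and parameters are assumptions for illustration only; the paper's triggered memory generator may use a different condition.

```python
import numpy as np

class TriggeredMemory:
    def __init__(self, capacity=10, trigger_gap=0.1):
        self.capacity = capacity
        self.trigger_gap = trigger_gap
        self.points = []          # list of (position, fitness) pairs

    def _store(self, position, fitness):
        self.points.append((np.array(position, copy=True), fitness))
        if len(self.points) > self.capacity:
            # drop the worst stored point to respect the capacity
            self.points.sort(key=lambda pf: pf[1])
            self.points.pop(0)

    def update_on_change(self, best_position, best_fitness):
        """Traditional memory update, called when an environmental change is detected."""
        self._store(best_position, best_fitness)

    def update_on_trigger(self, best_position, best_fitness):
        """Triggered update, called every generation; stores only if the current
        swarm best is clearly better than anything already in memory."""
        if not self.points or best_fitness > max(f for _, f in self.points) + self.trigger_gap:
            self._store(best_position, best_fitness)

    def best_point(self):
        """Retrieve the best remembered position, e.g. to re-seed the swarm after a change."""
        return max(self.points, key=lambda pf: pf[1])[0] if self.points else None
```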
Explicit memory schemes for evolutionary algorithms in dynamic environments
Copyright @ 2007 Springer-Verlag. Problem optimization in dynamic environments has attracted growing interest from the evolutionary computation community in recent years due to its importance in real-world optimization problems. Several approaches have been developed to enhance the performance of evolutionary algorithms for dynamic optimization problems, of which the memory scheme is a major one. This chapter investigates the application of explicit memory schemes for evolutionary algorithms in dynamic environments. Two kinds of explicit memory schemes, direct memory and associative memory, are studied within two classes of evolutionary algorithms, genetic algorithms and univariate marginal distribution algorithms, for dynamic optimization problems. Based on a series of systematically constructed dynamic test environments, experiments are carried out to investigate these explicit memory schemes, and the performance of the direct and associative memory schemes is compared and analysed. The experimental results show the efficiency of the memory schemes for evolutionary algorithms in dynamic environments, especially when the environment changes cyclically. The experimental results also indicate that the effect of the memory schemes depends not only on the dynamic problems and dynamic environments but also on the evolutionary algorithm used.
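To make the distinction between the two memory flavours concrete, here is a hedged sketch for a binary-coded EA: direct memory stores elite genomes and re-inserts the best of them after a change, while associative memory stores the elite together with a simple model of the environment (here, the population's allele-frequency vector) and re-seeds part of the population from that model. All class and method names are illustrative assumptions.

```python
import numpy as np

class DirectMemory:
    def __init__(self):
        self.solutions = []                       # stored elite genomes

    def store(self, elite):
        self.solutions.append(np.array(elite, copy=True))

    def retrieve(self, fitness_fn):
        # after a change, re-evaluate stored elites and return the best one
        return max(self.solutions, key=fitness_fn) if self.solutions else None

class AssociativeMemory:
    def __init__(self):
        self.entries = []                         # (allele_frequency_model, elite) pairs

    def store(self, population, elite):
        freq = np.mean(population, axis=0)        # environment model: per-locus allele frequencies
        self.entries.append((freq, np.array(elite, copy=True)))

    def reseed(self, n, rng, fitness_fn):
        # sample n new individuals from the model associated with the best stored elite
        if not self.entries:
            return None
        freq, _ = max(self.entries, key=lambda e: fitness_fn(e[1]))
        return (rng.random((n, freq.size)) < freq).astype(int)
```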
Multicriteria Dynamic Optimization Problems and Cooperative Dynamic Games
We survey some recent research results in the field of dynamic cooperative differential games with non-transferable utilities. Problems that fit into this framework occur, for instance, if a person has more than one objective he would like to optimize, or if several persons decide to combine efforts in trying to realize their individual goals. We assume that all persons act in a dynamic environment and that no side-payments take place. For these kinds of problems the notion of Pareto efficiency plays a fundamental role. In economic terms, an allocation in which no one can be made better off without someone else becoming worse off is called Pareto efficient. In this paper we present both necessary and sufficient conditions for the existence of a Pareto optimum for general non-convex games. These results are elaborated for the special case in which the environment can be modeled by a set of linear differential equations and the objectives can be modeled as functions containing just affine quadratic terms. Furthermore, we consider the convex case for these games. In general there exists a continuum of Pareto solutions, and the question arises which of these solutions will be chosen by the participating persons. We sketch some ideas from the axiomatic theory of bargaining, which was initiated by Nash [16, 17], to predict the compromise the persons will reach.
Keywords: Dynamic Optimization; Pareto Efficiency; Cooperative Differential Games; LQ Theory; Riccati Equations; Bargaining
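For readers unfamiliar with the LQ setting, the following block states the standard weighted-sum (scalarisation) characterisation of Pareto efficiency for linear dynamics with quadratic costs. It is given as general background under common textbook assumptions; the exact necessary and sufficient conditions derived in the paper may differ.

```latex
% Linear dynamics and quadratic costs for N players (illustrative formulation)
\[
  \dot{x}(t) = A\,x(t) + \sum_{i=1}^{N} B_i\,u_i(t), \qquad x(0) = x_0,
\]
\[
  J_i(u_1,\dots,u_N) = \int_{0}^{T} \Big( x(t)^{\top} Q_i\, x(t) + u_i(t)^{\top} R_i\, u_i(t) \Big)\, dt .
\]
% Weighted-sum sufficient condition: if, for some weights
% $\alpha_i > 0$ with $\sum_{i=1}^{N}\alpha_i = 1$, the joint control $u^{*}$
% minimises the scalarised cost $\sum_{i=1}^{N}\alpha_i J_i(u)$, then $u^{*}$ is
% Pareto efficient. Under convexity of the $J_i$, every Pareto-efficient control
% arises this way, and for quadratic costs the scalarised problem reduces to
% solving an associated Riccati differential equation.
```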
Evolutionary algorithms for dynamic optimization problems: workshop preface
Copyright @ 2005 AC
A particle swarm optimization based memetic algorithm for dynamic optimization problems
Copyright @ Springer Science + Business Media B.V. 2010. Recently, there has been increasing interest from the evolutionary computation community in dynamic optimization problems, since many real-world optimization problems are dynamic. This paper investigates a particle swarm optimization (PSO) based memetic algorithm that hybridizes PSO with a local search technique for dynamic optimization problems. Within the framework of the proposed algorithm, a local version of PSO with a ring-shaped topology structure is used as the global search operator, and a fuzzy cognition local search method is proposed as the local search technique. In addition, a self-organized random immigrants scheme is extended into the proposed algorithm in order to further enhance its exploration capacity for new peaks in the search space. An experimental study on the moving peaks benchmark problem shows that the proposed PSO-based memetic algorithm is robust and adaptable in dynamic environments. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 70431003 and Grant No. 70671020, the National Innovation Research Community Science Foundation of China under Grant No. 60521003, the National Support Plan of China under Grant No. 2006BAH02A09, the Ministry of Education, Science and Technology in Korea through the Second Phase of the Brain Korea 21 Project in 2009, the Engineering and Physical Sciences Research Council (EPSRC) of UK under Grant EP/E060722/01, and the Hong Kong Polytechnic University Research Grants under Grant G-YH60.
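Two of the ingredients named in this abstract are easy to isolate in code: the ring-topology (lbest) neighbourhood used as the global search operator, and the random-immigrants step that replaces the worst particles each generation. The sketch below shows both, with a plain random-perturbation hill climber standing in for the paper's fuzzy cognition local search; all function and parameter names are illustrative assumptions.

```python
import numpy as np

def ring_neighbourhood_best(pbest, pbest_f):
    """For each particle i, return the best personal-best among {i-1, i, i+1} on a ring."""
    n = len(pbest_f)
    lbest = np.empty_like(pbest)
    for i in range(n):
        neigh = [(i - 1) % n, i, (i + 1) % n]
        best = max(neigh, key=lambda j: pbest_f[j])
        lbest[i] = pbest[best]
    return lbest

def random_immigrants(positions, fitness, ratio, bounds, rng):
    """Replace the worst `ratio` fraction of particles with random newcomers."""
    n_replace = max(1, int(ratio * len(fitness)))
    worst = np.argsort(fitness)[:n_replace]        # maximisation: lowest fitness first
    lo, hi = bounds
    positions[worst] = rng.uniform(lo, hi, (n_replace, positions.shape[1]))
    return positions

def hill_climb(x, objective, rng, step=0.1, iters=20):
    """Stand-in local search: random perturbations around x, keeping improvements."""
    fx = objective(x)
    for _ in range(iters):
        cand = x + rng.normal(0.0, step, size=x.shape)
        fc = objective(cand)
        if fc > fx:
            x, fx = cand, fc
    return x, fx
```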
Memory-enhanced univariate marginal distribution algorithms for dynamic optimization problems
Several approaches have been developed for evolutionary algorithms to deal with dynamic optimization problems, of which memory and random immigrants are two major schemes. This paper investigates the application of a direct memory scheme to univariate marginal distribution algorithms (UMDAs), a class of evolutionary algorithms, for dynamic optimization problems. The interaction between memory and random immigrants for UMDAs in dynamic environments is also investigated. An experimental study shows that the memory scheme is efficient for UMDAs in dynamic environments and that the interactive effect between memory and random immigrants for UMDAs depends on the dynamic environments.
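A compact sketch of one UMDA generation with a direct memory and random immigrants follows, for a binary maximisation problem. The core update (estimate univariate marginal frequencies from the selected half, then resample) is the standard UMDA step; the memory insertion point and immigrant ratio are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def umda_generation(population, fitness_fn, memory, rng, immigrant_ratio=0.1):
    n, length = population.shape
    fitness = np.array([fitness_fn(ind) for ind in population])

    # direct memory: keep the best individual of this generation
    memory.append(population[np.argmax(fitness)].copy())

    # select the better half and estimate univariate marginal frequencies
    selected = population[np.argsort(fitness)[n // 2:]]
    p = selected.mean(axis=0)

    # sample the next population from the marginal model
    new_pop = (rng.random((n, length)) < p).astype(int)

    # random immigrants: overwrite a small fraction with random genomes
    n_imm = max(1, int(immigrant_ratio * n))
    new_pop[rng.choice(n, n_imm, replace=False)] = rng.integers(0, 2, (n_imm, length))

    # re-insert the best remembered solution (useful right after a change)
    if memory:
        new_pop[0] = max(memory, key=fitness_fn)
    return new_pop
```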
Experimental study on population-based incremental learning algorithms for dynamic optimization problems
Copyright @ Springer-Verlag 2005. Evolutionary algorithms have been widely used for stationary optimization problems. However, the environments of real-world problems are often dynamic. This seriously challenges traditional evolutionary algorithms. In this paper, the application of population-based incremental learning (PBIL) algorithms, a class of evolutionary algorithms, to dynamic problems is investigated. Inspired by the complementarity mechanism in nature, a Dual PBIL is proposed, which operates on two probability vectors that are dual to each other with respect to the central point in the genotype space. A diversity-maintaining technique that combines the central probability vector into PBIL is also proposed to improve PBIL's adaptability in dynamic environments. In this paper, a new dynamic problem generator that can create required dynamics from any binary-encoded stationary problem is also formalized. Using this generator, a series of dynamic problems were systematically constructed from several benchmark stationary problems, and an experimental study was carried out to compare the performance of several PBIL algorithms and two variants of the standard genetic algorithm. Based on the experimental results, we analysed the weaknesses and strengths of the studied PBIL algorithms and identified several potential improvements to PBIL for dynamic optimization problems. This work was supported by UK EPSRC under Grant GR/S79718/01.
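Two ideas from this abstract lend themselves to a short sketch: an XOR-style generator that turns any binary-encoded stationary function into a dynamic one by evaluating it on a bit-flipped copy of the solution, and the dual probability vector, which mirrors PBIL's main vector through the centre 0.5 of the genotype space. The masking rule and parameter names are illustrative assumptions; the paper's generator may be parameterised differently.

```python
import numpy as np

def make_dynamic(f_stationary, length, severity, rng):
    """Wrap a stationary fitness function; call `change()` to alter the environment
    by flipping a `severity` fraction of bits in the hidden XOR mask."""
    mask = np.zeros(length, dtype=int)

    def change():
        flip = rng.choice(length, int(severity * length), replace=False)
        mask[flip] ^= 1                      # moving the mask moves the optimum

    def f_dynamic(x):
        return f_stationary(np.bitwise_xor(x, mask))

    return f_dynamic, change

def dual_vector(p):
    """Dual PBIL probability vector: reflection of p through the central point 0.5."""
    return 1.0 - np.asarray(p)

# usage example: a OneMax-style problem made dynamic
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    onemax = lambda x: int(np.sum(x))
    f, change = make_dynamic(onemax, length=20, severity=0.3, rng=rng)
    x = rng.integers(0, 2, 20)
    before = f(x); change(); after = f(x)
    print("fitness before/after change:", before, after)
```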
