9,630 research outputs found

    A memetic particle swarm optimisation algorithm for dynamic multi-modal optimisation problems

    Many real-world optimisation problems are both dynamic and multi-modal, requiring an optimisation algorithm not only to find as many optima as possible under a specific environment, but also to track their moving trajectories across dynamic environments. To address this requirement, this article investigates a memetic computing approach based on particle swarm optimisation for dynamic multi-modal optimisation problems (DMMOPs). Within the framework of the proposed algorithm, a new speciation method is employed to locate and track multiple peaks, and an adaptive local search method is hybridised to accelerate the exploitation of the species generated by the speciation method. In addition, a memory-based re-initialisation scheme is introduced to further enhance performance in dynamic multi-modal environments. Based on the moving peaks benchmark problems, experiments are carried out to compare the proposed algorithm with several state-of-the-art algorithms from the literature. The experimental results show the efficiency of the proposed algorithm for DMMOPs. This work was supported by the Key Program of the National Natural Science Foundation (NNSF) of China under Grant no. 70931001, the Funds for Creative Research Groups of China under Grant no. 71021061, the NNSF of China under Grants no. 71001018, 61004121 and 70801012, the Fundamental Research Funds for the Central Universities under Grant no. N090404020, the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grants EP/E060722/01 and EP/E060722/02, and the Hong Kong Polytechnic University under Grant G-YH60.
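
    A minimal sketch of the speciation-plus-local-search idea described above, assuming a simple radius-based speciation rule and a stochastic hill climber as the local search; the function names, the radius parameter and the toy objective are illustrative, not the paper's implementation (which also includes the memory-based re-initialisation scheme).

```python
# Illustrative sketch of speciation-based grouping for a multi-modal swarm.
# This is NOT the paper's code; radius-based speciation and the hill climber
# below are common textbook components used here as assumptions.
import numpy as np

def speciate(positions, fitness, radius):
    """Group particles into species around the fittest unassigned particle (the seed)."""
    order = np.argsort(-fitness)            # best first (maximisation assumed)
    unassigned = set(order.tolist())
    species = []
    for i in order:
        if i not in unassigned:
            continue
        members = [j for j in unassigned
                   if np.linalg.norm(positions[j] - positions[i]) <= radius]
        for j in members:
            unassigned.discard(j)
        species.append({"seed": i, "members": members})
    return species

def local_search(x, f, step=0.1, trials=10, rng=None):
    """Simple stochastic hill climber used to exploit a species seed."""
    rng = rng or np.random.default_rng()
    best, best_f = x.copy(), f(x)
    for _ in range(trials):
        cand = best + rng.normal(0.0, step, size=best.shape)
        cf = f(cand)
        if cf > best_f:
            best, best_f = cand, cf
    return best, best_f

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: -np.min((x[0] - np.array([-2.0, 0.0, 2.0]))**2)  # 3 peaks in 1-D
    pos = rng.uniform(-3, 3, size=(30, 1))
    fit = np.array([f(p) for p in pos])
    for sp in speciate(pos, fit, radius=1.0):
        seed, val = local_search(pos[sp["seed"]], f)
        print(f"species of {len(sp['members'])} particles, refined seed at {seed.round(2)}")
```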

    A similarity-based cooperative co-evolutionary algorithm for dynamic interval multi-objective optimization problems

    Dynamic interval multi-objective optimization problems (DI-MOPs) are very common in real-world applications. However, few evolutionary algorithms are suitable for tackling DI-MOPs to date. This paper presents a framework for dynamic interval multi-objective cooperative co-evolutionary optimization based on interval similarity to handle DI-MOPs. In the framework, a strategy for decomposing the decision variables is first proposed, through which all decision variables are divided into two groups according to the interval similarity between each decision variable and the interval parameters. Two sub-populations are then used to cooperatively optimize the decision variables in the two groups. Furthermore, two response strategies, i.e., a strategy based on change intensity and a random mutation strategy, are employed to rapidly track the changing Pareto front of the optimization problem. The proposed algorithm is applied to eight benchmark optimization instances as well as a multi-period portfolio selection problem and compared with five state-of-the-art evolutionary algorithms. The experimental results reveal that the proposed algorithm is very competitive on most optimization instances.
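
    The paper's exact interval-similarity measure is not reproduced here; the sketch below only illustrates the decomposition step, under the assumption that a variable's link to the interval parameters can be approximated by how much its effect on the objective varies as the interval parameters are sampled. All names (`interval_similarity`, `split_variables`) and the threshold rule are hypothetical.

```python
# Hypothetical sketch of the decomposition step: decision variables are split
# into two groups by how strongly they interact with the interval parameters.
# The sensitivity-based "similarity" score below is an assumption, not the
# paper's definition of interval similarity.
import numpy as np

def interval_similarity(objective, n_vars, param_intervals, samples=50, rng=None):
    """Score each decision variable by how much perturbing it changes the
    objective across sampled interval parameters (a crude proxy)."""
    rng = rng or np.random.default_rng(0)
    scores = np.zeros(n_vars)
    base = rng.uniform(0.0, 1.0, size=n_vars)
    params = rng.uniform(param_intervals[:, 0], param_intervals[:, 1],
                         size=(samples, len(param_intervals)))
    base_vals = np.array([objective(base, p) for p in params])
    for i in range(n_vars):
        x = base.copy()
        x[i] += 0.1
        vals = np.array([objective(x, p) for p in params])
        scores[i] = np.std(vals - base_vals)  # effect that varies with the parameters
    return scores

def split_variables(scores):
    """Group variables above/below the median score (interval-linked vs. not)."""
    thr = np.median(scores)
    linked = [i for i, s in enumerate(scores) if s >= thr]
    rest = [i for i in range(len(scores)) if i not in linked]
    return linked, rest

if __name__ == "__main__":
    # Toy objective with one interval parameter p[0]; x[0] interacts with it, x[1] does not.
    obj = lambda x, p: (x[0] * p[0])**2 + x[1]**2
    intervals = np.array([[0.5, 1.5]])
    s = interval_similarity(obj, n_vars=2, param_intervals=intervals)
    print("scores:", s.round(3), "-> groups:", split_variables(s))
```

    Each group would then be assigned to one of the two co-evolving sub-populations described in the abstract.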

    A Survey of Evolutionary Continuous Dynamic Optimization Over Two Decades: Part B

    Many real-world optimization problems are dynamic. The field of dynamic optimization deals with such problems, where the search space changes over time. In this two-part paper, we present a comprehensive survey of the research in evolutionary dynamic optimization for single-objective unconstrained continuous problems over the last two decades. In Part A of this survey, we propose a new taxonomy for the components of dynamic optimization algorithms, namely convergence detection, change detection, explicit archiving, diversity control, and population division and management. In comparison to existing taxonomies, the proposed taxonomy covers some additional important components, such as convergence detection and computational resource allocation. Moreover, we significantly expand and improve the classification of diversity control and multi-population methods, which are under-represented in existing taxonomies. We then provide detailed technical descriptions and analysis of the different components according to the suggested taxonomy. Part B of this survey provides an in-depth analysis of the most commonly used benchmark problems, performance analysis methods, static optimization algorithms used as the optimization components in dynamic optimization algorithms, and dynamic real-world applications. Finally, several opportunities for future work are pointed out.

    Adaptive primal-dual genetic algorithms in dynamic environments

    Recently, there has been increasing interest in applying genetic algorithms (GAs) in dynamic environments. Inspired by the complementary and dominance mechanisms in nature, a primal-dual GA (PDGA) has been proposed for dynamic optimization problems (DOPs). In this paper, an important operator in PDGA, i.e., the primal-dual mapping (PDM) scheme, is further investigated to improve the robustness and adaptability of PDGA in dynamic environments. In the improved scheme, two different probability-based PDM operators, in which the mapping probability of each allele in the chromosome string is calculated from statistical information about the distribution of alleles at the corresponding gene locus over the population, are effectively combined according to an adaptive Lamarckian learning mechanism. In addition, an adaptive dominant replacement scheme, which can probabilistically accept inferior chromosomes, is introduced to enhance the diversity level of the population. Experimental results on a series of dynamic problems generated from several stationary benchmark problems show that the proposed algorithm is a good optimizer for DOPs. This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grants 70431003 and 70671020, by the National Innovation Research Community Science Foundation of China under Grant 60521003, by the National Support Plan of China under Grant 2006BAH02A09, by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/1, and by the Hong Kong Polytechnic University under Research Grant G-YH60.
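
    A hedged sketch of a probability-based primal-dual mapping for binary chromosomes: each allele is mapped to its dual with a probability derived from the allele distribution at its locus over the population. The specific frequency-based rule below is an assumption standing in for the paper's two PDM operators and their adaptive Lamarckian combination.

```python
# Hedged sketch of a probability-based primal-dual mapping for binary strings:
# each allele is flipped to its dual with a probability derived from how the
# population is distributed at that locus. The exact probability rule of the
# paper is not reproduced here; the frequency-based rule is an assumption.
import numpy as np

def locus_one_frequency(population):
    """Fraction of individuals carrying allele 1 at each gene locus."""
    return population.mean(axis=0)

def probabilistic_primal_dual_map(chromosome, freq_ones, rng=None):
    """Flip each bit to its complement with probability equal to the share of
    the population that disagrees with this chromosome at that locus."""
    rng = rng or np.random.default_rng()
    # If the bit is 1, the disagreeing share is (1 - freq_ones), and vice versa.
    flip_prob = np.where(chromosome == 1, 1.0 - freq_ones, freq_ones)
    flips = rng.random(chromosome.shape) < flip_prob
    return np.where(flips, 1 - chromosome, chromosome)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pop = rng.integers(0, 2, size=(20, 10))       # 20 binary chromosomes of length 10
    freq = locus_one_frequency(pop)
    dual = probabilistic_primal_dual_map(pop[0], freq, rng)
    print("primal:", pop[0])
    print("dual:  ", dual)
```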

    Vector Autoregressive Evolution for Dynamic Multi-Objective Optimisation

    Dynamic multi-objective optimisation (DMO) handles optimisation problems with multiple, often conflicting, objectives in varying environments. Owing to their dynamic nature and to resource restrictions in changing environments, such problems pose various challenges to evolutionary algorithms, which have been widely used to solve complex optimisation problems. This paper proposes vector autoregressive evolution (VARE), consisting of vector autoregression (VAR) and environment-aware hypermutation (EAH), to address environmental changes in DMO. VARE builds a VAR model that considers the mutual relationships between decision variables to effectively predict the moving solutions in dynamic environments. Additionally, VARE introduces EAH to address the blindness of existing hypermutation strategies in increasing population diversity in dynamic scenarios where predictive approaches are unsuitable. A seamless integration of VAR and EAH in an environment-adaptive manner makes VARE effective at handling a wide range of dynamic environments and competitive with several popular DMO algorithms, as demonstrated in extensive experimental studies. Notably, the proposed algorithm is computationally 50 times faster than two widely used algorithms (i.e., TrDMOEA and MOEA/D-SVR) while producing significantly better results.
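
    The prediction component can be illustrated with a small example: fit a first-order vector autoregression to the history of population centroids and forecast where solutions move after an environmental change. This is an illustrative reduction of VARE (the environment-aware hypermutation is omitted), using a plain least-squares VAR(1) fit rather than the paper's full model.

```python
# Minimal sketch of the prediction idea: fit a first-order vector autoregression
# (VAR(1)) to the history of population centroids and use it to forecast where
# the solutions move after an environmental change. Illustrative only.
import numpy as np

def fit_var1(history):
    """Least-squares fit of x_t = A @ x_{t-1} + c from a (T, d) history matrix."""
    X = np.hstack([history[:-1], np.ones((len(history) - 1, 1))])  # lagged states + intercept
    Y = history[1:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)                   # shape (d + 1, d)
    A, c = coef[:-1].T, coef[-1]
    return A, c

def predict_next(history, A, c):
    """One-step-ahead forecast of the next centroid."""
    return A @ history[-1] + c

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Toy history: 2-D centroids drifting linearly with small noise.
    t = np.arange(10).reshape(-1, 1)
    history = np.hstack([0.3 * t, -0.2 * t]) + rng.normal(0, 0.01, size=(10, 2))
    A, c = fit_var1(history)
    print("predicted next centroid:", predict_next(history, A, c).round(3))
    # The forecast would then seed (part of) the population for the new environment.
```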

    Solving Dynamic Multi-objective Optimization Problems Using Incremental Support Vector Machine

    The main feature of dynamic multi-objective optimization problems (DMOPs) is that the objective functions change over time or with the environment. One promising approach to solving DMOPs is to reuse the previously obtained Pareto optimal sets (POS) to train prediction models via machine learning. In this paper, we train an incremental support vector machine (ISVM) classifier with the past POS, and the candidate solutions of the DMOP to be solved at the next time step are then filtered through the trained ISVM classifier. The classifier thus generates a high-quality initial population, from which a variety of population-based dynamic multi-objective optimization algorithms can benefit. To verify this idea, we incorporate the proposed approach into three evolutionary algorithms: multi-objective particle swarm optimization (MOPSO), the Nondominated Sorting Genetic Algorithm II (NSGA-II), and the regularity model-based multi-objective estimation of distribution algorithm (RE-MEDA). Experimental results show the effectiveness of the approach.
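
    A minimal sketch of the classifier-based seeding idea, assuming scikit-learn is available: past Pareto-optimal solutions are labelled positive, other sampled solutions negative, a linear SVM is trained incrementally via SGDClassifier.partial_fit (hinge loss), and candidates accepted by the classifier form the initial population. The labelling scheme and the top-up rule are assumptions, not the paper's exact ISVM formulation.

```python
# Hedged sketch of classifier-based seeding: label past Pareto-optimal solutions
# as positive and other sampled solutions as negative, train a linear SVM
# incrementally (SGDClassifier with hinge loss supports partial_fit), and keep
# only candidates the classifier accepts as the initial population.
import numpy as np
from sklearn.linear_model import SGDClassifier

def update_classifier(clf, pos_solutions, neg_solutions):
    """Incrementally train on one environment's data (positives = past POS)."""
    X = np.vstack([pos_solutions, neg_solutions])
    y = np.concatenate([np.ones(len(pos_solutions)), np.zeros(len(neg_solutions))])
    clf.partial_fit(X, y, classes=[0.0, 1.0])
    return clf

def filtered_initial_population(clf, candidates, size):
    """Prefer candidates predicted as near-optimal; top up from the pool if too few."""
    accepted = candidates[clf.predict(candidates) == 1.0]
    if len(accepted) >= size:
        return accepted[:size]
    return np.vstack([accepted, candidates[: size - len(accepted)]])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    clf = SGDClassifier(loss="hinge")                        # linear SVM, incremental
    past_pos = rng.normal(loc=1.0, scale=0.1, size=(50, 2))  # toy past POS
    negatives = rng.uniform(-2, 2, size=(50, 2))
    clf = update_classifier(clf, past_pos, negatives)
    candidates = rng.uniform(-2, 2, size=(200, 2))
    pop = filtered_initial_population(clf, candidates, size=30)
    print("seeded population shape:", pop.shape)
```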

    A memetic ant colony optimization algorithm for the dynamic travelling salesman problem

    Ant colony optimization (ACO) has been successfully applied to combinatorial optimization problems, e.g., the travelling salesman problem (TSP), under stationary environments. In this paper, we consider the dynamic TSP (DTSP), where cities are replaced by new ones during the execution of the algorithm. Under such environments, traditional ACO algorithms face a serious challenge: once they converge, they cannot adapt efficiently to environmental changes. To improve the performance of ACO on the DTSP, we investigate an ACO hybridised with local search (LS), called the memetic ACO (M-ACO) algorithm, which is based on the population-based ACO (P-ACO) framework and an adaptive inver-over operator. Moreover, to address premature convergence, we introduce random immigrants into the population of M-ACO whenever identical ants are stored. Simulation experiments on a series of dynamic environments generated from a set of benchmark TSP instances show that LS is beneficial for ACO algorithms on the DTSP, since M-ACO achieves better performance than traditional ACO and P-ACO algorithms. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grants EP/E060722/01 and EP/E060722/02.
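
    Two ingredients named above, the inver-over operator and random immigrants, are sketched below in simplified form; the adaptive control of the operator and the P-ACO pheromone machinery are omitted, and all parameter choices (e.g., the small random-guide probability and the iteration cap) are illustrative.

```python
# Illustrative sketch of two ingredients mentioned above: an inver-over style
# local search on a tour, and a random-immigrants check that replaces duplicate
# ants. This is a simplified reconstruction, not the paper's M-ACO code.
import random

def _invert_circular(tour, start, end):
    """Reverse the circular segment tour[start..end] inclusive, wrapping around."""
    n = len(tour)
    idx = [(start + k) % n for k in range((end - start) % n + 1)]
    vals = [tour[i] for i in idx]
    for i, v in zip(idx, reversed(vals)):
        tour[i] = v

def inver_over(tour, other, p_random=0.02, rng=random):
    """Inver-over: repeatedly invert the segment after the current city so that
    a guide city (from another tour, or random) becomes its neighbour."""
    tour, n = tour[:], len(tour)
    c = rng.choice(tour)
    for _ in range(10 * n):                            # safety bound for this sketch
        if rng.random() < p_random:
            guide = rng.choice([x for x in tour if x != c])
        else:
            guide = other[(other.index(c) + 1) % n]    # successor of c in the other tour
        i, j = tour.index(c), tour.index(guide)
        if j in ((i + 1) % n, (i - 1) % n):            # guide already adjacent: stop
            break
        _invert_circular(tour, (i + 1) % n, j)
        c = guide
    return tour

def inject_random_immigrants(population, n_cities, rng=random):
    """Replace duplicate tours with freshly randomised ones to keep diversity."""
    seen, result = set(), []
    for tour in population:
        if tuple(tour) in seen:
            immigrant = list(range(n_cities))
            rng.shuffle(immigrant)
            result.append(immigrant)
        else:
            seen.add(tuple(tour))
            result.append(tour)
    return result

if __name__ == "__main__":
    random.seed(4)
    a, b = [0, 1, 2, 3, 4, 5], [0, 2, 4, 1, 3, 5]
    print("local search result:", inver_over(a, b))
    print("after immigrants:", inject_random_immigrants([a, a, b], n_cities=6))
```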