65,113 research outputs found

    Multi-population methods with adaptive mutation for multi-modal optimization problems

    This paper presents an efficient scheme to locate multiple peaks of multi-modal optimization problems using genetic algorithms (GAs). Premature convergence arises from the loss of diversity; the multi-population technique can be applied to maintain diversity in the population and the convergence capacity of GAs. The proposed scheme combines multi-population with an adaptive mutation operator, which determines two different mutation probabilities for different sites of the solutions. The probabilities are updated by the fitness and distribution of solutions in the search space during the evolution process. The experimental results demonstrate the performance of the proposed algorithm on a set of benchmark problems in comparison with relevant algorithms.
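
    The abstract does not spell out the update rule, but the idea of site-dependent mutation probabilities driven by the population's distribution can be illustrated with a minimal sketch. Everything below (function name, thresholds, the diversity measure) is an assumption made for illustration, not the paper's operator.

```python
import numpy as np

def adaptive_site_mutation(population, fitnesses, p_low=0.01, p_high=0.2, rng=None):
    """Mutate a binary population with two site-dependent probabilities.

    Sites where the population has almost converged (low per-site diversity)
    receive the higher rate p_high to restore diversity, while diverse sites
    keep the low background rate p_low.  The best individual is left
    untouched.  Thresholds and rates are illustrative, not the paper's rule.
    """
    rng = np.random.default_rng() if rng is None else rng
    pop = np.asarray(population, dtype=int)
    freq = pop.mean(axis=0)                      # per-site frequency of ones
    diversity = 1.0 - np.abs(2.0 * freq - 1.0)   # 1 = balanced, 0 = converged
    site_rates = np.where(diversity < 0.2, p_high, p_low)

    flips = rng.random(pop.shape) < site_rates   # rates broadcast over rows
    flips[np.argmax(fitnesses)] = False          # protect the current best
    return np.where(flips, 1 - pop, pop)

# toy usage on a random 30x20 binary population with a OneMax-style fitness
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(30, 20))
mutated = adaptive_site_mutation(pop, pop.sum(axis=1), rng=rng)
```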

    Experimental study on population-based incremental learning algorithms for dynamic optimization problems

    Copyright @ Springer-Verlag 2005. Evolutionary algorithms have been widely used for stationary optimization problems. However, the environments of real-world problems are often dynamic, which seriously challenges traditional evolutionary algorithms. In this paper, the application of population-based incremental learning (PBIL) algorithms, a class of evolutionary algorithms, to dynamic problems is investigated. Inspired by the complementarity mechanism in nature, a Dual PBIL is proposed, which operates on two probability vectors that are dual to each other with respect to the central point in the genotype space. A diversity-maintaining technique that combines the central probability vector into PBIL is also proposed to improve PBIL's adaptability in dynamic environments. A new dynamic problem generator that can create the required dynamics from any binary-encoded stationary problem is also formalized. Using this generator, a series of dynamic problems were systematically constructed from several benchmark stationary problems, and an experimental study was carried out to compare the performance of several PBIL algorithms and two variants of the standard genetic algorithm. Based on the experimental results, we analyse the weaknesses and strengths of the studied PBIL algorithms and identify several potential improvements to PBIL for dynamic optimization problems. This work was supported by UK EPSRC under Grant GR/S79718/01.
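
    A minimal sketch of the dual-probability-vector idea follows, assuming a standard PBIL learning rule. The function name, learning rate, and the condensed single-vector update are illustrative assumptions; the paper's Dual PBIL maintains and updates both vectors explicitly.

```python
import numpy as np

def dual_pbil_step(p, fitness, n_samples=50, lr=0.05, rng=None):
    """One generation of a dual-PBIL-style update (simplified sketch).

    The dual of the probability vector p is 1 - p, its reflection through
    the central point of the genotype space.  Both vectors generate samples,
    and p is nudged toward the best sample found, which lets the search jump
    to the complementary region after a severe environmental change.
    """
    rng = np.random.default_rng() if rng is None else rng
    dual = 1.0 - p
    primal_samples = (rng.random((n_samples, p.size)) < p).astype(int)
    dual_samples = (rng.random((n_samples, p.size)) < dual).astype(int)
    samples = np.vstack([primal_samples, dual_samples])
    best = samples[np.argmax([fitness(s) for s in samples])]
    return (1.0 - lr) * p + lr * best            # standard PBIL learning rule

# toy usage: OneMax as a stand-in for a binary-encoded fitness function
p = np.full(20, 0.5)
for _ in range(30):
    p = dual_pbil_step(p, fitness=lambda s: int(s.sum()))
```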

    Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings

    While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example the size of the population, the mutation rates, and the crossover rates of the algorithm. These parameters are known to have a crucial influence on the optimization time, and thus need to be chosen carefully, a task that often requires substantial effort. Moreover, the optimal parameters can change during the optimization process. It is therefore of great interest to design mechanisms that dynamically choose best-possible parameters. An example of such an update mechanism is the one-fifth success rule for step-size adaptation in evolution strategies. While in continuous domains this principle is well understood also from a mathematical point of view, no comparable theory is available for problems in discrete domains. In this work we show that the one-fifth success rule can be effective also in discrete settings. We regard the (1+(λ,λ)) GA proposed in [Doerr/Doerr/Ebel: From black-box complexity to designing new genetic algorithms, TCS 2015]. We prove that if its population size is chosen according to the one-fifth success rule, then the expected optimization time on OneMax is linear. This is better than what any static population size λ can achieve and is asymptotically optimal also among all adaptive parameter choices. Comment: This is the full version of a paper that is to appear at GECCO 2015.
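
    The one-fifth success rule itself is easy to state in code. The sketch below shows the generic multiplicative update; the factor F, the bounds, and what counts as "success" are illustrative assumptions rather than values taken from the paper's analysis.

```python
def one_fifth_update(lam, success, F=1.5, lam_min=1.0, lam_max=None):
    """Self-adjust a parameter (here the population size) by the 1/5-th rule.

    On a successful iteration the parameter shrinks by the factor F; on an
    unsuccessful one it grows by F**(1/4), so it stays roughly constant when
    about one in five iterations succeeds.
    """
    lam = lam / F if success else lam * F ** 0.25
    lam = max(lam_min, lam)
    return lam if lam_max is None else min(lam_max, lam)

# toy usage: shrink lambda after improvements, grow it otherwise
lam = 8.0
for improved in [False, False, True, False, True]:
    lam = one_fifth_update(lam, improved)
```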

    Dual population-based incremental learning for problem optimization in dynamic environments

    Copyright @ 2003 Asia Pacific Symposium on Intelligent and Evolutionary Systems. In recent years there has been growing interest in research on evolutionary algorithms for dynamic optimization problems, since real-world problems are usually dynamic, which presents serious challenges to traditional evolutionary algorithms. In this paper, we investigate the application of Population-Based Incremental Learning (PBIL) algorithms, a class of evolutionary algorithms, to problem optimization under dynamic environments. Inspired by the complementarity mechanism in nature, we propose a Dual PBIL that operates on two probability vectors that are dual to each other with respect to the central point in the search space. Using a dynamic problem generating technique, we generate a series of dynamic knapsack problems from a randomly generated stationary knapsack problem and carry out an experimental study comparing the performance of the investigated PBILs and one traditional genetic algorithm. Experimental results show that the introduction of dualism into PBIL improves its adaptability under dynamic environments, especially when the environment is subject to significant changes in the sense of genotype space.
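
    A common way to realize such a dynamic problem generator for binary encodings is to XOR solutions with a slowly changing mask before evaluation. The sketch below follows that construction and is only an illustration; the generating technique used in the paper may differ in its details, and all names and parameters here are assumptions.

```python
import numpy as np

def make_dynamic_fitness(static_fitness, n_bits, severity=0.3, period=10, rng=None):
    """Wrap a stationary binary-coded fitness into a dynamic one (sketch).

    Every `period` generations a random set of severity * n_bits mask bits
    is flipped, and solutions are evaluated as static_fitness(x XOR mask),
    so the location of the optimum moves while the fitness structure stays.
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros(n_bits, dtype=int)
    n_flip = max(1, int(severity * n_bits))
    last_change = 0

    def evaluate(x, generation):
        nonlocal last_change
        if generation - last_change >= period:   # time for an environment shift
            flip_idx = rng.choice(n_bits, size=n_flip, replace=False)
            mask[flip_idx] ^= 1
            last_change = generation
        return static_fitness(np.asarray(x) ^ mask)

    return evaluate

# toy usage: a dynamic OneMax whose optimum moves every 10 generations
dynamic_onemax = make_dynamic_fitness(lambda x: int(x.sum()), n_bits=20)
x = np.ones(20, dtype=int)
values = [dynamic_onemax(x, g) for g in range(30)]
```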

    Force-imitated particle swarm optimization using the near-neighbor effect for locating multiple optima

    Copyright @ Elsevier Inc. All rights reserved. Multimodal optimization problems pose to the particle swarm optimization (PSO) community the great challenge of locating multiple optima simultaneously in the search space. In this paper, the motion principle of particles in PSO is extended by using the near-neighbor effect from mechanical theory, a universal phenomenon in nature and society. In the proposed near-neighbor effect based force-imitated PSO (NN-FPSO) algorithm, each particle explores the promising region where it resides under the composite forces produced by the “near-neighbor attractor” and “near-neighbor repeller”, which are selected from the set of memorized personal best positions and the current swarm based on the principles of “superior-and-nearer” and “inferior-and-nearer”, respectively. These two forces pull and push a particle to search for the nearby optimum. Hence, particles can simultaneously locate multiple optima quickly and precisely. Experiments are carried out to investigate the performance of NN-FPSO in comparison with a number of state-of-the-art PSO algorithms for locating multiple optima over a series of multimodal benchmark test functions. The experimental results indicate that the proposed NN-FPSO algorithm can efficiently locate multiple optima in multimodal fitness landscapes. This work was supported in part by the Key Program of National Natural Science Foundation (NNSF) of China under Grant 70931001, Grant 70771021, and Grant 70721001, the National Natural Science Foundation (NNSF) of China for Youth under Grant 61004121, Grant 70771021, the Science Fund for Creative Research Group of NNSF of China under Grant 60821063, the PhD Programs Foundation of Ministry of Education of China under Grant 200801450008, and in part by the Engineering and Physical Sciences Research Council (EPSRC) of UK under Grant EP/E060722/1 and Grant EP/E060722/2.
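
    A hedged sketch of how a “superior-and-nearer” attractor and an “inferior-and-nearer” repeller might be selected and combined into a velocity update is given below. The scoring functions, coefficients, and names are assumptions made for illustration, not the definitions used in NN-FPSO.

```python
import numpy as np

def near_neighbor_forces(i, positions, cur_fit, pbest_pos, pbest_fit, eps=1e-12):
    """Choose a near-neighbour attractor and repeller for particle i (sketch).

    Attractor: a memorised personal best that is both fitter and close,
    scored here as fitness gain per unit distance.  Repeller: a worse but
    nearby member of the current swarm, scored as fitness loss per unit
    distance.  The particle's own entries are excluded.
    """
    positions = np.asarray(positions, float)
    cur_fit = np.asarray(cur_fit, float)
    pbest_pos = np.asarray(pbest_pos, float)
    pbest_fit = np.asarray(pbest_fit, float)

    x, f = positions[i], cur_fit[i]
    d_pb = np.linalg.norm(pbest_pos - x, axis=1) + eps
    gain = (pbest_fit - f) / d_pb                # superior-and-nearer score
    gain[i] = -np.inf
    attractor = pbest_pos[int(np.argmax(gain))]

    d_cur = np.linalg.norm(positions - x, axis=1) + eps
    loss = (f - cur_fit) / d_cur                 # inferior-and-nearer score
    loss[i] = -np.inf
    repeller = positions[int(np.argmax(loss))]
    return attractor, repeller

def composite_velocity(v, x, attractor, repeller, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Velocity update pulled toward the attractor and pushed off the repeller."""
    rng = np.random.default_rng() if rng is None else rng
    pull = c1 * rng.random(x.size) * (attractor - x)
    push = c2 * rng.random(x.size) * (x - repeller)
    return w * v + pull + push
```

    A full NN-FPSO loop would add the usual PSO bookkeeping (personal-best updates, velocity clamping), which is omitted here.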

    Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates

    We analyze the performance of the 2-rate (1+λ) Evolutionary Algorithm (EA) with self-adjusting mutation rate control, its 3-rate counterpart, and a (1+λ) EA variant using multiplicative update rules on the OneMax problem. We compare their efficiency for offspring population sizes ranging up to λ = 3,200 and problem sizes up to n = 100,000. Our empirical results show that the ranking of the algorithms is very consistent across all tested dimensions, but strongly depends on the population size. While for small values of λ the 2-rate EA performs best, the multiplicative updates become superior starting from some threshold value of λ between 50 and 100. Interestingly, for population sizes around 50, the (1+λ) EA with static mutation rates performs on par with the best of the self-adjusting algorithms. We also consider how the lower bound p_min for the mutation rate influences the efficiency of the algorithms. We observe that for the 2-rate EA and the EA with multiplicative update rules the more generous bound p_min = 1/n² gives better results than p_min = 1/n when λ is small. For both algorithms the situation reverses for large λ. Comment: To appear at Genetic and Evolutionary Computation Conference (GECCO'19). v2: minor language revision.
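
    For readers unfamiliar with the 2-rate mechanism, the sketch below shows one generation of a (1+λ) EA in which half of the offspring mutate with probability r/(2n) and the other half with 2r/n, and r then follows the best offspring or is randomized. The clamping, tie breaking, and names are this sketch's own simplifications under those assumptions, not the paper's exact algorithm.

```python
import numpy as np

def two_rate_ea_step(parent, fitness, r, lam, p_min, rng=None):
    """One generation of a 2-rate (1+lambda) EA on a bit string (sketch).

    Half of the offspring use mutation probability r/(2n), the other half
    2r/n.  The rate parameter r then adopts the value used by the best
    offspring with probability 1/2 and is replaced by a uniform choice of
    r/2 or 2r otherwise, with the mutation rate kept in [p_min, 1/2].
    """
    rng = np.random.default_rng() if rng is None else rng
    n = parent.size
    best_child, best_fit, best_r = parent, fitness(parent), r
    for k in range(lam):
        r_k = r / 2 if k < lam // 2 else 2 * r           # sub-population rate
        p = min(max(r_k / n, p_min), 0.5)
        child = np.where(rng.random(n) < p, 1 - parent, parent)
        f = fitness(child)
        if f >= best_fit:
            best_child, best_fit, best_r = child, f, r_k
    r_new = best_r if rng.random() < 0.5 else float(rng.choice([r / 2, 2 * r]))
    r_new = min(max(r_new, n * p_min), n / 4)            # keep r in a sane range
    return best_child, r_new

# toy usage on OneMax with n = 100 and p_min = 1/n^2
rng = np.random.default_rng(1)
x, r = rng.integers(0, 2, size=100), 2.0
for _ in range(200):
    x, r = two_rate_ea_step(x, lambda s: int(s.sum()), r, lam=10, p_min=1e-4, rng=rng)
```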

    The Novel Approach of Adaptive Twin Probability for Genetic Algorithm

    The performance of a GA is measured and analyzed in terms of its performance parameters against variations in its genetic operators and associated parameters. Over the last four decades a large number of researchers have worked on the performance of GAs and its enhancement. This earlier work on analyzing the performance of GAs motivates further investigation of the exploration and exploitation characteristics and their impact on the behavior and overall performance of a GA. This paper introduces a novel approach of adaptive twin probability associated with an advanced twin operator that enhances the performance of the GA. The design of the advanced twin operator is extrapolated from twin offspring birth due to single ovulation in natural genetic systems, as mentioned in earlier works. The twin probability of this operator is adaptively varied based on the fitness of the best individual, thereby relieving the GA user from statically defining its value. This approach is tested on standard benchmark optimization test functions. The experimental results show increased accuracy in terms of the best individual and reduced convergence time. Comment: 7 pages, International Journal of Advanced Studies in Computer Science and Engineering (IJASCSE), Volume 2, Special Issue 2, 201
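
    The abstract leaves the twin operator and its adaptation largely unspecified, so the following sketch is only one plausible reading of an operator probability tied to the best individual's fitness; the update rule, constants, and names are all assumptions.

```python
def adapt_twin_probability(p_twin, best_fit, prev_best_fit,
                           step=0.05, p_min=0.05, p_max=0.5):
    """Adapt a 'twin' operator probability from the best individual (sketch).

    One plausible reading of the abstract, used here: raise the probability
    when the best fitness stagnates and lower it when progress is made,
    keeping it within [p_min, p_max].
    """
    if best_fit > prev_best_fit:
        p_twin -= step        # progress: rely less on the twin operator
    else:
        p_twin += step        # stagnation: produce more twins for diversity
    return min(max(p_twin, p_min), p_max)
```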