
    Adaptive multimodal continuous ant colony optimization

    Multimodal optimization, which seeks multiple optima simultaneously, has attracted increasing attention but remains challenging. Taking advantage of the high diversity that ant colony optimization algorithms preserve, this paper extends them to multimodal optimization. First, combined with current niching methods, an adaptive multimodal continuous ant colony optimization algorithm is introduced, in which an adaptive parameter adjustment that accounts for differences among niches is developed. Second, to accelerate convergence, a differential evolution mutation operator is used as an alternative means of building base vectors from which ants construct new solutions. Then, to enhance exploitation, a local search scheme based on the Gaussian distribution is self-adaptively performed around the seeds of niches. Together, these components afford a good balance between exploration and exploitation. Extensive experiments on 20 widely used benchmark multimodal functions are conducted to investigate the influence of each algorithmic component, and results are compared with several state-of-the-art multimodal algorithms and winners of competitions on multimodal optimization. These comparisons demonstrate the competitive efficiency and effectiveness of the proposed algorithm, especially on complex problems with many local optima.
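    The abstract combines three ingredients: a continuous ant colony archive, DE-style base vectors, and Gaussian local search around niche seeds. The sketch below illustrates how these pieces can fit together in an ACO_R-style loop; it is only an illustration under assumed parameter choices, not the paper's algorithm, and all names (e.g. `continuous_aco`) are hypothetical.

```python
# Minimal sketch: archive-based continuous ACO with an optional DE-style
# base vector and a Gaussian local search around the best solution ("seed").
import numpy as np

def continuous_aco(f, dim, bounds, n_ants=20, archive=30, iters=200, rng=None):
    rng = rng or np.random.default_rng(0)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(archive, dim))        # solution archive
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        order = np.argsort(fit)                         # best first (minimization)
        X, fit = X[order], fit[order]
        sigma = np.std(X, axis=0) + 1e-12               # per-dimension spread
        new = []
        for _ in range(n_ants):
            if rng.random() < 0.5:                      # DE-style base vector
                a, b, c = X[rng.choice(archive, 3, replace=False)]
                base = a + 0.5 * (b - c)
            else:                                       # sample around an archive member
                base = X[rng.integers(archive)]
            new.append(np.clip(base + rng.normal(0, sigma), lo, hi))
        # Gaussian local search around the current seed (best solution)
        new.append(np.clip(X[0] + rng.normal(0, 0.1 * sigma), lo, hi))
        newX = np.array(new)
        newfit = np.array([f(x) for x in newX])
        allX, allfit = np.vstack([X, newX]), np.concatenate([fit, newfit])
        keep = np.argsort(allfit)[:archive]             # truncate back to archive size
        X, fit = allX[keep], allfit[keep]
    return X[0], fit[0]

# Usage: minimize a simple multimodal test function (Rastrigin)
best, val = continuous_aco(lambda x: np.sum(x**2 - 10*np.cos(2*np.pi*x) + 10),
                           dim=2, bounds=(-5.12, 5.12))
```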

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in the genetic algorithm (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work treats the hybridization as a mechanistic parallel superposition, whereas research has shown that constructing superior exemplars for PSO is more effective. Hence, this paper first develops a new framework that organically hybridizes PSO with another optimization technique for "learning." This leads to a generalized "learning PSO" paradigm, the *L-PSO. The paradigm is composed of two cascading layers, the first for exemplar generation and the second for particle updates as in a standard PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm is proposed, termed genetic learning PSO (GL-PSO). In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, the historical search information of particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, the global search ability and search efficiency of PSO are both enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of the GL-PSO.
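    The two-layer idea (exemplar generation by genetic operators, then a PSO-style update toward the exemplar) can be sketched as below. This is a rough illustration under assumed operator choices and constants; the actual GL-PSO exemplar construction and selection rules differ in detail, and names such as `gl_pso_like` are hypothetical.

```python
# Minimal sketch of a "learning PSO" loop: exemplars are bred from personal
# bests via crossover, mutation, and selection, then particles move toward
# their exemplars with a standard velocity update.
import numpy as np

def gl_pso_like(f, dim, bounds, n=30, iters=300, w=0.7, c=1.5, pm=0.01, rng=None):
    rng = rng or np.random.default_rng(1)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n, dim)); V = np.zeros((n, dim))
    P = X.copy(); pf = np.array([f(x) for x in P])       # personal bests
    E = P.copy(); ef = pf.copy()                         # exemplars
    for _ in range(iters):
        # --- exemplar generation layer: crossover, mutation, selection ---
        for i in range(n):
            mate = P[rng.integers(n)]                    # another particle's personal best
            mask = rng.random(dim) < 0.5
            child = np.where(mask, P[i], mate)           # uniform crossover
            child = np.where(rng.random(dim) < pm,
                             rng.uniform(lo, hi, dim), child)  # mutation
            cf = f(child)
            if cf < ef[i]:                               # selection: keep the better exemplar
                E[i], ef[i] = child, cf
        # --- particle update layer: learn from the exemplar ---
        r = rng.random((n, dim))
        V = w * V + c * r * (E - X)
        X = np.clip(X + V, lo, hi)
        fx = np.array([f(x) for x in X])
        better = fx < pf
        P[better], pf[better] = X[better], fx[better]
    g = P[np.argmin(pf)]
    return g, pf.min()
```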

    Multimodal estimation of distribution algorithms

    Taking advantage of the high diversity that estimation of distribution algorithms (EDAs) preserve, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, both of which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Exploiting the complementary properties of Gaussian and Cauchy distributions, offspring are generated at the niche level by alternating between the two, which also potentially offers a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
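    The niche-level sampling described here, grouping the population into species and drawing offspring around each niche with either a Gaussian or a Cauchy distribution, can be illustrated as follows. The speciation rule, the 50/50 alternation, and the helper names (`speciate`, `eda_offspring`) are illustrative assumptions rather than the paper's exact scheme.

```python
# Minimal sketch of a niche-level EDA step: species form around fitness-sorted
# seeds, then offspring are sampled around each niche's mean, alternating
# Gaussian sampling with heavier-tailed Cauchy sampling.
import numpy as np

def speciate(pop, fit, species_size):
    order = list(np.argsort(fit))                        # best individuals first
    species = []
    while order:
        seed = order.pop(0)                              # best remaining = niche seed
        d = np.linalg.norm(pop[order] - pop[seed], axis=1) if order else np.array([])
        nearest = [order[j] for j in np.argsort(d)[:species_size - 1]]
        for j in nearest:
            order.remove(j)
        species.append([seed] + nearest)
    return species

def eda_offspring(pop, fit, species_size, rng):
    offspring = []
    for idx in speciate(pop, fit, species_size):
        members = pop[idx]
        mu, sd = members.mean(axis=0), members.std(axis=0) + 1e-12
        for _ in range(len(idx)):
            if rng.random() < 0.5:
                offspring.append(mu + rng.normal(0, sd))                  # Gaussian
            else:
                offspring.append(mu + sd * rng.standard_cauchy(mu.size))  # Cauchy
    return np.array(offspring)
```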

    The Novel Approach of Adaptive Twin Probability for Genetic Algorithm

    The performance of a genetic algorithm (GA) is measured and analyzed in terms of its performance parameters against variations in its genetic operators and associated parameters. Over the last four decades, a large number of researchers have worked on the performance of GA and its enhancement. This earlier research on analyzing the performance of GA reinforces the need to further investigate its exploration and exploitation characteristics and their impact on the behavior and overall performance of GA. This paper introduces the novel approach of an adaptive twin probability associated with the advanced twin operator, which enhances the performance of GA. The design of the advanced twin operator is extrapolated from twin offspring born of a single ovulation in natural genetic systems, as described in earlier works. The twin probability of this operator is adaptively varied based on the fitness of the best individual, thereby relieving the GA user from statically defining its value. This approach of adaptive twin probability is tested on standard benchmark optimization test functions. The experimental results show increased accuracy in terms of the best individual and reduced convergence time.
    Comment: 7 pages, International Journal of Advanced Studies in Computer Science and Engineering (IJASCSE), Volume 2, Special Issue 2, 201
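    The abstract does not state the exact adaptation rule, so the snippet below is only a generic illustration of adapting an operator probability from the trend of the best individual's fitness: raise the probability when the search stagnates, lower it when it improves. All names and constants are assumptions for illustration.

```python
# Generic sketch (not the paper's rule): adapt an operator probability from
# the best-fitness trend in a minimization setting.
def adapt_probability(p, best_now, best_prev, step=0.05, p_min=0.05, p_max=0.95):
    """Return an updated operator probability based on the best fitness trend."""
    if best_now < best_prev:          # improvement observed: exploit more
        return max(p_min, p - step)
    return min(p_max, p + step)       # stagnation: apply the operator more often
```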

    Orthogonal methods based ant colony search for solving continuous optimization problems

    Research into ant colony algorithms for solving continuous optimization problems forms one of the most significant and promising areas in swarm computation. Although traditional ant algorithms are designed for combinatorial optimization, they have shown great potential in solving a wide range of optimization problems, including continuous optimization. Aimed at solving continuous problems effectively, this paper develops a novel ant algorithm termed "continuous orthogonal ant colony" (COAC), whose pheromone deposit mechanism enables ants to search for solutions collaboratively and effectively. By using the orthogonal design method, ants in the feasible domain can explore their chosen regions rapidly and efficiently. By implementing an "adaptive regional radius" method, the proposed algorithm reduces the probability of being trapped in local optima and therefore enhances global search capability and accuracy. An elitist strategy is also employed to preserve the most valuable points. The performance of COAC is compared with that of two other ant algorithms for continuous optimization, API and CACO, on seventeen functions in the continuous domain. The results demonstrate that the proposed COAC algorithm outperforms the others.
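    The orthogonal design method mentioned here samples a region with a small, balanced subset of factor-level combinations rather than a full grid. Below is a minimal sketch using the standard L9(3^4) orthogonal array to generate nine probe points inside a box region; the region and radius handling are illustrative assumptions, not the paper's exact COAC scheme, and `orthogonal_points` is a hypothetical helper name.

```python
# Minimal sketch of orthogonal-design sampling in a region: each dimension is
# discretized into three levels around the region centre, and the L9(3^4)
# orthogonal array selects 9 balanced level combinations instead of 3^4 = 81.
import numpy as np

# Rows of the L9(3^4) orthogonal array (levels 0, 1, 2 for up to 4 factors).
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def orthogonal_points(center, radius):
    """Generate 9 orthogonal-design sample points in a box around `center`."""
    center, radius = np.asarray(center, float), np.asarray(radius, float)
    dim = center.size                                    # valid for dim <= 4 with L9
    levels = np.stack([center - radius, center, center + radius])  # shape (3, dim)
    cols = L9[:, :dim]                                   # use the first `dim` columns
    return np.array([[levels[cols[r, d], d] for d in range(dim)] for r in range(9)])

# Usage: probe a 2-D region; an outer loop could then shrink or move the
# region's radius around the best of the nine points (the "adaptive regional
# radius" idea, sketched here only in spirit).
pts = orthogonal_points(center=[0.0, 0.0], radius=[1.0, 1.0])
```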