
    A New Metaheuristic Bat-Inspired Algorithm

    Metaheuristic algorithms such as particle swarm optimization, the firefly algorithm and harmony search have become powerful methods for solving many tough optimization problems. In this paper, we propose a new metaheuristic method, the Bat Algorithm, based on the echolocation behaviour of bats. We also aim to combine the advantages of existing algorithms in the new bat algorithm. After a detailed formulation and explanation of its implementation, we compare the proposed algorithm with other existing algorithms, including genetic algorithms and particle swarm optimization. Simulations show that the proposed algorithm appears superior to the other algorithms, and further studies are also discussed. (Comment: 10 pages, 2 figures.)
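    The abstract does not reproduce the update rules, so the following is a minimal sketch of the commonly cited bat-algorithm mechanics (frequency-tuned velocity and position updates, a loudness-gated acceptance rule, and a pulse-rate-controlled local random walk); the parameter values, bound handling and sphere objective are illustrative assumptions, not taken from the paper.

    import numpy as np

    def bat_algorithm(obj, dim=10, n_bats=20, iters=200, f_min=0.0, f_max=2.0,
                      alpha=0.9, gamma=0.9, lower=-5.0, upper=5.0, seed=0):
        """Minimal bat-algorithm sketch: frequency-tuned, PSO-like global moves
        plus a loudness/pulse-rate controlled local random walk."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lower, upper, (n_bats, dim))     # bat positions
        v = np.zeros((n_bats, dim))                      # bat velocities
        A = np.ones(n_bats)                              # loudness, decreases on success
        r0 = rng.uniform(0.0, 0.4, n_bats)               # initial pulse emission rates
        r = r0.copy()                                    # pulse rates, increase over time
        fit = np.apply_along_axis(obj, 1, x)
        best_i = fit.argmin()
        best, best_fit = x[best_i].copy(), fit[best_i]

        for t in range(1, iters + 1):
            for i in range(n_bats):
                # frequency-tuned move toward the current global best
                f = f_min + (f_max - f_min) * rng.random()
                v[i] += (x[i] - best) * f
                cand = np.clip(x[i] + v[i], lower, upper)
                # local random walk around the best, gated by the pulse rate
                if rng.random() > r[i]:
                    cand = np.clip(best + 0.01 * A.mean() * rng.normal(size=dim),
                                   lower, upper)
                cand_fit = obj(cand)
                # conditional acceptance: improving solution and a loud enough bat
                if cand_fit <= fit[i] and rng.random() < A[i]:
                    x[i], fit[i] = cand, cand_fit
                    A[i] *= alpha                        # quieter after a success
                    r[i] = r0[i] * (1.0 - np.exp(-gamma * t))
                if fit[i] < best_fit:
                    best, best_fit = x[i].copy(), fit[i]
        return best, best_fit

    # illustrative usage on a 10-dimensional sphere function (an assumed objective)
    best, val = bat_algorithm(lambda z: float(np.sum(z ** 2)))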

    A generalized approach to construct benchmark problems for dynamic optimization

    Copyright @ Springer-Verlag Berlin Heidelberg 2008. There has been a growing interest in studying evolutionary algorithms in dynamic environments in recent years due to their importance in real applications. However, different dynamic test problems have been used to test and compare the performance of algorithms. This paper proposes a generalized dynamic benchmark generator (GDBG) that can be instantiated into the binary space, real space and combinatorial space. This generator can present a set of different properties to test algorithms by tuning some control parameters. Some experiments are carried out in the real space to study the performance of the generator. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/1.

    Benchmark generator for CEC 2009 competition on dynamic optimization

    Evolutionary algorithms (EAs) have been widely applied to solve stationary optimization problems. However, many real-world applications are actually dynamic. In order to study the performance of EAs in dynamic environments, one important task is to develop proper dynamic benchmark problems. Over the years, researchers have applied a number of dynamic test problems to compare the performance of EAs in dynamic environments, e.g., the "moving peaks" benchmark (MPB) proposed by Branke [1], the DF1 generator introduced by Morrison and De Jong [6], the single- and multi-objective dynamic test problem generator suggested by Jin and Sendhoff [2], which dynamically combines different objective functions of existing stationary multi-objective benchmark problems, Yang and Yao's exclusive-or (XOR) operator [10, 11, 12], and Kang's dynamic traveling salesman problem (DTSP) [3] and dynamic multi-knapsack problem (DKP). Though a number of DOP generators exist in the literature, there is so far no unified approach to constructing dynamic problems across the binary space, real space and combinatorial space. This report uses the generalized dynamic benchmark generator (GDBG) proposed in [4], which constructs dynamic environments for all three solution spaces.
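    As a concrete illustration of the kind of dynamic benchmark discussed above, the sketch below implements a simple moving-peaks-style fitness function in the real space, in which peak heights, widths and centres drift each time the environment changes; it is an illustrative toy in the spirit of MPB, not the GDBG instantiation defined in [4], and all parameter values are assumptions.

    import numpy as np

    class MovingPeaks:
        """Toy dynamic landscape: the maximum over several cone-shaped peaks
        whose heights, widths and centres drift each time change() is called."""
        def __init__(self, dim=2, n_peaks=5, bounds=(0.0, 100.0), seed=0):
            self.rng = np.random.default_rng(seed)
            self.dim, self.bounds = dim, bounds
            self.centres = self.rng.uniform(*bounds, (n_peaks, dim))
            self.heights = self.rng.uniform(30.0, 70.0, n_peaks)
            self.widths = self.rng.uniform(1.0, 12.0, n_peaks)

        def __call__(self, x):
            # fitness = value of the highest cone-shaped peak at x
            d = np.linalg.norm(self.centres - np.asarray(x, dtype=float), axis=1)
            return float(np.max(self.heights - self.widths * d))

        def change(self, severity=1.0):
            # drift every peak: shift centres and perturb heights and widths
            shift = self.rng.normal(0.0, severity, self.centres.shape)
            self.centres = np.clip(self.centres + shift, *self.bounds)
            self.heights += self.rng.normal(0.0, 7.0 * severity, self.heights.shape)
            self.widths = np.abs(self.widths + self.rng.normal(0.0, severity,
                                                               self.widths.shape))

    # illustrative usage: evaluate a point, then trigger an environment change
    f = MovingPeaks()
    before = f([50.0, 50.0])
    f.change(severity=1.0)
    after = f([50.0, 50.0])   # generally differs from `before`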

    An adaptive learning particle swarm optimizer for function optimization

    This article is posted here with permission of the IEEE - Copyright @ 2009 IEEE. Traditional particle swarm optimization (PSO) suffers from the premature convergence problem, which usually results in PSO being trapped in local optima. This paper presents an adaptive learning PSO (ALPSO) based on a variant PSO learning strategy. In ALPSO, the learning mechanism of each particle is separated into three parts: its own historical best position, the closest neighbor and the global best one. By using this individual-level adaptive technique, a particle can better guide its exploration and exploitation behaviour. A set of 21 test functions, including unrotated, rotated and composition functions, was used to test the performance of ALPSO. Comparison results over several PSO variants show that ALPSO delivers outstanding performance on most test functions, in particular fast convergence. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/E060722/1.
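    To make the three-source learning idea concrete, the sketch below shows a PSO velocity update in which each particle draws its exemplar from one of three sources (its own historical best, the personal best of its closest neighbour, or the global best), with per-particle selection probabilities nudged toward whichever source last produced an improvement; this simple reward rule and all parameter values are assumptions standing in for the actual ALPSO adaptation mechanism.

    import numpy as np

    def alpso_like(obj, dim=10, n=30, iters=300, w=0.7, c=1.5,
                   lower=-5.0, upper=5.0, seed=0):
        """PSO sketch with three adaptively selected learning sources per particle:
        0 = own pbest, 1 = closest neighbour's pbest, 2 = global best."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lower, upper, (n, dim))
        v = np.zeros((n, dim))
        pbest, pfit = x.copy(), np.apply_along_axis(obj, 1, x)
        prob = np.full((n, 3), 1.0 / 3.0)          # per-particle source probabilities

        for _ in range(iters):
            g = pbest[pfit.argmin()]
            for i in range(n):
                src = rng.choice(3, p=prob[i])
                if src == 0:
                    exemplar = pbest[i]
                elif src == 1:
                    # personal best of the spatially closest other particle
                    d = np.linalg.norm(x - x[i], axis=1)
                    d[i] = np.inf
                    exemplar = pbest[d.argmin()]
                else:
                    exemplar = g
                v[i] = w * v[i] + c * rng.random(dim) * (exemplar - x[i])
                x[i] = np.clip(x[i] + v[i], lower, upper)
                f = obj(x[i])
                if f < pfit[i]:                     # reward the source that improved pbest
                    pbest[i], pfit[i] = x[i].copy(), f
                    prob[i, src] += 0.1
                    prob[i] /= prob[i].sum()
        return pbest[pfit.argmin()], pfit.min()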

    Handling boundary constraints for particle swarm optimization in high-dimensional search space

    Despite the fact that the popular particle swarm optimizer (PSO) is currently being extensively applied to many real-world problems that often have high-dimensional and complex fitness landscapes, the effects of boundary constraints on PSO have not attracted adequate attention in the literature. However, in accordance with the theoretical analysis in [11], our numerical experiments show that in high-dimensional search spaces particles are very likely to fly outside the boundary during the first few iterations. Consequently, the method used to handle boundary violations is critical to the performance of PSO. In this study, we reveal that the widely used random and absorbing bound-handling schemes may paralyze PSO on high-dimensional and complex problems. We also explore in detail the distinct mechanisms responsible for the failures of these two bound-handling schemes. Finally, we suggest that using high-dimensional and complex benchmark functions, such as the composition functions in [19], is a prerequisite for identifying the potential problems of applying PSO to many real-world applications, because certain properties of standard benchmark functions leave these problems hidden. © 2011 Elsevier Inc. All rights reserved.
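    For reference, the two bound-handling schemes discussed above can be written as small position-repair functions; the sketch below follows their usual definitions, and the exact velocity treatment is an assumption rather than the precise variants analysed in the paper.

    import numpy as np

    def absorbing_bounds(x, v, lower, upper):
        """Absorbing scheme: clamp violating coordinates onto the boundary
        and zero the corresponding velocity components."""
        out = (x < lower) | (x > upper)
        x = np.clip(x, lower, upper)
        v = np.where(out, 0.0, v)
        return x, v

    def random_bounds(x, v, lower, upper, rng):
        """Random scheme: re-sample violating coordinates uniformly
        inside the feasible box, leaving the velocity unchanged."""
        out = (x < lower) | (x > upper)
        x = np.where(out, rng.uniform(lower, upper, x.shape), x)
        return x, v

    # illustrative usage after a PSO position update
    rng = np.random.default_rng(0)
    x = np.array([6.2, -7.5, 0.3]); v = np.array([1.0, -2.0, 0.1])
    x_abs, v_abs = absorbing_bounds(x.copy(), v.copy(), -5.0, 5.0)
    x_rnd, v_rnd = random_bounds(x.copy(), v.copy(), -5.0, 5.0, rng)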

    A new evolutionary search strategy for global optimization of high-dimensional problems

    Global optimization of high-dimensional problems in practical applications remains a major challenge to the research community of evolutionary computation. The weakness of randomization-based evolutionary algorithms in searching high-dimensional spaces is demonstrated in this paper. A new strategy, SP-UCI, is developed to treat the complexity caused by high dimensionality. This strategy features a slope-based searching kernel and a scheme for maintaining the particle population's capability of searching over the full search space. Examination of this strategy on a suite of sophisticated composition benchmark functions demonstrates that SP-UCI surpasses two popular algorithms, the particle swarm optimizer (PSO) and differential evolution (DE), on high-dimensional problems. Experimental results also corroborate the argument that, in high-dimensional optimization, only problems with well-formative fitness landscapes are solvable, and slope-based schemes are preferable to randomization-based ones. © 2011 Elsevier Inc. All rights reserved.

    Feedback learning particle swarm optimization

    This is the author's version of a work that was accepted for publication in Applied Soft Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published and is available at the link below - Copyright @ Elsevier 2011. In this paper, a feedback learning particle swarm optimization algorithm with quadratic inertia weight (FLPSO-QIW) is developed to solve optimization problems. The proposed FLPSO-QIW consists of four steps. Firstly, the inertia weight is calculated by a designed quadratic function instead of the conventional linearly decreasing function. Secondly, acceleration coefficients are determined not only by the generation number but also by the search environment described by each particle's history best fitness information. Thirdly, the feedback fitness information of each particle is used to automatically design the learning probabilities. Fourthly, an elite stochastic learning (ELS) method is used to refine the solution. The FLPSO-QIW has been comprehensively evaluated on 18 unimodal, multimodal and composite benchmark functions with or without rotation. Compared with various state-of-the-art PSO algorithms, the performance of FLPSO-QIW is promising and competitive. The effects of parameter adaptation, parameter sensitivity and the proposed mechanisms are discussed in detail. This research was partially supported by the National Natural Science Foundation of PR China (Grant No 60874113), the Research Fund for the Doctoral Program of Higher Education (Grant No 200802550007), the Key Creative Project of Shanghai Education Community (Grant No 09ZZ66), the Key Foundation Project of Shanghai (Grant No 09JC1400700), the International Science and Technology Cooperation Project of China under Grant 2009DFA32050, and the Alexander von Humboldt Foundation of Germany.
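    The abstract does not give the quadratic inertia-weight formula, so the sketch below only illustrates the general idea of replacing a linearly decreasing inertia weight with a quadratic schedule over the generations; the specific quadratic form and the end-point values (0.9 down to 0.4) are conventional PSO assumptions, not the FLPSO-QIW definition.

    def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
        """Conventional linearly decreasing inertia weight."""
        return w_start - (w_start - w_end) * t / t_max

    def quadratic_inertia(t, t_max, w_start=0.9, w_end=0.4):
        """Illustrative quadratic schedule: decreases slowly at first,
        then faster toward w_end as t approaches t_max."""
        frac = t / t_max
        return w_start - (w_start - w_end) * frac ** 2

    # illustrative comparison over a 100-generation run
    for t in (0, 25, 50, 75, 100):
        print(t, round(linear_inertia(t, 100), 3), round(quadratic_inertia(t, 100), 3))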

    Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL) optimization framework

    The simplicity and flexibility of meta-heuristic optimization algorithms have attracted considerable attention in the field of optimization. Different optimization methods, however, have algorithm-specific strengths and limitations, and selecting the best-performing algorithm for a specific problem is a tedious task. We introduce a new hybrid optimization framework, entitled Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL), which combines the strengths of different evolutionary algorithms (EAs) in a parallel computing scheme. SC-SAHEL assesses the performance of different EAs during population evolution, such as their capability to escape local attractors, speed and convergence, since each individual EA suits different response surfaces. The SC-SAHEL algorithm is benchmarked over 29 conceptual test functions and a real-world hydropower reservoir model case study. Results show that the hybrid SC-SAHEL algorithm is rigorous and effective in finding the global optimum for a majority of test cases, and that it is computationally efficient in comparison to individual EAs.
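    The abstract describes a shuffled-complex scheme in which the population is partitioned into complexes, each complex is evolved by a (possibly different) EA, and the complexes are then shuffled; the skeleton below sketches only that control flow with a placeholder evolver, and everything in it (function names, the round-robin dealing, the greedy placeholder EA) is an illustrative assumption rather than the published SC-SAHEL procedure.

    import numpy as np

    def shuffled_hybrid(obj, evolvers, dim=10, n_complexes=4, complex_size=10,
                        cycles=20, lower=-5.0, upper=5.0, seed=0):
        """Skeleton of a shuffled-complex hybrid: split the population into
        complexes, evolve each with one of several EAs, then shuffle."""
        rng = np.random.default_rng(seed)
        pop = rng.uniform(lower, upper, (n_complexes * complex_size, dim))
        fit = np.apply_along_axis(obj, 1, pop)

        for _ in range(cycles):
            # rank by fitness and deal points into complexes round-robin;
            # re-ranking each cycle is what shuffles the complexes
            order = np.argsort(fit)
            pop, fit = pop[order], fit[order]
            for c in range(n_complexes):
                idx = np.arange(c, len(pop), n_complexes)
                evolver = evolvers[c % len(evolvers)]   # assign an EA to this complex
                pop[idx], fit[idx] = evolver(obj, pop[idx], fit[idx], rng,
                                             lower, upper)
        best = fit.argmin()
        return pop[best], fit[best]

    def random_walk_evolver(obj, members, member_fit, rng, lower, upper):
        """Placeholder EA: greedy Gaussian perturbation of each complex member."""
        for i in range(len(members)):
            cand = np.clip(members[i] + rng.normal(0.0, 0.3, members[i].shape),
                           lower, upper)
            f = obj(cand)
            if f < member_fit[i]:
                members[i], member_fit[i] = cand, f
        return members, member_fit

    # illustrative usage with a single placeholder evolver on a sphere function
    best, val = shuffled_hybrid(lambda z: float(np.sum(z ** 2)),
                                evolvers=[random_walk_evolver])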

    The Barrier Tree Benchmark: Many Basins and Double Funnels

    The Barrier Tree Benchmark (BTB) is a principled generator of continuous real-valued landscapes: problems of known topography and critical-point structure can be systematically designed and deployed in algorithm comparison studies. A previous BTB study focused on a single funnel and a double basin. This work demonstrates algorithm performance on BTB instances with many basins and on double funnels. A methodology for principled algorithm comparison on families of problems of similar complexity and structure is proposed. It is hoped that the BTB will address a parameter-tuning pathology of current problem benchmarks, namely that common optimisation algorithms require widely different control parameter settings for optimal performance on differing problem classes. This pathology is traced to the irregular and arbitrary composition of standard benchmarks.
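    To illustrate the kind of structure the benchmark targets, the sketch below builds a toy continuous landscape with two funnels (a global one and a deceptive one) by taking the minimum of two quadratic bowls and superimposing a ripple that creates many small local basins; this is purely an illustrative stand-in, not the barrier-tree construction used by the BTB, and all constants are assumptions.

    import numpy as np

    def double_funnel(x, good_centre=-2.0, bad_centre=2.5,
                      bad_offset=0.5, ripple=0.3):
        """Toy double-funnel landscape in d dimensions: two quadratic funnels
        (the right-hand one offset upward, i.e. deceptive) plus a cosine
        ripple that adds many small local basins."""
        x = np.asarray(x, dtype=float)
        good = np.sum((x - good_centre) ** 2)              # funnel holding the global optimum
        bad = np.sum((x - bad_centre) ** 2) + bad_offset   # competing, slightly worse funnel
        basins = ripple * np.sum(1.0 - np.cos(4.0 * np.pi * x))
        return float(min(good, bad) + basins)

    # illustrative evaluation: the global funnel bottom vs. the deceptive one
    print(double_funnel([-2.0, -2.0]))   # ~0.0 (global optimum region)
    print(double_funnel([2.5, 2.5]))     # ~bad_offset (deceptive funnel bottom)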