
    Evolved parameterized selection for evolutionary algorithms

    Selection functions enable evolutionary algorithms (EAs) to apply selection pressure to a population of individuals by regulating the probability that an individual's genes survive, typically based on fitness. Various conventional fitness-based selection functions exist, each providing a unique method of selecting individuals based on their fitness, their fitness ranking within the population, and/or various other factors. However, the full space of selection algorithms is limited only by the maximum algorithm size, and each possible selection algorithm is optimal for some EA configuration applied to a particular problem class. Therefore, improved performance is likely to be obtained by tuning an EA's selection algorithm to the problem at hand, rather than employing a conventional selection function. This thesis details an investigation of the extent to which performance can be improved by tuning the selection algorithm. We do this by employing a hyper-heuristic to explore the space of algorithms that determine the methods used to select individuals from the population. We show, with both a conventional EA and a Covariance Matrix Adaptation Evolution Strategy, the increase in performance obtained with a tuned selection algorithm versus conventional selection functions. Specifically, we measure performance on instances from several benchmark problem classes, including separate testing instances to show generalization of the improved performance. This thesis consists of work that was presented at the Genetic and Evolutionary Computation Conference (GECCO) in 2018, as well as work that will be submitted to GECCO in 2019. --Abstract, page iii
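
    To make the notion of a selection function concrete, below is a minimal Python sketch of two conventional fitness-based selection functions of the kind the thesis uses as baselines; the names and signatures are illustrative assumptions, not code from the thesis, and fitness is assumed to be maximized.

        import random

        def fitness_proportional(population, fitnesses):
            """Select one individual with probability proportional to its fitness."""
            total = sum(fitnesses)
            pick = random.uniform(0.0, total)
            acc = 0.0
            for individual, fitness in zip(population, fitnesses):
                acc += fitness
                if acc >= pick:
                    return individual
            return population[-1]

        def tournament(population, fitnesses, k=3):
            """Select the fittest of k randomly chosen individuals."""
            contenders = random.sample(range(len(population)), k)
            return population[max(contenders, key=lambda i: fitnesses[i])]

    A hyper-heuristic as described above would search the space of such functions, composing primitives like ranking, sampling, and fitness transforms rather than committing to either fixed scheme.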

    Tuning of the structure and parameters of a neural network using an improved genetic algorithm

    This paper presents the tuning of the structure and parameters of a neural network using an improved genetic algorithm (GA). It is also shown that the improved GA performs better than the standard GA on some benchmark test functions. A neural network with switches introduced to its links is proposed. By doing this, the proposed neural network can learn both the input-output relationships of an application and the network structure using the improved GA. The number of hidden nodes is chosen manually, increasing it from a small number until the learning performance in terms of fitness value is good enough. Application examples on sunspot forecasting and associative memory are given to show the merits of the improved GA and the proposed neural network.
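
    A minimal sketch of the "switches on links" idea, assuming one hidden layer: each chromosome carries a binary switch gene and a real-valued weight gene per link, so the GA can prune structure and tune parameters simultaneously. The encoding and dimensions below are assumptions for illustration, not the paper's implementation.

        import numpy as np

        n_in, n_hidden, n_out = 4, 6, 1
        n_links = n_in * n_hidden + n_hidden * n_out

        def decode(chromosome):
            """Split a GA chromosome into link switches and weights, then mask."""
            switches = chromosome[:n_links] > 0.5         # binary switch genes
            weights = chromosome[n_links:]                # real-valued weight genes
            effective = np.where(switches, weights, 0.0)  # an open switch removes the link
            W1 = effective[:n_in * n_hidden].reshape(n_in, n_hidden)
            W2 = effective[n_in * n_hidden:].reshape(n_hidden, n_out)
            return W1, W2

        def forward(x, W1, W2):
            """Feed-forward pass with tanh hidden activations."""
            return np.tanh(x @ W1) @ W2

    The GA's fitness function would evaluate forward() on training data, so chromosomes that switch off superfluous links while still fitting the data are favoured.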

    A modified flower pollination algorithm and carnivorous plant algorithm for solving engineering optimization problem

    Optimization is an essential element of mechanical engineering and has never been an easy task. Hence, using an effective optimiser to solve such highly complex problems is important. In this study, two metaheuristic algorithms, namely, a modified flower pollination algorithm (MFPA) and a carnivorous plant algorithm (CPA), were proposed. The flower pollination algorithm (FPA) is a biomimicry optimisation algorithm inspired by natural pollination. Although FPA showed better convergence than particle swarm optimisation and the genetic algorithm in the pioneering study, improving the convergence characteristic of FPA still needs more work. To speed up the convergence, three modifications were presented: (i) employing chaos theory in the initialisation to enhance the diversity of the initial population in the search space, (ii) replacing FPA’s local search strategy with the frog leaping algorithm to improve intensification, and (iii) integrating an inertia weight into FPA’s global search strategy to adjust the searching ability of the global strategy. CPA, on the other hand, was developed based on the inspiration of how carnivorous plants adapt to survive in harsh environments. Both MFPA and CPA were first evaluated using twenty-five well-known benchmark functions with different characteristics and seven Congress on Evolutionary Computation (CEC) 2017 test functions. Their convergence characteristics and computational efficiency were analysed and compared with eight widely used metaheuristic algorithms, with the superiority validated using the Wilcoxon signed-rank test. The applicability of MFPA and CPA was further examined on eighteen mechanical engineering design problems and two challenging real-world applications: controlling the orientation of a five-degrees-of-freedom robotic arm and moving-object tracking in a complicated environment. On the classical benchmark functions, CPA ranked first; it also obtained the first rank on the CEC04 and CEC07 modern test functions. Both CPA and MFPA showed promising results on the mechanical engineering design problems. CPA improved on the particle swarm optimisation algorithm in terms of the best fitness value by 69.40-95.99% in the optimisation of the robotic arm. Meanwhile, MFPA demonstrated better tracking performance in the considered case studies, with at least 52.99% better fitness values and fewer function evaluations than the competitors.
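
    Two of the three MFPA modifications lend themselves to a short sketch: logistic-map chaotic initialisation and an inertia weight in FPA's global (Levy-flight) pollination step. The update form and parameter names below are assumptions for illustration, not the exact equations of the study.

        import math
        import numpy as np

        def chaotic_init(pop_size, dim, lower, upper):
            """Initialise the population with the logistic map z <- 4z(1 - z)."""
            z = np.random.uniform(0.01, 0.99, dim)    # avoid degenerate seeds
            pop = np.empty((pop_size, dim))
            for i in range(pop_size):
                z = 4.0 * z * (1.0 - z)               # chaotic sequence in (0, 1)
                pop[i] = lower + z * (upper - lower)  # map into the search space
            return pop

        def levy(dim, beta=1.5):
            """Mantegna's algorithm for Levy-flight step lengths."""
            sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                     (math.gamma((1 + beta) / 2) * beta *
                      2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = np.random.normal(0.0, sigma, dim)
            v = np.random.normal(0.0, 1.0, dim)
            return u / np.abs(v) ** (1 / beta)

        def global_pollination(x, best, w):
            """FPA global step with an inertia weight w on the current position."""
            return w * x + levy(x.size) * (best - x)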

    Generalized Hybrid Evolutionary Algorithm Framework with a Mutation Operator Requiring no Adaptation

    This paper presents a generalized hybrid evolutionary optimization structure that not only combines nondeterministic and deterministic algorithms on their individual merits and distinct advantages, but also offers the behaviours of the three originating classes of evolutionary algorithms (EAs). In addition, a robust mutation operator is developed that removes the need for mutation adaptation, based on the mutation properties of binary-coded individuals in a genetic algorithm. The behaviour of this mutation operator is examined in full and its performance is compared with adaptive mutations. The results show that the new mutation operator outperforms adaptive mutation operators while avoiding the complications of extra adaptive parameters in an EA representation.
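
    The abstract does not give the operator's exact form, but its stated basis (the mutation properties of binary-coded individuals) suggests perturbations at power-of-two scales of the variable range, which need no adaptive step size. The following Python sketch is a hedged illustration of that idea only, not the paper's operator.

        import random

        def binary_style_mutation(x, lower, upper, n_bits=16, p=0.05):
            """Perturb x as if random bits of an n_bits binary encoding flipped."""
            span = upper - lower
            for k in range(1, n_bits + 1):
                if random.random() < p:          # each "bit" flips with probability p
                    step = span / 2 ** k         # bit k shifts the value by span / 2^k
                    x += random.choice((-1.0, 1.0)) * step
            return min(max(x, lower), upper)     # clip back into the domain

    Because small and large steps both occur with fixed probability, such an operator covers all perturbation scales without extra adaptive parameters.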

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in the genetic algorithm (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that constructing superior exemplars in PSO is more effective. Hence, this paper first develops a new framework to organically hybridize PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers, the first for exemplar generation and the second for particle updates as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm, termed genetic learning PSO (GL-PSO), is proposed in the paper. In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, the historical search information of particles provides guidance to the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
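
    A simplified sketch of the two cascading layers: genetic operators breed an exemplar from the personal and global bests, and a PSO update then pulls each particle toward its exemplar. The coefficients and operator details below are illustrative assumptions, not the paper's exact definitions.

        import numpy as np

        rng = np.random.default_rng(1)

        def build_exemplar(pbest_i, gbest, lower, upper, pm=0.01):
            """Layer 1: crossover pbest with gbest, then mutate uniformly at random."""
            r = rng.random(pbest_i.size)
            exemplar = r * pbest_i + (1.0 - r) * gbest   # arithmetic crossover
            mutate = rng.random(pbest_i.size) < pm
            exemplar[mutate] = rng.uniform(lower, upper, mutate.sum())
            return exemplar

        def pso_step(x, v, exemplar, w=0.7, c=1.5):
            """Layer 2: a normal PSO update that learns from the exemplar only."""
            v = w * v + c * rng.random(x.size) * (exemplar - x)
            return x + v, v

    In the full GL-PSO, a selection step would also retain the old exemplar whenever the newly bred one guides the particle less effectively.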

    Solving the G-problems in less than 500 iterations: Improved efficient constrained optimization by surrogate modeling and adaptive parameter control

    Constrained optimization of high-dimensional numerical problems plays an important role in many scientific and industrial applications. In many industrial applications, function evaluations are severely limited and no analytical information about the objective and constraint functions is available. For such expensive black-box optimization tasks, the constrained optimization algorithm COBRA was proposed, making use of RBF surrogate modeling for both the objective and the constraint functions. COBRA has shown remarkable success in reliably solving complex benchmark problems in less than 500 function evaluations. Unfortunately, COBRA requires careful adjustment of its parameters in order to do so. In this work we present a new self-adjusting algorithm, SACOBRA, which is based on COBRA and capable of achieving high-quality results with very few function evaluations and no parameter tuning. It is shown with the help of performance profiles on a set of benchmark problems (G-problems, MOPTA08) that SACOBRA consistently outperforms any COBRA algorithm with a fixed parameter setting. We analyze the importance of the several new elements in SACOBRA and find that each of them plays a role in boosting the overall optimization performance. We discuss the underlying reasons and thereby gain a better understanding of high-quality RBF surrogate modeling.
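
    A generic sketch of the surrogate loop underlying COBRA-style methods, assuming SciPy: fit RBF models to the evaluated objective and constraint values, then search the cheap surrogates for the next point to evaluate on the real problem. This illustrates the principle only; it is not the SACOBRA implementation, and the self-adjusting machinery is omitted.

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.optimize import NonlinearConstraint, minimize

        def next_candidate(X, f_vals, g_vals, bounds):
            """Propose a point by minimising the RBF surrogate of the objective
            subject to the RBF surrogate of one constraint g(x) <= 0."""
            f_hat = RBFInterpolator(X, f_vals, kernel='cubic')
            g_hat = RBFInterpolator(X, g_vals, kernel='cubic')
            con = NonlinearConstraint(lambda x: g_hat(x[None, :])[0], -np.inf, 0.0)
            x0 = X[np.argmin(f_vals)]            # start from the best point so far
            res = minimize(lambda x: f_hat(x[None, :])[0], x0,
                           bounds=bounds, constraints=[con])
            return res.x

    Each real evaluation of the proposed point is appended to X, f_vals, and g_vals, and the surrogates are refitted; SACOBRA's contribution is to adjust the surrounding parameters of this loop automatically, removing the manual tuning mentioned above.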