
    Cultural Algorithm based on Decomposition to solve Optimization Problems

    Decomposition is used to solve optimization problems by introducing many simple scalar optimization subproblems and optimizing them simultaneously. Dynamic Multi-Objective Optimization Problems (DMOPs) have several objective functions and constraints that vary over time. As a consequence of such dynamic changes, the optimal solutions may also vary over time, which affects convergence performance. In this thesis, we propose a new Cultural Algorithm (CA) based on decomposition (CA/D). The objective of the CA/D algorithm is to decompose a DMOP into a number of subproblems that can be optimized using the information shared by neighboring subproblems. The proposed CA/D approach is evaluated on a number of CEC 2015 optimization benchmark functions. When compared to CA, Multi-population CA (MPCA), and MPCA incorporating game strategies (MPCA-GS), CA/D outperformed them on 7 out of the 15 benchmark functions.
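
    The abstract above does not spell out the decomposition mechanics, so the following is only a minimal sketch of the general idea it builds on: each scalar subproblem is defined by a weight vector (here with a Tchebycheff scalarization), and candidate solutions are shared among subproblems with neighboring weights. The toy bi-objective function, parameter values, and update rule are illustrative assumptions, not the CA/D algorithm itself.

```python
# Minimal sketch of decomposition with neighbourhood sharing (MOEA/D-style);
# names and the toy objective are illustrative, not the CA/D implementation.
import numpy as np

def tchebycheff(f, weight, ideal):
    """Scalarize an objective vector f for one subproblem."""
    return np.max(weight * np.abs(f - ideal))

def objectives(x):
    # Toy bi-objective problem (assumption): minimise both components.
    return np.array([np.sum(x**2), np.sum((x - 2.0)**2)])

rng = np.random.default_rng(0)
n_sub, dim, T, iters = 20, 10, 3, 200        # subproblems, variables, neighbourhood size, iterations
weights = np.linspace(0.01, 0.99, n_sub)
weights = np.stack([weights, 1.0 - weights], axis=1)
# Each subproblem's neighbours are the subproblems with the closest weight vectors.
neigh = np.argsort(np.linalg.norm(weights[:, None] - weights[None, :], axis=2), axis=1)[:, :T]

pop = rng.uniform(-5, 5, (n_sub, dim))
fit = np.array([objectives(x) for x in pop])
ideal = fit.min(axis=0)

for _ in range(iters):
    for i in range(n_sub):
        # Create a candidate from information shared by neighbouring subproblems.
        a, b = rng.choice(neigh[i], 2, replace=False)
        child = np.clip(pop[a] + 0.5 * (pop[b] - pop[i]) + rng.normal(0, 0.1, dim), -5, 5)
        fc = objectives(child)
        ideal = np.minimum(ideal, fc)
        for j in neigh[i]:   # a better scalar value replaces the neighbour's solution
            if tchebycheff(fc, weights[j], ideal) < tchebycheff(fit[j], weights[j], ideal):
                pop[j], fit[j] = child, fc
```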

    A self-learning particle swarm optimizer for global optimization problems

    Copyright @ 2011 IEEE. All Rights Reserved. This article was made available through the Brunel Open Access Publishing Fund. Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm use the same strategy. This monotonic learning pattern may cause a lack of intelligence for a particular particle, making it unable to deal with different complex situations. This paper presents a novel algorithm, called the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO has superior performance in comparison with several other peer algorithms. This work was supported by the Engineering and Physical Sciences Research Council of U.K. under Grants EP/E060722/1 and EP/E060722/2.
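
    As a rough illustration of the individual-level adaptive strategy selection described above, the sketch below lets each particle keep its own probability distribution over four velocity-update strategies and reinforce whichever strategy improves its personal best. The four operators, the reward rule, and all parameter values are simplified placeholders rather than the exact SLPSO formulation.

```python
# Hedged sketch of per-particle adaptive strategy selection in a PSO loop.
import numpy as np

rng = np.random.default_rng(1)
n, dim = 20, 10
sphere = lambda x: np.sum(x**2)                      # toy objective (assumption)

pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([sphere(x) for x in pos])
gbest = pbest[pbest_f.argmin()].copy()
probs = np.full((n, 4), 0.25)                        # one strategy distribution per particle

def step(i, s):
    """Apply strategy s to particle i and return a new velocity."""
    r1, r2 = rng.random(dim), rng.random(dim)
    if s == 0:   # exploitation: learn from personal and global best
        return 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) + 1.5 * r2 * (gbest - pos[i])
    if s == 1:   # exploration: learn only from own personal best
        return 0.7 * vel[i] + 2.0 * r1 * (pbest[i] - pos[i])
    if s == 2:   # convergence: learn from a random peer's personal best
        return 0.7 * vel[i] + 2.0 * r1 * (pbest[rng.integers(n)] - pos[i])
    return 0.7 * vel[i] + rng.normal(0, 1, dim)      # jumping out: random perturbation

for _ in range(200):
    for i in range(n):
        s = rng.choice(4, p=probs[i])                # pick a strategy for this particle
        vel[i] = step(i, s)
        pos[i] = np.clip(pos[i] + vel[i], -5, 5)
        f = sphere(pos[i])
        if f < pbest_f[i]:                           # reward the strategy that improved fitness
            pbest[i], pbest_f[i] = pos[i].copy(), f
            probs[i][s] += 0.05
        probs[i] = np.clip(probs[i], 0.05, None)
        probs[i] /= probs[i].sum()
    gbest = pbest[pbest_f.argmin()].copy()
```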

    Study the Effects of Multilevel Selection in Multi-Population Cultural Algorithm

    This is a study of the effects of multilevel selection (MLS) theory in optimizing numerical functions. Based on this theory, a new architecture for the Multi-Population Cultural Algorithm is proposed that incorporates a new multilevel selection framework (ML-MPCA). The approach used in this paper is based on biological group selection theory, which states that natural selection acts collectively on all the members of a given group. The effects of cooperation are studied using the n-player prisoner’s dilemma. In this game, N individuals are randomly divided into m groups, and each individual independently chooses to be either a cooperator or a defector. A two-level selection process is introduced, namely within-group selection and between-group selection. Individuals interact with the other members of their group in an evolutionary game that determines their fitness. The principal idea behind incorporating this multilevel selection model is to avoid premature convergence, escape local optima, and better explore the search space. We test our algorithm on the CEC 2015 expensive benchmark functions, a set of 15 functions spanning varied categories, to evaluate its performance. We show that our proposed algorithm improves solution accuracy and consistency: compared to the existing algorithms, the proposed method obtains better results on 8 out of 15 functions for 10-dimensional problems and on 11 out of 15 for 30-dimensional problems. The proposed model can be extended to more than two levels of selection and can also include migration.
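
    The two-level selection idea can be illustrated with a small simulation: individuals play an n-player prisoner’s dilemma inside their group, and reproduction happens both within groups (proportional to individual payoff) and between groups (proportional to mean group payoff). The payoff values and selection rules below are hypothetical and much simpler than the ML-MPCA framework; they only show the within-group versus between-group tension the abstract refers to.

```python
# Illustrative sketch of two-level (within-group / between-group) selection
# driven by an n-player prisoner's dilemma payoff; parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
m, group_size = 10, 20          # m groups of n players each
b, c = 5.0, 1.0                 # benefit produced by a cooperator, cost paid by a cooperator

# 1 = cooperator, 0 = defector, chosen independently at random.
groups = rng.integers(0, 2, (m, group_size))

for generation in range(50):
    coop = groups.sum(axis=1)
    # Payoff in the n-player PD: everyone shares the benefit, cooperators pay the cost.
    payoff = (b * coop[:, None]) / group_size - c * groups

    # Within-group selection: reproduce individuals inside each group
    # proportionally to payoff (defectors tend to win here).
    new_groups = np.empty_like(groups)
    for g in range(m):
        w = payoff[g] - payoff[g].min() + 1e-9
        idx = rng.choice(group_size, group_size, p=w / w.sum())
        new_groups[g] = groups[g, idx]

    # Between-group selection: groups reproduce proportionally to their mean
    # payoff (groups with more cooperators tend to win here).
    group_fit = payoff.mean(axis=1) - payoff.mean(axis=1).min() + 1e-9
    chosen = rng.choice(m, m, p=group_fit / group_fit.sum())
    groups = new_groups[chosen]

print("final cooperator fraction:", groups.mean())
```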

    Generalized decomposition and cross entropy methods for many-objective optimization

    Decomposition-based algorithms for multi-objective optimization problems have increased in popularity in the past decade. Although their convergence to the Pareto optimal front (PF) is in several instances superior to that of Pareto-based algorithms, the problem of selecting a way to distribute or guide these solutions in a high-dimensional space has not been explored. In this work, we introduce a novel concept which we call generalized decomposition. Generalized decomposition provides a framework with which the decision maker (DM) can guide the underlying evolutionary algorithm toward specific regions of interest or the entire Pareto front with the desired distribution of Pareto optimal solutions. Additionally, it is shown that generalized decomposition simplifies many-objective problems by unifying the three performance objectives of multi-objective evolutionary algorithms (convergence to the PF, evenly distributed Pareto optimal solutions, and coverage of the entire front) into only one, that of convergence. A framework established on generalized decomposition and an estimation of distribution algorithm (EDA) based on low-order statistics, namely the cross-entropy method (CE), are created to illustrate the benefits of the proposed concept for many-objective problems. This choice of EDA also enables a test of the hypothesis that EDAs based on low-order statistics can have performance comparable to more elaborate EDAs.
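
    The cross-entropy method mentioned above is an EDA that keeps only a mean and a variance per decision variable. The sketch below applies a generic CE loop to a single Tchebycheff-scalarized subproblem with a hand-picked weight vector; in the paper's framework the weight vectors would instead be obtained through generalized decomposition, so the objective, weight, and parameters here are illustrative assumptions.

```python
# Minimal cross-entropy (CE) method, a low-order-statistics EDA, applied to
# one Tchebycheff-scalarised subproblem. Not the paper's full framework.
import numpy as np

rng = np.random.default_rng(3)
dim, n_samples, n_elite, iters = 10, 100, 10, 100

def objectives(x):
    # Toy bi-objective problem (assumption).
    return np.array([np.sum(x**2), np.sum((x - 1.0)**2)])

weight = np.array([0.5, 0.5])               # one subproblem's weight vector (hand-picked)
ideal = np.zeros(2)
scalarise = lambda x: np.max(weight * np.abs(objectives(x) - ideal))

mean, std = np.zeros(dim), np.ones(dim) * 2.0
for _ in range(iters):
    samples = rng.normal(mean, std, (n_samples, dim))        # sample from the model
    scores = np.array([scalarise(s) for s in samples])
    elite = samples[np.argsort(scores)[:n_elite]]            # keep the best samples
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6  # refit mean and variance only

print("scalarised value at the model mean:", scalarise(mean))
```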

    Evolutionary Algorithms with Mixed Strategy


    A new Taxonomy of Continuous Global Optimization Algorithms

    Surrogate-based optimization, nature-inspired metaheuristics, and hybrid combinations have become the state of the art in algorithm design for solving real-world optimization problems. Still, it is difficult for practitioners to get an overview that explains the advantages of these approaches in comparison to the large number of other available optimization methods. Available taxonomies lack the embedding of current approaches in the larger context of this broad field. This article presents a taxonomy of the field, which explores and matches algorithm strategies by extracting similarities and differences in their search strategies. A particular focus lies on algorithms using surrogates, nature-inspired designs, and those created by design optimization. The extracted features of components or operators allow us to create a set of classification indicators to distinguish between a small number of classes. The features allow a deeper understanding of the components of the search strategies and further indicate the close connections between the different algorithm designs. We present intuitive analogies to explain the basic principles of the search algorithms, which are particularly useful for novices in this research field. Furthermore, this taxonomy allows recommendations for the applicability of the corresponding algorithms.

    The application of genetic algorithms to the adaptation of IIR filters

    The adaptation of an IIR filter is a very difficult problem due to its non-quadratic performance surface and potential instability. Conventional adaptive IIR algorithms suffer from potential instability problems and a high cost for stability monitoring. Therefore, there is much interest in adaptive IIR filters based on alternative algorithms. Genetic algorithms are a family of search algorithms based on natural selection and genetics, and they have been successfully used in many different areas. This thesis studies genetic algorithms applied to the adaptation of IIR filters and shows that the genetic algorithm approach has a number of advantages over conventional gradient algorithms, particularly for the adaptation of high-order adaptive IIR filters, IIR filters with poles close to the unit circle, and IIR filters with multi-modal error surfaces, all problems that conventional gradient algorithms have difficulty solving. Coefficient results are presented for various orders of IIR filters. In the computer simulations presented in this thesis, the direct, cascade, parallel, and lattice form IIR filter structures are used and compared. The lattice form IIR filter structure shows its superiority over the cascade and parallel form structures in terms of mean square error convergence performance.
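
    A bare-bones version of the approach described above might look like the following: a real-coded GA evolves the coefficients of a low-order direct-form IIR filter to match the output of an unknown system, with unstable candidates penalized. The filter order, GA operators, and parameter values are illustrative assumptions, not the configurations studied in the thesis.

```python
# Hedged sketch: a simple real-coded GA adapting the coefficients of a
# second-order direct-form IIR filter for system identification.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
x = rng.normal(size=2000)                                  # white-noise excitation
target_b, target_a = [0.05, 0.4], [1.0, -1.1314, 0.25]     # toy unknown system (assumption)
d = lfilter(target_b, target_a, x)

def mse(coeffs):
    """Mean squared output error; unstable filters get a large penalty."""
    b, a = coeffs[:2], np.concatenate(([1.0], coeffs[2:]))
    if np.any(np.abs(np.roots(a)) >= 1.0):                 # stability check on the poles
        return 1e6
    y = lfilter(b, a, x)
    return np.mean((d - y) ** 2)

pop = rng.uniform(-1, 1, (40, 4))                          # [b0, b1, a1, a2] per individual
for _ in range(200):
    fit = np.array([mse(c) for c in pop])
    parents = pop[np.argsort(fit)[:20]]                    # truncation selection
    # Arithmetic crossover plus Gaussian mutation.
    mates = parents[rng.permutation(20)]
    children = 0.5 * (parents + mates) + rng.normal(0, 0.05, parents.shape)
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(c) for c in pop])]
print("best coefficients [b0, b1, a1, a2]:", best)
```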

    Steady-State ALPS for Real-Valued Problems

    The two objectives of this paper are to describe a steady-state version of the Age-Layered Population Structure (ALPS) Evolutionary Algorithm (EA) and to compare it against other GAs on real-valued problems. Motivation for this work comes from our previous success in demonstrating that a generational version of ALPS greatly improves search performance on a Genetic Programming problem. In making ALPS steady-state, some modifications were made to the method for calculating age and the method for moving individuals up layers. To demonstrate that ALPS works well on real-valued problems, we compare it against CMA-ES and Differential Evolution (DE) on five challenging real-valued functions and on one real-world problem. While CMA-ES and DE outperform ALPS on the two unimodal test functions, ALPS is much better on the three multimodal test problems and on the real-world problem. Further examination shows that, unlike the other GAs, ALPS maintains a genotypically diverse population throughout the entire search process. These findings strongly suggest that the ALPS paradigm is better able to avoid premature convergence than the other GAs.
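
    To make the age-layering idea concrete, the sketch below keeps several layers with increasing age limits, breeds one offspring at a time (steady-state), inserts random immigrants at the bottom layer, and pushes individuals that exceed their layer's age limit upward. The age bookkeeping and replacement rules are simplified guesses, not the specific modifications the paper describes.

```python
# Simplified steady-state sketch of an age-layered population structure.
import numpy as np

rng = np.random.default_rng(5)
dim, layer_size, n_layers = 10, 10, 4
age_limits = [10, 20, 40, np.inf]                 # maximum age allowed in each layer
rastrigin = lambda x: 10 * dim + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def individual(genome, age=0):
    return {"x": genome, "f": rastrigin(genome), "age": age}

layers = [[individual(rng.uniform(-5, 5, dim)) for _ in range(layer_size)]
          for _ in range(n_layers)]

for evaluation in range(5000):
    li = rng.integers(n_layers)                   # pick a layer to breed in
    i1, i2 = rng.choice(layer_size, 2, replace=False)
    p1, p2 = layers[li][i1], layers[li][i2]
    child_x = np.clip(0.5 * (p1["x"] + p2["x"]) + rng.normal(0, 0.3, dim), -5, 5)
    child = individual(child_x, age=max(p1["age"], p2["age"]) + 1)

    # Steady-state replacement: replace the worst individual in the layer.
    worst = max(range(layer_size), key=lambda i: layers[li][i]["f"])
    if child["f"] < layers[li][worst]["f"]:
        layers[li][worst] = child

    for ind in layers[li]:
        ind["age"] += 1
    # Push individuals that are too old for their layer up one level and
    # refill the vacated slot with a random immigrant.
    for l in range(n_layers - 1):
        for i, ind in enumerate(layers[l]):
            if ind["age"] > age_limits[l]:
                up_worst = max(range(layer_size), key=lambda k: layers[l + 1][k]["f"])
                layers[l + 1][up_worst] = ind
                layers[l][i] = individual(rng.uniform(-5, 5, dim))

best = min((ind for layer in layers for ind in layer), key=lambda d: d["f"])
print("best fitness:", best["f"])
```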