
    Discovering Attention-Based Genetic Algorithms via Meta-Black-Box Optimization

    Genetic algorithms constitute a family of black-box optimization algorithms, which take inspiration from the principles of biological evolution. While they provide a general-purpose tool for optimization, their particular instantiations can be heuristic and motivated by loose biological intuition. In this work we explore a fundamentally different approach: Given a sufficiently flexible parametrization of the genetic operators, we discover entirely new genetic algorithms in a data-driven fashion. More specifically, we parametrize selection and mutation rate adaptation as cross- and self-attention modules and use Meta-Black-Box-Optimization to evolve their parameters on a set of diverse optimization tasks. The resulting Learned Genetic Algorithm outperforms state-of-the-art adaptive baseline genetic algorithms and generalizes far beyond its meta-training settings. The learned algorithm can be applied to previously unseen optimization problems, search dimensions, and evaluation budgets. We conduct extensive analysis of the discovered operators and provide ablation experiments, which highlight the benefits of flexible module parametrization and the ability to transfer ('plug-in') the learned operators to conventional genetic algorithms. (Comment: 14 pages, 31 figures)
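
    As a hedged illustration of the idea described above (not the authors' implementation), the sketch below expresses parent selection as a single cross-attention layer over per-member fitness features. The weight matrices W_q and W_k stand in for parameters that a meta-optimizer would evolve; the feature set, shapes, and sampling scheme are all assumptions.

```python
# Minimal sketch, assuming attention-weighted parent selection over fitness features.
# W_q, W_k, the feature choices, and all shapes are illustrative assumptions.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_selection(population, fitness, W_q, W_k, rng):
    """Sample one parent per slot with probabilities from a cross-attention score matrix."""
    # Per-member features: z-scored fitness and normalized rank (assumed feature set).
    ranks = np.argsort(np.argsort(fitness)) / (len(fitness) - 1)
    feats = np.stack([(fitness - fitness.mean()) / (fitness.std() + 1e-8), ranks], axis=1)
    q = feats @ W_q                      # queries: "who needs a parent"
    k = feats @ W_k                      # keys: "who would be a good parent"
    probs = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=1)
    parent_idx = np.array([rng.choice(len(fitness), p=p) for p in probs])
    return population[parent_idx]

rng = np.random.default_rng(0)
pop = rng.normal(size=(16, 8))                     # 16 candidates in 8 dimensions
fit = -np.sum(pop**2, axis=1)                      # toy maximization objective
W_q, W_k = rng.normal(size=(2, 4)), rng.normal(size=(2, 4))  # meta-evolved in the paper
parents = attention_selection(pop, fit, W_q, W_k, rng)
print(parents.shape)                               # (16, 8)
```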

    Improving Usability of Genetic Algorithms through Self Adaptation on Static and Dynamic Environments

    We propose a self-adaptive genetic algorithm, called SAGA, for the purposes of improving the usability of genetic algorithms on both static and dynamic problems. Self-adaptation can improve usability by automating some of the parameter tuning for the algorithm, a difficult and time-consuming process on canonical genetic algorithms. Reducing or simplifying the need for parameter tuning will help make genetic algorithms a more attractive tool for those who are not experts in the field of evolutionary algorithms, allowing more people to take advantage of the problem-solving capabilities of a genetic algorithm on real-world problems. We test SAGA and analyze its behavior on a variety of problems. First we test on static test problems, where our focus is on usability improvements as measured by the number of parameter configurations to tune and the number of fitness evaluations conducted. On the static problems, SAGA is compared to a canonical genetic algorithm. Next, we test on dynamic test problems, where the fitness landscape varies over the course of the problem's execution. The dynamic problems allow us to examine whether self-adaptation can effectively react to ever-changing and unpredictable problems. On the dynamic problems, we compare to a canonical genetic algorithm as well as other genetic algorithm methods that are designed or utilized specifically for dynamic problems. Finally, we test on a real-world problem pertaining to Medicare Fee-For-Service payments in order to validate the real-world usefulness of SAGA. For this real-world problem, we compare SAGA to both a canonical genetic algorithm and logistic regression, the standard method for this problem in the field of healthcare informatics. We find that this self-adaptive genetic algorithm is successful at improving usability through a large reduction of parameter tuning while maintaining equal or superior results on a majority of the problems tested. The large reduction of parameter tuning translates to large time savings for users of SAGA. Furthermore, self-adaptation proves to be a very capable mechanism for dealing with the difficulties of dynamic environment problems, as observed by the changes to parameters in response to changes in the fitness landscape of the problem.
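
    The sketch below shows the general self-adaptation mechanism in the spirit of the abstract above, not the SAGA implementation itself: each individual carries its own mutation rate, which is inherited and perturbed along with the genome, so that parameter needs no external tuning. Population size, rate bounds, and the truncation selection are assumptions.

```python
# A minimal sketch of self-adaptation (assumed details, not SAGA itself):
# each individual = (bitstring, own mutation rate); the rate evolves with the genome.
import random

def evolve(fitness_fn, genome_len=20, pop_size=30, generations=100, seed=1):
    rng = random.Random(seed)
    pop = [([rng.randint(0, 1) for _ in range(genome_len)], rng.uniform(0.01, 0.2))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda ind: fitness_fn(ind[0]), reverse=True)
        parents = scored[:pop_size // 2]            # simple truncation selection
        children = []
        while len(children) < pop_size:
            genes, rate = rng.choice(parents)
            # Self-adapt: perturb the inherited rate, then apply it to the genome.
            new_rate = min(0.5, max(0.001, rate * rng.uniform(0.8, 1.25)))
            child = [1 - g if rng.random() < new_rate else g for g in genes]
            children.append((child, new_rate))
        pop = children
    return max(pop, key=lambda ind: fitness_fn(ind[0]))

best = evolve(sum)  # OneMax as a toy static problem
print(sum(best[0]), round(best[1], 3))
```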

    Multi-population methods with adaptive mutation for multi-modal optimization problems

    This paper presents an efficient scheme to locate multiple peaks on multi-modal optimization problems by using genetic algorithms (GAs). The premature convergence problem arises due to the loss of diversity; the multi-population technique can be applied to maintain diversity in the population and preserve the convergence capacity of GAs. The proposed scheme combines multiple populations with an adaptive mutation operator, which determines two different mutation probabilities for different sites of the solutions. The probabilities are updated by the fitness and distribution of solutions in the search space during the evolution process. The experimental results on a set of benchmark problems demonstrate the performance of the proposed algorithm in comparison with relevant algorithms.
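
    The following is a hedged sketch of one way to couple a mutation probability to the solution distribution, as the abstract describes at a high level; the diversity measure, thresholds, and update factors are assumptions rather than the paper's exact rule.

```python
# Hedged sketch (assumed update rule): each subpopulation keeps a mutation probability
# that is raised when its diversity shrinks and lowered when it recovers.
import numpy as np

def adapt_mutation(subpop, p_m, p_min=0.005, p_max=0.25, target_spread=0.5):
    """Return an updated mutation probability for one subpopulation."""
    spread = subpop.std(axis=0).mean()          # crude diversity measure (assumed)
    if spread < target_spread:
        p_m = min(p_max, p_m * 1.2)             # diversity low -> mutate more
    else:
        p_m = max(p_min, p_m * 0.9)             # diversity fine -> mutate less
    return p_m

rng = np.random.default_rng(3)
subpops = [rng.normal(scale=s, size=(20, 5)) for s in (0.1, 1.0, 2.0)]
rates = [0.05, 0.05, 0.05]
rates = [adapt_mutation(sp, p) for sp, p in zip(subpops, rates)]
print([round(r, 3) for r in rates])             # tighter subpopulations get higher rates
```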

    A comparative study of adaptive mutation operators for metaheuristics

    Genetic algorithms (GAs) are a class of stochastic optimization methods inspired by the principles of natural evolution. Adaptation of strategy parameters and genetic operators has become an important and promising research area in GAs. Many researchers apply adaptive techniques to guide the search of GAs toward optimum solutions. Mutation is a key component of GAs: it is a variation operator that creates diversity for GAs. This paper investigates several adaptive mutation operators, including population-level and gene-level adaptive mutation operators, for GAs and compares their performance based on a set of uni-modal and multi-modal benchmark problems. The experimental results show that the gene-level adaptive mutation operators are usually more efficient than the population-level adaptive mutation operators for GAs.
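
    To make the population-level vs. gene-level distinction concrete, here is an illustrative contrast with assumed formulas; these are not the specific operators benchmarked in the paper.

```python
# Illustrative contrast only (assumed rules): one global mutation rate for everyone
# versus a per-gene rate driven by how converged each gene position already is.
import numpy as np

def population_level_rate(prev_rate, success_ratio, target=0.2):
    """One global rate, nudged by the fraction of improving mutations."""
    return float(np.clip(prev_rate * (1.1 if success_ratio > target else 0.9), 1e-3, 0.5))

def gene_level_rates(pop_bits, base=0.02, boost=0.2):
    """Per-gene rates: positions where the population has converged get mutated more."""
    freq = pop_bits.mean(axis=0)                 # frequency of 1s at each locus
    convergence = np.abs(freq - 0.5) * 2         # 0 = diverse, 1 = fully converged
    return base + boost * convergence

rng = np.random.default_rng(7)
pop = (rng.random((30, 10)) < 0.5).astype(int)
pop[:, 0] = 1                                    # pretend locus 0 has converged
print(round(population_level_rate(0.05, 0.3), 4))
print(np.round(gene_level_rates(pop), 3))        # locus 0 gets the largest rate
```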

    Neutrality: A Necessity for Self-Adaptation

    Self-adaptation is used in all main paradigms of evolutionary computation to increase efficiency. We claim that the basis of self-adaptation is the use of neutrality. In the absence of external control, neutrality allows a variation of the search distribution without the risk of fitness loss. (Comment: 6 pages, 3 figures, LaTeX)

    An Improved NSGA-II and its Application for Reconfigurable Pixel Antenna Design

    Based on the elitist non-dominated sorting genetic algorithm (NSGA-II) for multi-objective optimization problems, this paper proposes an improved scheme with self-adaptive crossover and mutation operators to obtain good optimization performance. The performance of the improved NSGA-II is demonstrated with a set of test functions and metrics taken from the standard literature on multi-objective optimization. Combined with the HFSS solver, a pixel antenna with reconfigurable radiation patterns, which can steer its beam into six different directions (θDOA = ±15°, ±30°, ±50°) with a 5% overlapping impedance bandwidth (S11 < −10 dB) and a realized gain over 6 dB, is designed by the proposed self-adaptive NSGA-II.
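
    As a rough sketch of the self-adaptive idea only (the paper's exact operator schedules are not reproduced here), crossover and mutation probabilities can be recomputed per individual from its standing in the non-dominated sorting, so well-ranked solutions are perturbed less and poorly ranked ones are explored more. The rank-to-probability mapping and ranges below are assumptions.

```python
# Sketch, assuming a linear mapping from non-domination rank to operator probabilities.
def self_adaptive_probs(rank, max_rank, pc_range=(0.7, 0.95), pm_range=(0.01, 0.2)):
    """Map a non-domination rank (0 = best front) to (p_crossover, p_mutation)."""
    t = rank / max(1, max_rank)                  # 0 for the first front, 1 for the worst
    pc = pc_range[0] + t * (pc_range[1] - pc_range[0])
    pm = pm_range[0] + t * (pm_range[1] - pm_range[0])
    return pc, pm

for rank in (0, 2, 4):
    print(rank, self_adaptive_probs(rank, max_rank=4))
```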

    Self-adaptive exploration in evolutionary search

    We address a primary question of computational as well as biological research on evolution: How can an exploration strategy adapt in such a way as to exploit the information gained about the problem at hand? We first introduce an integrated formalism of evolutionary search which provides a unified view on different specific approaches. On this basis we discuss the implications of indirect modeling (via a "genotype-phenotype mapping") on the exploration strategy. Notions such as modularity, pleiotropy, and functional phenotypic complexes are discussed as implications. Then, rigorously reflecting the notion of self-adaptability, we introduce a new definition that captures self-adaptability of exploration: different genotypes that map to the same phenotype may represent (also topologically) different exploration strategies; self-adaptability requires a variation of exploration strategies along such a "neutral space". By this definition, the concept of neutrality becomes a central concern of this paper. Finally, we present examples of these concepts: for a specific grammar-type encoding, we observe a large variability of exploration strategies for a fixed phenotype, and a self-adaptive drift towards short representations with a highly structured exploration strategy that matches the "problem's structure". (Comment: 24 pages, 5 figures)

    Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings

    While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example, the size of the population, the mutation rates, and the crossover rates of the algorithm. These parameters are known to have a crucial influence on the optimization time, and thus need to be chosen carefully, a task that often requires substantial efforts. Moreover, the optimal parameters can change during the optimization process. It is therefore of great interest to design mechanisms that dynamically choose best-possible parameters. An example for such an update mechanism is the one-fifth success rule for step-size adaptation in evolution strategies. While in continuous domains this principle is well understood also from a mathematical point of view, no comparable theory is available for problems in discrete domains. In this work we show that the one-fifth success rule can be effective also in discrete settings. We regard the (1+(λ,λ)) GA proposed in [Doerr/Doerr/Ebel: From black-box complexity to designing new genetic algorithms, TCS 2015]. We prove that if its population size is chosen according to the one-fifth success rule then the expected optimization time on OneMax is linear. This is better than what any static population size λ can achieve and is asymptotically optimal also among all adaptive parameter choices. (Comment: This is the full version of a paper that is to appear at GECCO 2015)
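
    Below is a compact sketch of the (1+(λ,λ)) GA on OneMax with the one-fifth success rule controlling λ: on success λ shrinks by a factor F, otherwise it grows by F^(1/4). The constant F, the evaluation cap, and the rounding of λ are assumed common choices, not a verbatim reproduction of the paper's setup or its proof conditions.

```python
# Sketch of the (1+(lambda,lambda)) GA with one-fifth success rule on OneMax.
# F, caps, and rounding are assumptions; the update direction follows the rule itself.
import random

def one_plus_ll_ga(n=100, F=1.5, seed=0, max_evals=50_000):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    lam, evals = 1.0, 0
    while fx < n and evals < max_evals:
        k = max(1, round(lam))
        p, c = k / n, 1.0 / k
        ell = sum(rng.random() < p for _ in range(n))       # ell ~ Bin(n, p)
        # Mutation phase: k offspring, each flips exactly ell random positions.
        best_mut, best_mut_f = None, -1
        for _ in range(k):
            y = x[:]
            for i in rng.sample(range(n), ell):
                y[i] = 1 - y[i]
            evals += 1
            if sum(y) > best_mut_f:
                best_mut, best_mut_f = y, sum(y)
        # Crossover phase: k offspring take each bit from the best mutant with prob. c.
        best_cross, best_cross_f = x, fx
        for _ in range(k):
            y = [m if rng.random() < c else a for a, m in zip(x, best_mut)]
            evals += 1
            if sum(y) > best_cross_f:
                best_cross, best_cross_f = y, sum(y)
        # One-fifth success rule on lambda: shrink on success, grow slowly on failure.
        if best_cross_f > fx:
            lam = max(1.0, lam / F)
        else:
            lam = min(float(n), lam * F ** 0.25)
        if best_cross_f >= fx:                              # elitist acceptance
            x, fx = best_cross, best_cross_f
    return fx, evals

print(one_plus_ll_ga())
```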

    Use of Statistical Outlier Detection Method in Adaptive Evolutionary Algorithms

    In this paper, the issue of adapting probabilities for Evolutionary Algorithm (EA) search operators is revisited. A framework is devised for distinguishing between measurements of performance and the interpretation of those measurements for purposes of adaptation. Several examples of measurements and statistical interpretations are provided. Probability value adaptation is tested using an EA with 10 search operators against 10 test problems, with results indicating that both the type of measurement and its statistical interpretation play significant roles in EA performance. We also find that selecting operators based on the prevalence of outliers rather than on average performance is able to provide considerable improvements to adaptive methods and soundly outperforms the non-adaptive case.
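
    The sketch below illustrates the measurement-vs-interpretation idea in a hedged form: each operator's recent fitness improvements are logged, and its selection probability is driven by how often it produces statistical outliers (unusually large improvements) rather than by its average improvement. The IQR cutoff, probability floor, and normalization are assumed details, not the paper's exact procedure.

```python
# Sketch, assuming an IQR-based outlier count as the statistical interpretation
# of operator performance; thresholds and the update are illustrative.
import numpy as np

def outlier_based_probs(rewards_per_op, p_min=0.05):
    """rewards_per_op: list of 1-D arrays of recent fitness deltas, one per operator."""
    pooled = np.concatenate(rewards_per_op)
    q1, q3 = np.percentile(pooled, [25, 75])
    threshold = q3 + 1.5 * (q3 - q1)                 # classic IQR outlier cutoff
    counts = np.array([float((r > threshold).sum()) for r in rewards_per_op])
    probs = (counts + 1e-9) / (counts.sum() + 1e-9 * len(counts))
    # Floor every probability so no operator is starved of trials.
    probs = np.maximum(probs, p_min)
    return probs / probs.sum()

rng = np.random.default_rng(5)
ops = [rng.normal(0.0, 1.0, 200),                    # mediocre on average, heavy spread
       rng.normal(0.3, 0.1, 200)]                    # better mean, but no big jumps
print(np.round(outlier_based_probs(ops), 3))         # op 0 gets most of the probability
```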