9 research outputs found

    Autonomous operator management for evolutionary algorithms

    The performance of an evolutionary algorithm (EA) strongly depends on the design of its operators and on the management of these operators along the search; that is, on the ability of the algorithm to balance exploration and exploitation of the search space. Recent approaches automate the tuning and control of the parameters that govern this balance. We propose a new technique to dynamically control the behavior of operators in an EA and to manage a large set of potential operators: the best operators are rewarded by applying them more often. Tests of this technique on instances of 3-SAT return results that are competitive with an algorithm tailored to the problem.
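    The abstract does not spell out the reward and update rule, but the core idea — apply an operator, credit it with the improvement it produced, and shift application rates toward the better-credited operators — can be sketched as follows. This is a minimal illustration in Python; the function names, the floor p_min, and the adaptation rate are assumptions, not the paper's actual mechanism.

```python
import random

def select_operator(rates):
    """Roulette-wheel choice of an operator index, proportional to its rate."""
    r = random.random() * sum(rates)
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r <= acc:
            return i
    return len(rates) - 1

def update_rates(rates, op, reward, adaptation=0.1, p_min=0.05):
    """Move the chosen operator's rate toward the reward it just earned,
    keep a floor p_min so no operator is starved, then renormalize."""
    rates[op] += adaptation * (reward - rates[op])
    rates = [max(r, p_min) for r in rates]
    total = sum(rates)
    return [r / total for r in rates]
```

    In use, the EA would call select_operator each time an offspring is produced, measure the resulting fitness improvement as the reward, and feed it back through update_rates, so that operators that keep improving solutions are applied more often.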

    Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates

    We analyze the performance of the 2-rate (1+λ) Evolutionary Algorithm (EA) with self-adjusting mutation rate control, its 3-rate counterpart, and a (1+λ) EA variant using multiplicative update rules on the OneMax problem. We compare their efficiency for offspring population sizes ranging up to λ = 3,200 and problem sizes up to n = 100,000. Our empirical results show that the ranking of the algorithms is very consistent across all tested dimensions, but strongly depends on the population size. While for small values of λ the 2-rate EA performs best, the multiplicative updates become superior starting from some threshold value of λ between 50 and 100. Interestingly, for population sizes around 50, the (1+λ) EA with static mutation rates performs on par with the best of the self-adjusting algorithms. We also consider how the lower bound p_min for the mutation rate influences the efficiency of the algorithms. We observe that for the 2-rate EA and the EA with multiplicative update rules the more generous bound p_min = 1/n^2 gives better results than p_min = 1/n when λ is small. For both algorithms the situation reverses for large λ. Comment: To appear at the Genetic and Evolutionary Computation Conference (GECCO'19). v2: minor language revision.
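    For readers unfamiliar with the algorithm, here is a minimal sketch of the 2-rate (1+λ) EA on OneMax as described above: half of the offspring are mutated with rate r/2 and half with rate 2r, the rate that produced the best offspring is adopted with probability 1/2 (otherwise one of the two is chosen at random), and the rate is clamped to [p_min, 1/2]. The evaluation budget and helper names are illustrative, not taken from the paper.

```python
import random

def onemax(x):
    return sum(x)

def two_rate_one_plus_lambda_ea(n, lam, p_min=None, max_evals=10**6):
    """Sketch of the 2-rate (1+lambda) EA with self-adjusting mutation rate."""
    if p_min is None:
        p_min = 1.0 / n            # the paper also studies p_min = 1/n^2
    x = [random.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    r = 2.0 / n                    # current mutation rate
    evals = 0
    while fx < n and evals < max_evals:
        best, best_f, best_r = None, -1, r
        for i in range(lam):
            # first half of the offspring use rate r/2, second half 2r
            rate = r / 2 if i < lam // 2 else 2 * r
            y = [1 - b if random.random() < rate else b for b in x]
            fy = onemax(y)
            evals += 1
            if fy > best_f:
                best, best_f, best_r = y, fy, rate
        if best_f >= fx:
            x, fx = best, best_f
        # adopt the winning rate with prob. 1/2, otherwise pick one at random
        r = best_r if random.random() < 0.5 else random.choice([r / 2, 2 * r])
        r = min(max(r, p_min), 0.5)
    return evals
```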

    Probability Matching-based Adaptive Strategy Selection vs. Uniform Strategy Selection within Differential Evolution: An Empirical Comparison on the BBOB-2010 Noiseless Testbed

    Different strategies can be used for the generation of new candidate solutions in the Differential Evolution algorithm. However, deciding which of them should be applied to the problem at hand is not trivial, and the choice is highly sensitive with respect to the algorithm's performance. In this paper, we use the BBOB-2010 noiseless benchmarking suite to further empirically validate the Probability Matching-based Adaptive Strategy Selection (PM-AdapSS-DE), a method proposed to automatically select the mutation strategy to be applied, based on the relative fitness improvements recently achieved by the application of each of the available strategies during the current optimization process. It is compared with what would be the naive choice, uniform strategy selection within the same subset of strategies.
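    The probability matching rule referred to here maps an estimated quality per strategy to a selection probability while reserving a minimum probability for every strategy. Below is a minimal sketch, assuming an exponentially weighted quality estimate and a reward equal to the relative fitness improvement; the exact credit assignment and parameter values of PM-AdapSS-DE may differ in detail.

```python
def probability_matching(quality, p_min=0.05):
    """Map quality estimates to selection probabilities, keeping at least
    p_min for every strategy so all of them remain testable."""
    k = len(quality)
    total = sum(quality)
    if total == 0:
        return [1.0 / k] * k
    return [p_min + (1.0 - k * p_min) * q / total for q in quality]

def update_quality(quality, strategy, reward, alpha=0.3):
    """Exponential recency-weighted average of the rewards credited
    to each strategy (e.g. relative fitness improvements)."""
    quality[strategy] += alpha * (reward - quality[strategy])
    return quality
```

    With several DE mutation strategies, each generation would draw a strategy according to probability_matching(quality), generate the trial vector with it, and (for minimization) reward it with, e.g., (f_parent - f_trial) / f_parent on improvement and zero otherwise.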

    Fitness-AUC Bandit Adaptive Strategy Selection vs. the Probability Matching one within Differential Evolution: An Empirical Comparison on the BBOB-2010 Noiseless Testbed

    The choice of which of the available strategies should be used within the Differential Evolution algorithm for a given problem is not trivial; it is problem-dependent and highly sensitive with respect to the algorithm's performance. This decision can be made in an autonomous way through the Adaptive Strategy Selection paradigm, which continuously selects the strategy to be used for the next offspring generation, based on the performance achieved by each of the available strategies during the current optimization process, i.e., while solving the problem. In this paper, we use the BBOB-2010 noiseless benchmarking suite to better empirically validate a comparison-based technique recently proposed for this purpose, the Fitness-based Area-Under-Curve Bandit, referred to as F-AUC-Bandit. It is compared with another recently proposed approach that uses the Probability Matching technique based on relative fitness improvements, referred to as PM-AdapSS-DE.
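    F-AUC-Bandit combines a rank-based (comparison-only) credit with a bandit selection rule. The class below is a simplified, hedged sketch in that spirit: offspring produced by each operator are ranked inside a sliding window, the normalized rank mass per operator stands in for the area-under-curve credit, and the next operator is picked with a UCB formula. The window size, scaling factor, and the simplified credit are assumptions, not the published definition.

```python
import math
from collections import deque

class FitnessAUCBandit:
    """Simplified comparison-based bandit in the spirit of F-AUC-Bandit."""

    def __init__(self, n_ops, window=50, scale=0.5):
        self.n_ops = n_ops
        self.window = deque(maxlen=window)   # recent (operator, fitness) pairs
        self.counts = [0] * n_ops
        self.scale = scale

    def record(self, op, fitness):
        self.window.append((op, fitness))
        self.counts[op] += 1

    def _credit(self):
        # Rank window entries by fitness (higher is better) and give each
        # operator the normalized sum of the ranks of its offspring.
        ranked = sorted(self.window, key=lambda of: of[1])
        credit = [0.0] * self.n_ops
        for rank, (op, _) in enumerate(ranked, start=1):
            credit[op] += rank
        total = sum(credit)
        return [c / total for c in credit] if total else credit

    def select(self):
        # UCB: exploit high credit, but keep trying rarely used operators.
        for op in range(self.n_ops):
            if self.counts[op] == 0:
                return op
        total = sum(self.counts)
        credit = self._credit()
        ucb = [credit[op] + self.scale * math.sqrt(2 * math.log(total) / self.counts[op])
               for op in range(self.n_ops)]
        return ucb.index(max(ucb))
```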

    Adaptive Operator Mechanism for Genetic Programming

    Thesis (Ph.D.), Seoul National University Graduate School, Department of Electrical Engineering and Computer Science, August 2013. Robert Ian McKay.
    Genetic programming (GP) is an effective evolutionary algorithm for many problems, especially suited to model learning. GP has many parameters, usually defined by the user according to the problem, and its performance is sensitive to their values. Parameter setting has been a major focus of study in evolutionary computation. However, there is still no general guideline for choosing efficient settings, and the usual method for parameter setting is trial and error.
    The method used in this thesis, the adaptive operator mechanism, replaces the user's action in setting the rates of application of genetic operators: it autonomously controls the genetic operators during a run. This thesis extends the adaptive operator mechanism to genetic programming, applying existing adaptive operator algorithms and developing them for TAG3P, a grammar-guided GP which supports a wide variety of useful genetic operators. Existing adaptive operator selection algorithms were successfully applied to TAG3P; their performances are competitive with systems without an adaptive operator mechanism. However, they showed some drawbacks, which we discuss. To overcome them, we suggest three variants on operator selection, which performed somewhat better.
    We have also investigated the evaluation of operator impact in the adaptive operator mechanism, which measures the impact of operator applications on the improvement of solutions. Since the impact guides the operator rates, its evaluation is very important. There are two issues in evaluating operator impact: the resource and the method. In principle, all the history information of a run can be used as a resource for the operator impact, but the fitness value, which is directly related to the improvement of solutions, is usually used. In this thesis we used two kinds of resources across a variety of problems: accuracy and structure. On the other hand, even when the same resources are used, the evaluated impacts differ by method. We suggest several methods for the evaluation of operator impact; although they require only small changes, they have a large effect on performance.
    Finally, we verified the adaptive operator mechanism by applying it to a real-world application: modeling of algal blooms in the Nakdong River. The objective of this application is a model that describes and predicts the ecosystem of the Nakdong River. We verified it with two studies: fitting the parameters of an expert-derived model for the Nakdong River with a GA, and modeling by extending the expert-derived model with TAG3P.
    Contents:
    1 Introduction
      1.1 Background and Motivation
      1.2 Our Approach and Its Contributions
      1.3 Outline
    2 Related Works
      2.1 Evolutionary Algorithms
        2.1.1 Genetic Algorithm
        2.1.2 Genetic Programming
        2.1.3 Tree Adjoining Grammar based Genetic Programming
    3 Adaptive Mechanism and Adaptive Operator Selection
      3.1 Adaptive Mechanism
      3.2 Adaptive Operator Selection
        3.2.1 Operator Selection
        3.2.2 Evaluation of Operator Impact
      3.3 Algorithms of Adaptive Operator Selection
        3.3.1 Probability Matching
        3.3.2 Adaptive Pursuit
        3.3.3 Multi-Armed Bandits
    4 Preliminary Experiment for Adaptive Operator Mechanism
      4.1 Test Problems
      4.2 Experimental Design
        4.2.1 Search Space
        4.2.2 General Parameter Settings
      4.3 Results and Discussion
    5 Operator Selection
      5.1 Operator Selection Algorithms for GP
        5.1.1 Powered Probability Matching
        5.1.2 Adaptive Probability Matching
        5.1.3 Recursive Adaptive Pursuit
      5.2 Experiments and Results
        5.2.1 Test Problems
        5.2.2 Experimental Design
        5.2.3 Results and Discussion
    6 Evaluation of Operator Impact
      6.1 Rates for the Amount of Individual Usage
        6.1.1 Definition of Rates for the Amount of Individual Usage
        6.1.2 Results and Discussion
      6.2 Ratio for the Improvement of Fitness
        6.2.1 Pairs and Group
        6.2.2 Ratio and Children Fitness
        6.2.3 Experimental Design
        6.2.4 Result and Discussion
      6.3 Ranking Point
        6.3.1 Definition of Ranking Point
        6.3.2 Experimental Design
        6.3.3 Result and Discussion
      6.4 Pre-Search Structure
        6.4.1 Definition of Pre-Search Structure
        6.4.2 Preliminary Experiment for Sampling
        6.4.3 Experimental Design
        6.4.4 Result and Discussion
    7 Application: Nakdong River Modeling
      7.1 Problem Description
        7.1.1 Outline
        7.1.2 Data Description
        7.1.3 Model Description
        7.1.4 Methods
      7.2 Results
        7.2.1 Parameter Optimization
        7.2.2 Modeling
      7.3 Summary
    8 Conclusion
      8.1 Summary
      8.2 Future Works
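    Of the operator-selection algorithms the thesis surveys (Probability Matching, Adaptive Pursuit, Multi-Armed Bandits), Adaptive Pursuit is the easiest to show compactly. The sketch below is the generic textbook version, not the thesis's Powered/Adaptive/Recursive variants: the operator with the best quality estimate is pursued toward p_max = 1 - (K - 1) * p_min, while all others decay toward p_min.

```python
def adaptive_pursuit(probs, quality, beta=0.8, p_min=0.05):
    """Generic Adaptive Pursuit update: push the probability of the
    best-rated operator toward p_max and every other one toward p_min."""
    k = len(probs)
    p_max = 1.0 - (k - 1) * p_min
    best = max(range(k), key=lambda i: quality[i])
    for i in range(k):
        target = p_max if i == best else p_min
        probs[i] += beta * (target - probs[i])
    return probs
```

    The quality estimates themselves would be maintained separately, for example by an exponential average of the fitness improvements credited to each operator, as in the probability matching sketch shown earlier.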

    Improving evolutionary algorithms by means of an adaptive parameter control approach

    Evolutionary algorithms (EA) constitute a class of optimization methods that is widely used to solve complex scientific problems. However, EA often converge prematurely over suboptimal solutions, the evolution process is computationally expensive, and setting the required EA parameters is quite difficult. We believe that the best way to address these problems is to begin by improving the parameter setting strategy, which will in turn improve the search path of the optimizer and, we hope, ultimately help prevent premature convergence and relieve the computational burden. The strategy that will achieve this outcome, and the one we adopt in this research, is to ensure that the parameter setting approach takes into account the search path and attempts to drive it in the most advantageous direction. Our objective is therefore to develop an adaptive parameter setting approach capable of controlling all the EA parameters at once.
    To interpret the search path, we propose to incorporate the concept of exploration and exploitation into the feedback indicator. The first step is to review and study the available genotypic diversity measurements used to characterize the exploration of the optimizer over the search space. We do this by implementing a specifically designed benchmark, and propose three diversity requirements for evaluating the meaningfulness of those measures as population diversity estimators. Results show that none of the published formulations is, in fact, a qualified diversity descriptor. To remedy this, we introduce a new genotypic formulation here, the performance analysis of which shows that it produces better results overall, notwithstanding some serious defects. We initiate a similar study aimed at describing the role of exploitation in the search process, which is to indicate promising regions. However, since exploitation is mainly driven by the individuals’ fitness, we turn our attention toward phenotypic convergence measures. Again, the in-depth analysis reveals that none of the published phenotypic descriptors is capable of portraying the fitness distribution of a population. Consequently, a new phenotypic formulation is developed here, which shows perfect agreement with the expected population behavior.
    On the strength of these achievements, we devise an optimizer diagnostic tool based on the new genotypic and phenotypic formulations, and illustrate its value by comparing the impacts of various EA parameters. Although the main purpose of this development is to explore the relevance of using both a genotypic and a phenotypic measure to characterize the search process, our diagnostic tool proves to be one of the few tools available to practitioners for interpreting and customizing the way in which optimizers work on real-world problems. With the knowledge gained in our research, the objective of this thesis is finally met, with the proposal of a new adaptive parameter control approach. The system is based on a Bayesian network that enables all the EA parameters to be considered at once. To the authors’ knowledge, this is the first parameter setting proposal devised to do so. The genotypic and phenotypic measures developed are combined in the form of a credit assignment scheme for rewarding parameters by, among other things, promoting maximization of both exploration and exploitation.
    The proposed adaptive system is evaluated on a recognized benchmark (CEC’05) through the use of a steady-state genetic algorithm (SSGA), and then compared with seven other approaches, such as FAUC-RMAB and G-CMA-ES, which are state-of-the-art adaptive methods. Overall, the results demonstrate statistically that the new proposal not only performs as well as G-CMA-ES, but outperforms almost all the other adaptive systems. Nonetheless, this investigation revealed that none of the methods tested is able to locate the global optimum on complex multimodal problems. This led us to conclude that synergy and complementarity among the parameters involved is probably missing. Consequently, more research on these topics is advised, with a view to devising enhanced optimizers. We provide numerous recommendations for such research at the end of this thesis.
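    The abstract does not disclose the new genotypic and phenotypic formulations themselves, so the snippet below only illustrates the kind of feedback indicators being discussed: a standard mean pairwise-distance estimate of genotypic diversity and a simple normalized fitness spread as a phenotypic convergence signal. Both are common textbook measures (arguably among those the thesis finds wanting), not the formulations proposed in the work.

```python
import math

def mean_pairwise_distance(population):
    """Genotypic diversity as the average Euclidean distance between all
    pairs of real-valued individuals."""
    n = len(population)
    if n < 2:
        return 0.0
    total = sum(math.dist(population[i], population[j])
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

def fitness_spread(fitnesses):
    """Phenotypic convergence as the normalized spread of fitness values:
    near 0 when the population has converged, larger when it has not."""
    lo, hi = min(fitnesses), max(fitnesses)
    return 0.0 if hi == lo else (hi - lo) / max(abs(hi), abs(lo), 1e-12)
```

    An adaptive controller such as the one described above would track indicators like these over the run and use them, together with a credit assignment scheme, to decide how to adjust the EA parameters.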