
    Offspring Population Size Matters when Comparing Evolutionary Algorithms with Self-Adjusting Mutation Rates

    We analyze the performance of the 2-rate (1+λ) Evolutionary Algorithm (EA) with self-adjusting mutation rate control, its 3-rate counterpart, and a (1+λ) EA variant using multiplicative update rules on the OneMax problem. We compare their efficiency for offspring population sizes ranging up to λ = 3,200 and problem sizes up to n = 100,000. Our empirical results show that the ranking of the algorithms is very consistent across all tested dimensions, but strongly depends on the population size. While for small values of λ the 2-rate EA performs best, the multiplicative updates become superior starting from some threshold value of λ between 50 and 100. Interestingly, for population sizes around 50, the (1+λ) EA with static mutation rates performs on par with the best of the self-adjusting algorithms. We also consider how the lower bound p_min for the mutation rate influences the efficiency of the algorithms. We observe that for the 2-rate EA and the EA with multiplicative update rules the more generous bound p_min = 1/n^2 gives better results than p_min = 1/n when λ is small. For both algorithms the situation reverses for large λ. Comment: To appear at the Genetic and Evolutionary Computation Conference (GECCO'19). v2: minor language revisions.
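
    As a concrete illustration, here is a minimal Python sketch of the 2-rate (1+λ) EA on OneMax. It follows the standard 2-rate scheme (half of the offspring mutate with rate p/2, half with rate 2p; the rate of the best offspring is adopted with probability 1/2, otherwise the new rate is drawn uniformly from {p/2, 2p}); the initial rate and the clamping to [p_min, 1/2] are assumptions, not necessarily the paper's exact experimental setup.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits in the string."""
    return sum(x)

def mutate(x, p):
    """Standard bit mutation: flip each bit independently with probability p."""
    return [bit ^ (random.random() < p) for bit in x]

def two_rate_ea(n, lam, p_min, max_evals=10**7):
    """Sketch of the 2-rate (1+lambda) EA with self-adjusting mutation rate.
    Returns the number of evaluations until the optimum of OneMax is found
    (or the evaluation budget is exhausted)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    p = 2.0 / n  # initial rate: an assumption, not taken from the paper
    evals = 1
    while fx < n and evals < max_evals:
        best_y, best_f, best_rate = None, -1, p
        for i in range(lam):
            rate = p / 2 if i < lam // 2 else 2 * p  # two rate classes
            y = mutate(x, rate)
            fy = onemax(y)
            evals += 1
            if fy > best_f:
                best_y, best_f, best_rate = y, fy, rate
        if best_f >= fx:  # elitist (1+lambda) selection
            x, fx = best_y, best_f
        # self-adjustment step of the 2-rate scheme
        p = best_rate if random.random() < 0.5 else random.choice([p / 2, 2 * p])
        p = min(max(p, p_min), 0.5)  # clamp to [p_min, 1/2]
    return evals
```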

    ADAPTIVE SELECTION OF AUXILIARY OBJECTIVES IN MULTIOBJECTIVE EVOLUTIONARY ALGORITHMS

    Subject of Research. We propose a modification of the EA+RL method, which increases the efficiency of evolutionary algorithms by means of auxiliary objectives. The proposed modification is compared to existing objective selection methods on the example of the travelling salesman problem. Method. In the EA+RL method, a reinforcement learning algorithm is used to select an objective – the target objective or one of the auxiliary objectives – at each iteration of a single-objective evolutionary algorithm. The proposed modification of the EA+RL method adapts this approach for use with a multiobjective evolutionary algorithm. As opposed to the EA+RL method, in this modification one of the auxiliary objectives is selected by reinforcement learning and optimized together with the target objective at each step of the multiobjective evolutionary algorithm. Main Results. The proposed modification of the EA+RL method was compared to the existing objective selection methods on the travelling salesman problem. In the EA+RL method and its proposed modification, reinforcement learning algorithms for stationary and non-stationary environments were used. The proposed modification applied with reinforcement learning for a non-stationary environment outperformed the considered objective selection algorithms on most problem instances. Practical Significance. The proposed approach increases the efficiency of evolutionary algorithms, which may be used for solving discrete NP-hard optimization problems, in particular combinatorial path search problems and scheduling problems.
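
    The selection mechanism at the core of EA+RL can be sketched in a few lines of Python. The ε-greedy Q-value agent below is a deliberately simplified stand-in for the reinforcement learning algorithms used in the paper (which also cover non-stationary environments); the parameter values are assumptions.

```python
import random

class ObjectiveSelector:
    """Epsilon-greedy stand-in for the RL component of EA+RL: keeps a
    Q-value per objective (index 0 could be the target objective, the
    rest auxiliary ones) and is rewarded by the observed change of the
    target objective."""

    def __init__(self, n_objectives, epsilon=0.1, alpha=0.5):
        self.q = [0.0] * n_objectives
        self.epsilon = epsilon  # exploration probability (assumed value)
        self.alpha = alpha      # learning rate (assumed value)

    def select(self):
        """Pick the objective that guides the next EA iteration."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda i: self.q[i])

    def update(self, objective, reward):
        """Running-average Q-update; the reward is the improvement of
        the target objective achieved in the iteration."""
        self.q[objective] += self.alpha * (reward - self.q[objective])
```

    In each iteration of the evolutionary algorithm one would call select(), run one generation guided by the chosen objective, and pass the resulting change of the target objective to update() as the reward.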

    Runtime analysis of a population-based evolutionary algorithm with auxiliary objectives selected by reinforcement learning

    We propose the (2+2λ)-EA+RL method of auxiliary objective selection, a population-based modification of the EA+RL method. We analyse the efficiency of this method on the XdivK problem, which is considered hard for random search heuristics due to its multiple plateaus. We prove that in the presence of a helping auxiliary objective this method finds the optimum in O(n^2) fitness evaluations in expectation, while the initial EA+RL, which is not population-based, needs at least Ω(n^(k-1)) fitness evaluations, where k is the plateau size. We also prove that in the presence of an obstructive auxiliary objective the expected runtime increases only by a constant factor.
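
    For reference, the XdivK benchmark is easy to state. The sketch below also shows OneMax, which is the canonical helping objective for XdivK in this line of work (an assumption here, since the abstract does not name the helper): it distinguishes points that XdivK places on the same plateau.

```python
def xdivk(x, k):
    """XdivK fitness: floor(OneMax(x) / k).  All strings whose number of
    ones falls into the same block of k consecutive values share one
    fitness value, producing the plateaus that make the problem hard."""
    return sum(x) // k

def onemax(x):
    """OneMax; a natural helping auxiliary objective for XdivK since it
    distinguishes points lying on the same XdivK plateau."""
    return sum(x)
```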

    Fast Re-Optimization of LeadingOnes with Frequent Changes

    In real-world optimization scenarios, the problem instance that we are asked to solve may change during the optimization process, e.g., when new information becomes available or when the environmental conditions change. In such situations, one could hope to achieve reasonable performance by continuing the search from the best solution found for the original problem. Likewise, one may hope that when solving several problem instances that are similar to each other, it can be beneficial to "warm-start" the optimization process of the second instance with the best solution found for the first. However, it was shown in [Doerr et al., GECCO 2019] that even when initialized with structurally good solutions, evolutionary algorithms can have a tendency to replace these good solutions by structurally worse ones, resulting in optimization times that have no advantage over the same algorithms started from scratch. Doerr et al. also proposed a diversity mechanism to overcome this problem. Their approach balances greedy search around a best-so-far solution for the current problem with search in the neighborhood of the best solution found for the previous instance. In this work, we first show that the re-optimization approach suggested by Doerr et al. reaches a limit when the problem instances are prone to more frequent changes. More precisely, we show that it gets stuck on the dynamic LeadingOnes problem, in which the target string changes periodically. We then propose a modification of their algorithm which interpolates between greedy search around the previous-best and the current-best solution. We empirically evaluate our smoothed re-optimization algorithm on LeadingOnes instances with various frequencies of change and with different perturbation factors, and show that it outperforms both a fully restarted (1+1) Evolutionary Algorithm and the re-optimization approach by Doerr et al.
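
    The dynamic benchmark can be made concrete with a short sketch. Below, LeadingOnes is generalized to an arbitrary target string z, and the periodic change flips each bit of z independently; the exact perturbation model is an assumption made for illustration, parametrized by a perturbation factor as in the abstract.

```python
import random

def leading_ones(x, z):
    """Generalized LeadingOnes: the length of the longest prefix on which
    the search point x agrees with the target string z."""
    count = 0
    for xi, zi in zip(x, z):
        if xi != zi:
            break
        count += 1
    return count

def perturb_target(z, perturbation_factor):
    """Periodic change of the target (assumed model): each bit of z is
    flipped independently with probability perturbation_factor / len(z)."""
    n = len(z)
    return [zi ^ (random.random() < perturbation_factor / n) for zi in z]
```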

    Generation of Tests for Programming Challenge Tasks Using Helper-Objectives

    The generation of performance tests for programming challenge tasks is considered. A number of evolutionary approaches are compared on two different solutions of an example problem. It is shown that using helper-objectives enhances evolutionary algorithms in the considered case. The general approach involves automated selection of such objectives.

    Efficient Computation of Fitness Function for Evolutionary Clustering

    Evolutionary algorithms (EAs) are random search heuristics that can solve various optimization problems. There are plenty of papers describing different approaches developed to apply evolutionary algorithms to the clustering problem, although none of them addresses the cost of fitness function computation. In clustering, many validity indices exist that are designed to evaluate the quality of a resulting partition of points. Their computational complexity makes them hard to use as fitness functions. In this paper, we propose an efficient method for the iterative computation of clustering validity indices, which makes the application of EAs to this problem much more practical than it was before.
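
    The flavor of such an iterative computation can be illustrated on one concrete quantity. The sketch below maintains the within-cluster sum of squares (WCSS) under single-point reassignments, a typical move of an evolutionary clustering operator; the paper covers proper validity indices, so this is only an illustration of the mechanism, not the paper's method. It relies on the identity sum_i ||x_i - mean||^2 = sum_i ||x_i||^2 - ||sum_i x_i||^2 / m, which turns a move into an O(d) update instead of a full O(nd) recomputation.

```python
import numpy as np

class IncrementalWCSS:
    """Within-cluster sum of squares maintained incrementally: per cluster
    we keep the point count, the vector sum, and the sum of squared norms,
    which together determine the cluster's WCSS contribution exactly."""

    def __init__(self, points, labels, n_clusters):
        self.points = np.asarray(points, dtype=float)
        self.labels = np.asarray(labels)
        d = self.points.shape[1]
        self.count = np.zeros(n_clusters)
        self.vec_sum = np.zeros((n_clusters, d))
        self.sq_sum = np.zeros(n_clusters)
        for x, c in zip(self.points, self.labels):
            self._add(c, x)

    def _add(self, c, x):
        self.count[c] += 1
        self.vec_sum[c] += x
        self.sq_sum[c] += x @ x

    def _remove(self, c, x):
        self.count[c] -= 1
        self.vec_sum[c] -= x
        self.sq_sum[c] -= x @ x

    def move(self, i, new_cluster):
        """Reassign point i and update the statistics in O(d)."""
        x = self.points[i]
        self._remove(self.labels[i], x)
        self._add(new_cluster, x)
        self.labels[i] = new_cluster

    def value(self):
        """Current WCSS over all non-empty clusters."""
        total = 0.0
        for c in range(len(self.count)):
            if self.count[c] > 0:
                total += self.sq_sum[c] - (self.vec_sum[c] @ self.vec_sum[c]) / self.count[c]
        return total
```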

    Blending Dynamic Programming with Monte Carlo Simulation for Bounding the Running Time of Evolutionary Algorithms

    With the goal to provide absolute lower bounds for the best possible running times that can be achieved by (1+λ)-type search heuristics on common benchmark problems, we recently suggested a dynamic programming approach that computes optimal expected running times and the regret values incurred when deviating from the optimal parameter choice. Our previous work is restricted to problems for which the transition probabilities between different states can be expressed by relatively simple mathematical expressions. With the goal to cover broader sets of problems, we suggest in this work an extension of the dynamic programming approach to settings in which the transition probabilities cannot necessarily be computed exactly, but can be approximated numerically, up to arbitrary precision, by Monte Carlo sampling. We apply our hybrid Monte Carlo dynamic programming approach to a concatenated jump function and demonstrate how the obtained bounds can be used to gain a deeper understanding of parameter control schemes. Comment: 8 pages, 4 figures. Submitted to the IEEE Congress on Evolutionary Computation 2021.
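
    A rough sketch of the hybrid approach, under two assumptions: the process is elitist (it never moves to a worse state), and a user-supplied function sample_move(state, param), hypothetical here, simulates one step of the heuristic. Transition probabilities are estimated by sampling and then plugged into the backward dynamic program; unlike the exact variant, the resulting values are only Monte Carlo estimates.

```python
def estimate_transitions(sample_move, state, params, n_states, samples=10_000):
    """Monte Carlo estimate of the transition probabilities out of `state`
    for each parameter value; `sample_move(state, param)` is assumed to
    simulate one step of the process and return the resulting state."""
    probs = {}
    for p in params:
        counts = [0] * n_states
        for _ in range(samples):
            counts[sample_move(state, p)] += 1
        probs[p] = [c / samples for c in counts]
    return probs

def optimal_expected_times(sample_move, n_states, params, samples=10_000):
    """Backward dynamic program over the (assumed elitist) state space:
    for each state s, pick the parameter minimizing the expected
    remaining time
        T(s) = (1 + sum_{s' > s} q(s, p, s') * T(s')) / Pr[leave s],
    with the probabilities q estimated by sampling rather than computed
    in closed form."""
    T = [0.0] * n_states  # T at the last state (the optimum) is 0
    for s in range(n_states - 2, -1, -1):
        probs = estimate_transitions(sample_move, s, params, n_states, samples)
        best = float("inf")
        for p in params:
            q = probs[p]
            p_leave = sum(q[s + 1:])
            if p_leave == 0:
                continue  # this parameter (empirically) never improves
            expected = (1 + sum(q[t] * T[t] for t in range(s + 1, n_states))) / p_leave
            best = min(best, expected)
        T[s] = best
    return T
```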

    Illustrating the trade-off between time, quality, and success probability in heuristic search

    Benchmarking aims to investigate the performance of one or several algorithms for a set of reference problems by empirical means. An important motivation for benchmarking is the generation of insight that can be leveraged for designing more efficient solvers, for selecting a best algorithm, and/or for choosing a suitable instantiation of a parametrized algorithm. An important component of benchmarking is its design of experiment (DoE), which comprises the selection of the problems, the algorithms, the computational budget, etc., but also the performance indicators by which the data is evaluated. The DoE very strongly depends on the question that the user aims to answer. Flexible benchmarking environments that can easily adapt to users' needs are therefore in high demand. With the objective to provide such a flexible benchmarking environment, the recently released IOHprofiler not only allows the user to choose the sets of benchmark problems and reference algorithms, but provides in addition a highly interactive, versatile performance evaluation. However, it still lacks a few important performance indicators that are relevant to practitioners and/or theoreticians. In this discussion paper we focus on one particular aspect: the probability that a considered algorithm reaches a certain target value within a given time budget. We thereby suggest extending the classically regarded fixed-target and fixed-budget analyses by a fixed-probability measure. Fixed-probability curves are estimated using Pareto layers of (target, budget) pairs that can be realized by the algorithm with the required certainty. We also provide a first implementation of a fixed-probability module within the IOHprofiler environment.
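
    A fixed-probability curve can be estimated from standard benchmarking data. The sketch below is one straightforward reading of the idea, not IOHprofiler's implementation: for each target it returns the smallest budget within which at least a fraction p of the runs reached that target.

```python
import math

def fixed_probability_curve(hitting_times, p):
    """Estimate a fixed-probability curve from run data, for 0 < p <= 1.
    `hitting_times[target]` is a list with one entry per run: the
    evaluation count at which that run first reached the target, or
    math.inf if it never did.  For each target, the empirical p-quantile
    of the hitting times is the smallest budget at which at least a
    fraction p of the runs succeeded."""
    curve = []
    for target, times in sorted(hitting_times.items()):
        times = sorted(times)
        k = max(1, math.ceil(p * len(times)))  # need at least k successful runs
        budget = times[k - 1]
        if budget < math.inf:  # skip targets not reachable with certainty p
            curve.append((target, budget))
    return curve
```

    The resulting (target, budget) pairs form the layer of points that the algorithm realizes with certainty at least p, in the spirit of the Pareto layers mentioned above.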