    Runtime Analysis for Self-adaptive Mutation Rates

    We propose and analyze a self-adaptive version of the $(1,\lambda)$ evolutionary algorithm in which the current mutation rate is part of the individual and thus also subject to mutation. A rigorous runtime analysis on the OneMax benchmark function reveals that a simple local mutation scheme for the rate leads to an expected optimization time (number of fitness evaluations) of $O(n\lambda/\log\lambda + n\log n)$ when $\lambda$ is at least $C \ln n$ for some constant $C > 0$. For all values of $\lambda \ge C \ln n$, this performance is asymptotically best possible among all $\lambda$-parallel mutation-based unbiased black-box algorithms. Our result shows that self-adaptation in evolutionary computation can find complex optimal parameter settings on the fly. At the same time, it proves that a relatively complicated self-adjusting scheme for the mutation rate proposed by Doerr, Gießen, Witt, and Yang (GECCO 2017) can be replaced by our simple endogenous scheme. On the technical side, the paper contributes new tools for the analysis of two-dimensional drift processes arising in the analysis of dynamic parameter choices in EAs, including bounds on occupation probabilities in processes with non-constant drift.
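
    To make the self-adaptive mechanism concrete, here is a minimal Python sketch of a (1,λ) EA in which the mutation rate travels with the individual and is perturbed by a simple local rule before being used; the specific update (multiplying or dividing the rate by a factor F) and the clamping bounds are illustrative assumptions, not necessarily the exact scheme analyzed in the paper.

    import random

    def onemax(x):
        # OneMax: the number of one-bits in the bit string
        return sum(x)

    def self_adaptive_one_comma_lambda_ea(n=100, lam=20, F=2.0, max_evals=200_000):
        # parent individual = bit string plus its own mutation rate
        parent = [random.randint(0, 1) for _ in range(n)]
        rate = 1.0 / n
        evals = 0
        while evals < max_evals and onemax(parent) < n:
            offspring = []
            for _ in range(lam):
                # local mutation of the rate: multiply or divide by F with equal probability
                r = rate * F if random.random() < 0.5 else rate / F
                r = min(max(r, 1.0 / n), 0.5)          # keep the rate in a sensible range
                child = [b ^ (random.random() < r) for b in parent]
                offspring.append((onemax(child), r, child))
                evals += 1
            # comma selection: the best offspring (with its rate) replaces the parent
            _, rate, parent = max(offspring, key=lambda t: t[0])
        return parent, rate, evals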

    How Well Does the Metropolis Algorithm Cope With Local Optima?

    The Metropolis algorithm (MA) is a classic stochastic local search heuristic. It avoids getting stuck in local optima by occasionally accepting inferior solutions. To gain a rigorous understanding of this ability, we conduct a mathematical runtime analysis of the MA on the CLIFF benchmark. Apart from one local optimum, cliff functions are monotonically increasing towards the global optimum. Consequently, to optimize a cliff function, the MA needs to accept an inferior solution only once. Despite CLIFF seemingly being an ideal benchmark for the MA to profit from its main working principle, our mathematical runtime analysis shows that this hope does not come true. Even with the optimal temperature (the only parameter of the MA), the MA optimizes most cliff functions less efficiently than simple elitist evolutionary algorithms (EAs), which can only leave the local optimum by generating a superior solution, possibly far away. This result suggests that our understanding of why the MA is often very successful in practice is not yet complete. Our work also suggests equipping the MA with global mutation operators, an idea supported by our preliminary experiments. Comment: To appear in the proceedings of GECCO 2023. With appendix containing all proofs. 28 pages.
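
    As a concrete illustration of the working principle discussed above, the following Python sketch runs the Metropolis algorithm with the usual single-bit-flip neighbourhood on a cliff-style function; the particular CLIFF definition used here (the fitness drops by roughly d once more than n-d ones are reached) is one common textbook variant and may differ in constants from the one analyzed in the paper.

    import math
    import random

    def cliff(x, d):
        # Fitness increases with the number of ones, but drops after the cliff at n-d ones.
        k, n = sum(x), len(x)
        return k if k <= n - d else k - d + 0.5

    def metropolis(n=100, d=10, temperature=1.0, max_iters=1_000_000):
        x = [random.randint(0, 1) for _ in range(n)]
        fx = cliff(x, d)
        for _ in range(max_iters):
            y = x[:]
            y[random.randrange(n)] ^= 1            # flip one uniformly random bit (local move)
            fy = cliff(y, d)
            delta = fx - fy                        # positive if y is worse than x
            # always accept improvements; accept a worsening of delta with probability exp(-delta/T)
            if delta <= 0 or random.random() < math.exp(-delta / temperature):
                x, fx = y, fy
            if sum(x) == n:                        # the all-ones string is the global optimum
                break
        return x, fx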

    On the limitations of the univariate marginal distribution algorithm to deception and where bivariate EDAs might help

    We introduce a new benchmark problem called Deceptive Leading Blocks (DLB) to rigorously study the runtime of the Univariate Marginal Distribution Algorithm (UMDA) in the presence of epistasis and deception. We show that simple Evolutionary Algorithms (EAs) outperform the UMDA unless the selective pressure $\mu/\lambda$ is extremely high, where $\mu$ and $\lambda$ are the parent and offspring population sizes, respectively. More precisely, we show that the UMDA with a parent population size of $\mu = \Omega(\log n)$ has an expected runtime of $e^{\Omega(\mu)}$ on the DLB problem assuming any selective pressure $\frac{\mu}{\lambda} \geq \frac{14}{1000}$, as opposed to the expected runtime of $\mathcal{O}(n\lambda\log \lambda + n^3)$ for the non-elitist $(\mu,\lambda)$ EA with $\mu/\lambda \leq 1/e$. These results illustrate inherent limitations of univariate EDAs against deception and epistasis, which are common characteristics of real-world problems. In contrast, empirical evidence reveals the efficiency of the bivariate MIMIC algorithm on the DLB problem. Our results suggest that one should consider EDAs with more complex probabilistic models when optimising problems with some degree of epistasis and deception. Comment: To appear in the 15th ACM/SIGEVO Workshop on Foundations of Genetic Algorithms (FOGA XV), Potsdam, Germany.
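
    The following Python sketch shows the UMDA mechanics referred to above: a univariate EDA that keeps one frequency per bit position, samples λ offspring, selects the μ fittest, and re-estimates the frequencies from the selected parents. The DLB benchmark itself is not reproduced here; OneMax is used only as a stand-in fitness in the usage note.

    import random

    def umda(fitness, n, mu=50, lam=100, max_gens=500):
        p = [0.5] * n                              # marginal probability of a one at each position
        lower, upper = 1.0 / n, 1.0 - 1.0 / n      # the usual borders against premature fixation
        best = None
        for _ in range(max_gens):
            pop = [[int(random.random() < p[i]) for i in range(n)] for _ in range(lam)]
            pop.sort(key=fitness, reverse=True)
            best = pop[0]
            parents = pop[:mu]                     # truncation selection with selective pressure mu/lam
            for i in range(n):
                freq = sum(x[i] for x in parents) / mu
                p[i] = min(max(freq, lower), upper)
        return best

    # usage with OneMax as a stand-in fitness function:
    # best = umda(sum, n=100)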

    Drift Analysis with Fitness Levels for Elitist Evolutionary Algorithms

    The fitness level method is a popular tool for analyzing the computation time of elitist evolutionary algorithms. Its idea is to divide the search space into multiple fitness levels and to estimate lower and upper bounds on the computation time from the transition probabilities between fitness levels. However, the lower bound generated from this method is often not tight. To improve the lower bound, this paper rigorously studies an open question about the fitness level method: what are the tightest lower and upper time bounds that can be constructed based on fitness levels? To answer this question, drift analysis with fitness levels is developed, and the tightest-bound problem is formulated as a constrained multi-objective optimization problem subject to fitness level constraints. The tightest metric bounds from fitness levels are constructed and proven for the first time. The metric bounds are then converted into linear bounds, of which existing linear bounds are special cases. This paper establishes a general framework that can cover various linear bounds, from trivial to best coefficients. The framework is generic and promising, as it can be used not only to recover existing bounds but also to derive tighter ones, especially on fitness landscapes where shortcuts exist. This is demonstrated in a case study of the (1+1) EA maximizing the TwoPath function.
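
    For context, the classical fitness-level upper bound that this drift-based framework refines can be stated as follows (a standard textbook formulation, not the paper's new metric or linear bounds):

    % Partition the search space into levels A_1 <_f A_2 <_f ... <_f A_m,
    % where A_m contains only optimal solutions, and let s_i be a lower bound
    % on the probability that an elitist algorithm whose best-so-far solution
    % lies in A_i reaches a higher level in one iteration. Then the expected
    % optimization time satisfies
    \[
      \mathbb{E}[T] \;\le\; \sum_{i=1}^{m-1} \frac{1}{s_i}.
    \]
    % Lower bounds are obtained analogously from upper bounds u_i on the
    % leaving probabilities together with the probabilities of ever visiting
    % the individual levels, which is where the looseness addressed in the
    % paper typically arises.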

    Runtime analysis of non-elitist populations: from classical optimisation to partial information

    Although populations are widely used in optimisation, relatively little has been proven rigorously about their role and behaviour in randomised search processes. This paper presents a new method to prove upper bounds on the expected optimisation time of population-based randomised search heuristics that use non-elitist selection mechanisms and unary variation operators. Our results follow from a detailed drift analysis of the population dynamics in these heuristics. This analysis shows that the optimisation time depends on the relationship between the strength of the selective pressure and the degree of variation introduced by the variation operator. Given limited variation, a surprisingly weak selective pressure suffices to optimise many functions in expected polynomial time. We derive upper bounds on the expected optimisation time of non-elitist Evolutionary Algorithms (EAs) using various selection mechanisms, including fitness proportionate selection. We show that EAs using fitness proportionate selection can optimise standard benchmark functions in expected polynomial time given a sufficiently low mutation rate. As a second contribution, we consider an optimisation scenario with partial information, where fitness values of solutions are only partially available. We prove that non-elitist EAs under a set of specific conditions can optimise benchmark functions in expected polynomial time, even when vanishingly little information about the fitness values of individual solutions or populations is available. To our knowledge, this is the first runtime analysis of randomised search heuristics under partial information.
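
    A minimal Python sketch of the kind of non-elitist population EA analysed above, using fitness-proportionate selection and bitwise mutation with rate chi/n; the parameter name chi, the population size, and the stopping criterion are illustrative assumptions, not the paper's exact setting.

    import random

    def non_elitist_ea(fitness, n, lam=200, chi=0.1, max_gens=2000):
        pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(lam)]
        for _ in range(max_gens):
            fits = [fitness(x) for x in pop]
            # fitness-proportionate selection (assumes non-negative, not-all-zero fitness values,
            # as for OneMax-style benchmarks)
            parents = random.choices(pop, weights=fits, k=lam)
            # unary variation: flip each bit independently with probability chi/n; no elitism,
            # the whole population is replaced every generation
            pop = [[b ^ (random.random() < chi / n) for b in x] for x in parents]
        return max(pop, key=fitness)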

    The First Proven Performance Guarantees for the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) on a Combinatorial Optimization Problem

    The Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is one of the most prominent algorithms to solve multi-objective optimization problems. Recently, the first mathematical runtime guarantees have been obtained for this algorithm, however only for synthetic benchmark problems. In this work, we give the first proven performance guarantees for a classic optimization problem, the NP-complete bi-objective minimum spanning tree problem. More specifically, we show that the NSGA-II with population size $N \ge 4((n-1) w_{\max} + 1)$ computes all extremal points of the Pareto front in an expected number of $O(m^2 n w_{\max} \log(n w_{\max}))$ iterations, where $n$ is the number of vertices, $m$ the number of edges, and $w_{\max}$ is the maximum edge weight in the problem instance. This result confirms, via mathematical means, the good performance of the NSGA-II observed empirically. It also shows that mathematical analyses of this algorithm are not only possible for synthetic benchmark problems, but also for more complex combinatorial optimization problems. As a side result, we also obtain a new analysis of the performance of the global SEMO algorithm on the bi-objective minimum spanning tree problem, which improves the previous best result by a factor of $|F|$, the number of extremal points of the Pareto front, a set that can be as large as $n w_{\max}$. The main reason for this improvement is our observation that both multi-objective evolutionary algorithms find the different extremal points in parallel rather than sequentially, as assumed in the previous proofs. Comment: Author-generated version of a paper appearing in the proceedings of IJCAI 2023.
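
    For the combinatorial setting above, a common way to expose the bi-objective minimum spanning tree problem to an evolutionary algorithm is to encode edge subsets as bit strings and to evaluate both total weights while penalising disconnected selections; the Python sketch below illustrates this modelling (the penalty construction is one standard choice, not necessarily the exact formulation used in the paper).

    def bi_objective_mst_value(edge_mask, n, edges):
        # edges: list of (u, v, w1, w2) tuples; edge_mask: 0/1 list selecting a subset of edges
        parent = list(range(n))                    # union-find for counting connected components

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        total1 = total2 = 0
        components = n
        for bit, (u, v, w1, w2) in zip(edge_mask, edges):
            if bit:
                total1 += w1
                total2 += w2
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    components -= 1
        # each missing connection is penalised by more than any feasible weight sum,
        # so connected selections always dominate disconnected ones
        penalty = (components - 1) * (1 + sum(w1 + w2 for _, _, w1, w2 in edges))
        return (total1 + penalty, total2 + penalty)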

    Self-adaptation Can Help Evolutionary Algorithms Track Dynamic Optima
