    Better Runtime Guarantees Via Stochastic Domination

    Apart from a few exceptions, the mathematical runtime analysis of evolutionary algorithms is mostly concerned with expected runtimes. In this work, we argue that stochastic domination is a notion that should be used more frequently in this area. Stochastic domination allows one to formulate much more informative performance guarantees, it allows one to decouple the algorithm analysis into the true algorithmic part of detecting a domination statement and the probability-theoretic part of deriving the desired probabilistic guarantees from this statement, and it helps in finding simpler and more natural proofs. As particular results, we prove a fitness level theorem which shows that the runtime is dominated by a sum of independent geometric random variables, we prove the first tail bounds for several classic runtime problems, and we give a short and natural proof for Witt's result that the runtime of any (μ,p) mutation-based algorithm on any function with a unique optimum is subdominated by the runtime of a variant of the (1+1) EA on the OneMax function. As side products, we determine the fastest unbiased (1+1) algorithm for the LeadingOnes benchmark problem, both in the general case and when restricted to static mutation operators, and we prove a Chernoff-type tail bound for sums of independent coupon collector distributions. Comment: Significantly extended version of a paper that appeared in the proceedings of EvoCOP 201
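    The fitness level result mentioned in this abstract can be summarised in a single domination statement. The following LaTeX sketch states it in simplified form; the level partition A_1, ..., A_m and the per-level improvement probabilities p_i are the usual fitness-level assumptions, and the precise conditions are given in the paper.

```latex
% Simplified domination-based fitness level statement (conditions abridged).
% Partition the search space into levels A_1 <_f A_2 <_f \dots <_f A_m, where
% A_m contains only optima, and assume that from any point in A_i one
% iteration reaches a strictly higher level with probability at least p_i.
% Then the runtime T is stochastically dominated by a sum of independent
% geometric random variables:
\[
  T \preceq \sum_{i=1}^{m-1} \operatorname{Geom}(p_i),
  \qquad \text{i.e.} \qquad
  \Pr[T \ge k] \;\le\; \Pr\Big[\textstyle\sum_{i=1}^{m-1} G_i \ge k\Big]
  \quad \text{for all } k,
\]
% with G_1, ..., G_{m-1} independent and G_i ~ Geom(p_i). Tail bounds on T
% then follow from Chernoff-type bounds for sums of independent geometrics.
```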

    Runtime Analysis for Self-adaptive Mutation Rates

    We propose and analyze a self-adaptive version of the (1,λ) evolutionary algorithm in which the current mutation rate is part of the individual and thus also subject to mutation. A rigorous runtime analysis on the OneMax benchmark function reveals that a simple local mutation scheme for the rate leads to an expected optimization time (number of fitness evaluations) of O(nλ/log λ + n log n) when λ is at least C ln n for some constant C > 0. For all values of λ ≥ C ln n, this performance is asymptotically best possible among all λ-parallel mutation-based unbiased black-box algorithms. Our result shows that self-adaptation in evolutionary computation can find complex optimal parameter settings on the fly. At the same time, it proves that a relatively complicated self-adjusting scheme for the mutation rate proposed by Doerr, Gießen, Witt, and Yang (GECCO 2017) can be replaced by our simple endogenous scheme. On the technical side, the paper contributes new tools for the analysis of two-dimensional drift processes arising in the analysis of dynamic parameter choices in EAs, including bounds on occupation probabilities in processes with non-constant drift.
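    For concreteness, here is a minimal Python sketch of the kind of self-adaptive (1,λ) EA described in the abstract: the mutation rate is encoded with the individual, each offspring first halves or doubles its rate (each with probability 1/2) and then applies standard bit mutation with the new rate, and comma selection keeps the best offspring together with its rate. The initial rate, the rate bounds, and the tie-breaking are illustrative assumptions; the exact details of the analysed algorithm are in the paper.

```python
import random

def onemax(x):
    """OneMax: number of one-bits in the bit string."""
    return sum(x)

def self_adaptive_one_comma_lambda_ea(n, lam, max_evals=10**6):
    """Simplified sketch of a self-adaptive (1,lambda) EA on OneMax.

    Each offspring first mutates the parent's rate parameter r (halving or
    doubling it with probability 1/2 each, kept within illustrative bounds)
    and then flips each bit independently with probability r/n. The best
    offspring and its rate replace the parent (comma selection).
    """
    parent = [random.randint(0, 1) for _ in range(n)]
    r = 2.0                                    # initial rate parameter (mutation probability r/n)
    evals = 0
    while onemax(parent) < n and evals < max_evals:
        best_child, best_fit, best_r = None, -1, r
        for _ in range(lam):
            child_r = r / 2 if random.random() < 0.5 else 2 * r
            child_r = min(max(child_r, 1.0), n / 4)          # illustrative rate bounds
            p = child_r / n
            child = [1 - b if random.random() < p else b for b in parent]
            fit = onemax(child)
            evals += 1
            if fit > best_fit:
                best_child, best_fit, best_r = child, fit, child_r
        parent, r = best_child, best_r         # comma selection: parent always replaced
    return evals

# Example: print(self_adaptive_one_comma_lambda_ea(n=200, lam=20))
```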

    Significance-based Estimation-of-Distribution Algorithms

    Estimation-of-distribution algorithms (EDAs) are randomized search heuristics that maintain a probabilistic model of the solution space. This model is updated from iteration to iteration, based on the quality of the solutions sampled according to the model. As previous works show, this short-term perspective can lead to erratic updates of the model, in particular to bit frequencies approaching a random boundary value. Such frequencies take a long time to be moved back to the middle range, leading to significant performance losses. In order to overcome this problem, we propose a new EDA based on the classic compact genetic algorithm (cGA) that takes into account a longer history of samples and updates its model only with respect to information which it classifies as statistically significant. We prove that this significance-based compact genetic algorithm (sig-cGA) optimizes the commonly regarded benchmark functions OneMax, LeadingOnes, and BinVal all in O(n log n) time, a result shown for no other EDA or evolutionary algorithm so far. For the recently proposed scGA -- an EDA that tries to prevent erratic model updates by imposing a bias to the uniformly distributed model -- we prove that it optimizes OneMax only in a time exponential in the hypothetical population size 1/ρ. Similarly, we show that the convex search algorithm cannot optimize OneMax in polynomial time.
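    To illustrate the core idea of updating the model only on statistically significant evidence, the following Python fragment sketches a per-bit significance check of the kind the abstract describes. The history handling, the deviation margin (eps), and the boundary frequencies used here are illustrative assumptions, not the paper's exact sig-cGA rule.

```python
import math

def significance_update_sketch(history, freq, n, eps=4.0):
    """Illustrative significance-based frequency update for one bit position.

    `history` holds the bit values that the winning sample had at this
    position since the last frequency change; `freq` is the current model
    frequency of sampling a 1. The frequency is moved to a boundary value
    only if the observed number of ones deviates from its expectation under
    `freq` by a Chernoff-style margin (eps and the margin are illustrative).
    """
    k = len(history)
    if k == 0:
        return freq
    ones = sum(history)
    expected = freq * k
    margin = eps * math.sqrt(max(expected, 1.0) * math.log(n))
    if ones > expected + margin:        # statistically significant surplus of ones
        return 1.0 - 1.0 / n
    if ones < expected - margin:        # statistically significant surplus of zeros
        return 1.0 / n
    return freq                         # no significant signal: leave the model unchanged
```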

    Simple hyper-heuristics control the neighbourhood size of randomised local search optimally for LeadingOnes

    Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this paper we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the 'simple' Random Gradient HH so that success can be measured over a fixed period of time τ, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to k low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the k heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). Experimental analyses confirm these results for different problem sizes (up to n = 10^8) and shed some light on the best choices for the parameter τ in various situations.
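    The generalised mechanism lends itself to a short sketch. The Python code below illustrates a Generalised Random Gradient hyper-heuristic over low-level heuristics RLS_1, ..., RLS_k on LeadingOnes, in the spirit of the abstract: a heuristic is chosen uniformly at random, run for a period of τ iterations, and exploited further (with a fresh period) whenever it produces an improvement within the period. Acceptance of equally good moves and other details are simplified assumptions of this sketch.

```python
import random

def leading_ones(x):
    """LeadingOnes: length of the longest prefix of one-bits."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def rls_k_step(x, k):
    """One step of Randomised Local Search flipping exactly k distinct bits."""
    y = x[:]
    for i in random.sample(range(len(x)), k):
        y[i] = 1 - y[i]
    return y

def generalised_random_gradient(n, k_max, tau, max_iters=10**7):
    """Sketch of a Generalised Random Gradient hyper-heuristic on LeadingOnes.

    A low-level heuristic RLS_k (k chosen uniformly from 1..k_max) is run for
    a period of tau iterations; if it finds an improvement within the period,
    a new period starts with the same heuristic, otherwise a new heuristic is
    drawn at random. Accepting equally fit offspring is an assumption here.
    """
    x = [random.randint(0, 1) for _ in range(n)]
    iters = 0
    while leading_ones(x) < n and iters < max_iters:
        k = random.randint(1, k_max)            # random choice of low-level heuristic
        improved = True
        while improved and leading_ones(x) < n and iters < max_iters:
            improved = False
            for _ in range(tau):                # one learning period of length tau
                iters += 1
                y = rls_k_step(x, k)
                if leading_ones(y) > leading_ones(x):
                    x, improved = y, True       # success: keep exploiting this heuristic
                    break
                if leading_ones(y) == leading_ones(x):
                    x = y                       # accept equally good moves (sketch assumption)
    return iters

# Example: print(generalised_random_gradient(n=500, k_max=4, tau=2000))
```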

    On the Runtime Analysis of Selection Hyper-heuristics for Pseudo-Boolean Optimisation

    Rather than manually deciding on a suitable algorithm configuration for a given optimisation problem, hyper-heuristics are high-level search algorithms which evolve the heuristic to be applied. While there are numerous reported successful applications of hyper-heuristics to combinatorial optimisation problems, it is not yet fully understood how well they perform and on which problem classes they are effective. Selection hyper-heuristics (SHHs) employ smart methodologies to select which low-level heuristic from a pre-defined set to apply in the next decision step. This thesis extends and improves upon the existing foundational understanding of the behaviour and performance of SHHs, providing insights into how and when they can be successfully applied by analysing the time complexity of SHHs on a variety of unimodal and multimodal problem classes. Through a rigorous theoretical analysis, we show that while four commonly applied simple SHHs from the literature do not learn to select the most promising low-level heuristics, generalising them such that application of the chosen heuristic occurs over a longer period of time allows for vastly improved performance. Furthermore, we prove that extending the size of the set of low-level heuristics can improve the performance of the generalised SHHs, outperforming SHHs with smaller sets of low-level heuristics. We show that allowing the SHH to automatically adapt the length of the learning period may further improve the performance and outperform non-adaptive variants. SHHs selecting between two move-acceptance operators are also analysed on two classes of multimodal benchmark functions. An analysis of the performance of simple SHHs on these functions provides insights into the effectiveness of the presented methodologies for escaping from local optima.
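    One way to make the learning period self-adjusting, as mentioned in the abstract, is a success-based rule for τ. The sketch below shows one plausible such rule for illustration only; the adaptive mechanism analysed in the thesis may differ.

```python
def adapt_period(tau, success, factor=2.0, tau_min=1, tau_max=10**6):
    """One plausible success-based adaptation of the learning period tau
    (illustration only, not necessarily the thesis's mechanism): shrink tau
    after a period in which the chosen heuristic succeeded, grow it after an
    unsuccessful period, and keep it within [tau_min, tau_max]."""
    tau = tau / factor if success else tau * factor
    return int(min(max(tau, tau_min), tau_max))
```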