
    Unbiased Black-Box Complexities of Jump Functions

    We analyze the unbiased black-box complexity of jump functions with small, medium, and large sizes of the fitness plateau surrounding the optimal solution. Among other results, we show that when the jump size is $(1/2 - \varepsilon)n$, that is, only a small constant fraction of the fitness values is visible, then the unbiased black-box complexities for arities $3$ and higher are of the same order as those for the simple \textsc{OneMax} function. Even for the extreme jump function, in which all but the two fitness values $n/2$ and $n$ are blanked out, polynomial-time mutation-based (i.e., unary unbiased) black-box optimization algorithms exist. This is quite surprising given that for the extreme jump function almost the whole search space (all but a $\Theta(n^{-1/2})$ fraction) is a plateau of constant fitness. To prove these results, we introduce new tools for the analysis of unbiased black-box complexities, for example, selecting the new parent individual not only by comparing the fitnesses of the competing search points, but also by taking into account the (empirical) expected fitnesses of their offspring.
    Comment: This paper is based on results presented in the conference versions [GECCO 2011] and [GECCO 2014].
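    A jump function with a fitness plateau of the kind described above can be sketched as follows. This is a minimal illustration following one common formalization from the black-box-complexity literature (the paper's exact definition may differ in details such as the handling of the all-zeros region): fitness values within distance `ell` of the extremes are blanked out to a constant plateau value, except at the optimum itself.

```python
def jump(x, ell):
    """Plateau-style jump function on bit strings.

    Fitness equals the number of ones in the visible middle band
    (ell < ones < n - ell) and at the optimum; everything else is
    blanked out to the constant plateau value 0.  One common
    formalization -- a sketch, not the paper's verbatim definition.
    """
    n = len(x)
    ones = sum(x)
    if ones == n:               # the global optimum stays visible
        return n
    if ell < ones < n - ell:    # the visible middle band
        return ones
    return 0                    # blanked-out plateau


# n = 10, ell = 2: optimum visible, near-optimal points blanked out,
# middle band visible.
print(jump([1] * 10, 2), jump([1] * 9 + [0], 2), jump([1] * 3 + [0] * 7, 2))
# -> 10 0 3
```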

    Robustness of Populations in Stochastic Environments


    Bounding Bloat in Genetic Programming

    While many optimization problems work with a fixed number of decision variables and thus a fixed-length representation of possible solutions, genetic programming (GP) works on variable-length representations. A naturally occurring problem is that of bloat (unnecessary growth of solutions) slowing down optimization. So far, theoretical analyses could not bound bloat and instead required explicit assumptions on its magnitude. In this paper we analyze bloat in mutation-based genetic programming for the two test functions ORDER and MAJORITY. We overcome previous assumptions on the magnitude of bloat and give matching or close-to-matching upper and lower bounds for the expected optimization time. In particular, we show that the (1+1) GP takes (i) $\Theta(T_{init} + n \log n)$ iterations with bloat control on ORDER as well as MAJORITY; and (ii) $O(T_{init} \log T_{init} + n (\log n)^3)$ and $\Omega(T_{init} + n \log n)$ (and $\Omega(T_{init} \log T_{init})$ for $n=1$) iterations without bloat control on MAJORITY.
    Comment: An extended abstract has been published at GECCO 201
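    To make the MAJORITY test function concrete, here is a sketch of its fitness evaluation on a flat multiset of leaves (abstracting away the GP tree structure). The definition used below -- variable $i$ contributes iff the positive literal $x_i$ occurs at least once and at least as often as its negation -- follows the common formulation in the GP-theory literature; the paper's exact setup may differ.

```python
from collections import Counter

def majority(leaves, n):
    """MAJORITY fitness sketch.

    `leaves` is a multiset of literals: (i, True) stands for x_i and
    (i, False) for its negation.  Variable i counts toward fitness iff
    x_i occurs at least once and at least as often as its negation.
    A sketch of the common definition, not the paper's verbatim setup.
    """
    pos = Counter(i for i, sign in leaves if sign)
    neg = Counter(i for i, sign in leaves if not sign)
    return sum(1 for i in range(n) if pos[i] >= max(1, neg[i]))


# n = 3: x_0 appears twice vs. one negation (counts), x_1 is absent
# (does not count), x_2 is outnumbered by its negation (does not count).
fitness = majority([(0, True), (0, True), (0, False),
                    (2, True), (2, False), (2, False)], n=3)
print(fitness)  # -> 1
```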

    Intuitive Analyses via Drift Theory

    Humans are bad with probabilities, and the analysis of randomized algorithms offers many pitfalls for the human mind. Drift theory is an intuitive tool for reasoning about random processes: it allows turning expected stepwise changes into expected first-hitting times. While drift theory is used extensively by the community studying randomized search heuristics, it has seen hardly any applications outside this field, despite the many research questions that can be formulated as first-hitting times. We state the most useful drift theorems and demonstrate their use for various randomized processes, including approximating vertex cover, the coupon collector process, a random sorting algorithm, and the Moran process. Finally, we consider processes without expected stepwise change and give a lemma based on drift theory that is applicable in such scenarios without drift. We use this tool for the analysis of the gambler's ruin process, for a coloring algorithm, for an algorithm for 2-SAT, and for a version of the Moran process without bias.
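    The coupon collector process mentioned above makes a compact worked example of the drift-theory recipe. With $X_t$ the number of coupon types still missing, one draw finds a missing type with probability $X_t/n$, so the expected stepwise change is $E[X_t - X_{t+1} \mid X_t] = X_t/n$; the multiplicative drift theorem then turns this into the first-hitting-time bound $E[T] \le n(1 + \ln n)$, close to the exact value $n H_n$. A minimal simulation checking the bound (the choice of $n$, seed, and run count is arbitrary):

```python
import random
from math import log

def coupon_collector(n, rng):
    """Draw uniform coupons until all n types are seen; return #draws."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

# Multiplicative drift: X_t = #missing types, drift E[X_t - X_{t+1}] =
# X_t / n, hence E[T] <= n * (1 + ln n) by the drift theorem.
rng = random.Random(0)
n = 50
runs = 2000
avg = sum(coupon_collector(n, rng) for _ in range(runs)) / runs
bound = n * (1 + log(n))
print(avg, bound)  # the empirical mean should lie below the drift bound
```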

    Theoretical Study of Optimizing Rugged Landscapes with the cGA

    Estimation of distribution algorithms (EDAs) provide a distribution-based approach for optimization which adapts its probability distribution during the run of the algorithm. We contribute to the theoretical understanding of EDAs and point out that their distribution approach makes them more suitable to deal with rugged fitness landscapes than classical local search algorithms. Concretely, we make the OneMax function rugged by adding noise to each fitness value. The cGA can nevertheless find solutions with $n(1 - \epsilon)$ many 1s, even for high variance of noise. In contrast to this, RLS and the (1+1) EA, with high probability, only find solutions with $n(1/2 + o(1))$ many 1s, even for noise with small variance.
    Comment: 17 pages, 1 figure, PPSN 202
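    The compact GA (cGA) on a noise-perturbed OneMax can be sketched in a few lines. This is a minimal illustration under assumptions of ours: additive Gaussian noise per fitness evaluation as the rugged-landscape proxy, the standard border values $1/n$ and $1 - 1/n$ for the frequencies, and arbitrary choices of $n$, the hypothetical population size $K$, noise level, and budget -- not the paper's exact experimental setup.

```python
import random

def noisy_onemax(x, sigma, rng):
    """OneMax plus fresh additive Gaussian noise per evaluation
    (our rugged-landscape proxy; an assumption, not the paper's
    verbatim model)."""
    return sum(x) + rng.gauss(0.0, sigma)

def cga(n, K, sigma, budget, rng):
    """Minimal cGA sketch: sample two offspring from the frequency
    vector, shift frequencies by 1/K toward the (noisily) better one
    on positions where they differ, and keep frequencies inside the
    borders [1/n, 1 - 1/n].  Returns the best #ones sampled."""
    p = [0.5] * n
    best = 0
    for _ in range(budget):
        x = [1 if rng.random() < pi else 0 for pi in p]
        y = [1 if rng.random() < pi else 0 for pi in p]
        if noisy_onemax(x, sigma, rng) < noisy_onemax(y, sigma, rng):
            x, y = y, x                       # x is the noisy winner
        for i in range(n):
            if x[i] != y[i]:
                p[i] += (1.0 / K) if x[i] == 1 else (-1.0 / K)
                p[i] = min(1 - 1.0 / n, max(1.0 / n, p[i]))
        best = max(best, sum(x))
    return best

rng = random.Random(1)
best = cga(n=50, K=200, sigma=1.0, budget=20000, rng=rng)
print(best)  # should clearly beat the ~n/2 ones of a random string
```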

    Run Time Bounds for Integer-Valued OneMax Functions

    While most theoretical run time analyses of discrete randomized search heuristics focused on finite search spaces, we consider the search space $\mathbb{Z}^n$. This is a further generalization of the search space of multi-valued decision variables $\{0,\ldots,r-1\}^n$. We consider as fitness functions the distance to the (unique) non-zero optimum $a$ (based on the $L_1$-metric) and the (1+1) EA, which mutates by applying a step operator on each component that is determined to be varied. For changing by $\pm 1$, we show that the expected optimization time is $\Theta(n \cdot (|a|_{\infty} + \log(|a|_H)))$. In particular, the time is linear in the maximum value of the optimum $a$. Employing a different step operator which chooses a step size from a distribution so heavy-tailed that the expectation is infinite, we get an optimization time of $O(n \cdot \log^2 (|a|_1) \cdot (\log (\log (|a|_1)))^{1 + \epsilon})$. Furthermore, we show that RLS with step size adaptation achieves an optimization time of $\Theta(n \cdot \log(|a|_1))$. We conclude with an empirical analysis, comparing the above algorithms also with a variant of CMA-ES for discrete search spaces.
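    The $\pm 1$ step operator on $\mathbb{Z}^n$ can be illustrated with a small RLS-style sketch: start at the origin, pick one coordinate uniformly at random, move it by $+1$ or $-1$, and accept the move iff the $L_1$ distance to the optimum does not increase. This is a simplified stand-in (one coordinate per step, no mutation rate, our own acceptance rule and budget), not the paper's exact algorithm.

```python
import random

def rls_pm1(a, budget, rng):
    """RLS sketch on Z^n with a +/-1 step operator.

    Starts at the origin, perturbs one uniformly chosen coordinate by
    +1 or -1 per step, and accepts iff the L1 distance to the optimum
    a does not increase.  Returns the final point and the steps used.
    """
    n = len(a)
    x = [0] * n
    dist = sum(abs(xi - ai) for xi, ai in zip(x, a))
    steps = 0
    while dist > 0 and steps < budget:
        i = rng.randrange(n)
        delta = rng.choice((-1, 1))
        old = abs(x[i] - a[i])
        new = abs(x[i] + delta - a[i])
        if new <= old:              # accept non-worsening moves
            x[i] += delta
            dist += new - old
        steps += 1
    return x, steps

rng = random.Random(2)
a = [5, -3, 10, 0, -7]              # a hypothetical optimum in Z^5
x, steps = rls_pm1(a, budget=100_000, rng=rng)
print(x == a, steps)
```

    Consistent with the linear dependence on $|a|_{\infty}$ stated above, the coordinate with the largest $|a_i|$ dominates the run time of this $\pm 1$ variant.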