
    Upper Bounds on the Runtime of the Univariate Marginal Distribution Algorithm on OneMax

    A runtime analysis of the Univariate Marginal Distribution Algorithm (UMDA) is presented on the OneMax function for wide ranges of its parameters μ and λ. If μ ≥ c log n for some constant c > 0 and λ = (1+Θ(1))μ, a general bound O(μn) on the expected runtime is obtained. This bound crucially assumes that all marginal probabilities of the algorithm are confined to the interval [1/n, 1−1/n]. If μ ≥ c′√n log n for a constant c′ > 0 and λ = (1+Θ(1))μ, the behavior of the algorithm changes and the bound on the expected runtime becomes O(μ√n), which typically even holds if the borders on the marginal probabilities are omitted. The results supplement the recently derived lower bound Ω(μ√n + n log n) by Krejca and Witt (FOGA 2017) and turn out to be tight for the two very different values μ = c log n and μ = c′√n log n. They also improve the previously best known upper bound O(n log n log log n) by Dang and Lehre (GECCO 2015). Comment: Version 4: added illustrations and experiments; improved presentation in Section 2.2; to appear in Algorithmica; the final publication is available at Springer via http://dx.doi.org/10.1007/s00453-018-0463-
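    As a concrete illustration, here is a minimal Python sketch of the UMDA on OneMax with the borders [1/n, 1−1/n] described above; the function names, parameter defaults, and stopping rule are illustrative assumptions, not details taken from the paper.

```python
import random

def onemax(x):
    """Fitness of a bit string: the number of one-bits."""
    return sum(x)

def umda_onemax(n, mu, lam, max_evals=10**6):
    p = [0.5] * n  # marginal probabilities, initially uniform
    evals = 0
    while evals < max_evals:
        # Sample lambda offspring independently from the product distribution.
        pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        evals += lam
        pop.sort(key=onemax, reverse=True)
        if onemax(pop[0]) == n:
            return evals  # global optimum (all ones) found
        # Re-estimate each marginal from the mu fittest offspring and
        # clamp it to the borders [1/n, 1 - 1/n].
        for i in range(n):
            freq = sum(ind[i] for ind in pop[:mu]) / mu
            p[i] = min(max(freq, 1 / n), 1 - 1 / n)
    return evals

# Example run: n = 100 with mu on the order of sqrt(n) log n
# and lambda = (1 + Theta(1)) * mu.
# print(umda_onemax(100, 50, 100))
```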

    On the Runtime of Randomized Local Search and Simple Evolutionary Algorithms for Dynamic Makespan Scheduling

    Evolutionary algorithms have been frequently used for dynamic optimization problems. With this paper, we contribute to the theoretical understanding of this research area. We present the first computational complexity analysis of evolutionary algorithms for a dynamic variant of a classical combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary which is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very effective in dynamically tracking changes made to the problem instance. Comment: Conference version appears at IJCAI 201
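    The following is a rough Python sketch of randomized local search tracking a two-machine makespan instance that changes at regular intervals, in the spirit of the setting described above; the uniform random change model, the two-machine restriction, and all names (e.g. rls_dynamic, change_every) are assumptions made for illustration.

```python
import random

def makespan(times, assign):
    """Makespan on two machines: the larger of the two machine loads."""
    load1 = sum(t for t, m in zip(times, assign) if m == 1)
    return max(load1, sum(times) - load1)

def rls_dynamic(times, steps, change_every=100):
    n = len(times)
    assign = [random.randrange(2) for _ in range(n)]
    for step in range(1, steps + 1):
        if step % change_every == 0:
            # Dynamic change: one job's processing time is altered at
            # regular intervals (uniformly at random in this sketch).
            times[random.randrange(n)] = random.randint(1, 100)
        # RLS move: move one random job to the other machine and accept
        # the move only if the makespan does not get worse.
        j = random.randrange(n)
        before = makespan(times, assign)
        assign[j] ^= 1
        if makespan(times, assign) > before:
            assign[j] ^= 1  # reject: revert the move
    return assign, makespan(times, assign)

# Example: 50 jobs with random processing times.
# jobs = [random.randint(1, 100) for _ in range(50)]
# print(rls_dynamic(jobs, 10_000))
```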

    Runtime Analysis for Self-adaptive Mutation Rates

    We propose and analyze a self-adaptive version of the (1,λ) evolutionary algorithm in which the current mutation rate is part of the individual and thus also subject to mutation. A rigorous runtime analysis on the OneMax benchmark function reveals that a simple local mutation scheme for the rate leads to an expected optimization time (number of fitness evaluations) of O(nλ/log λ + n log n) when λ is at least C ln n for some constant C > 0. For all values of λ ≥ C ln n, this performance is asymptotically best possible among all λ-parallel mutation-based unbiased black-box algorithms. Our result shows that self-adaptation in evolutionary computation can find complex optimal parameter settings on the fly. At the same time, it proves that a relatively complicated self-adjusting scheme for the mutation rate proposed by Doerr, Gießen, Witt, and Yang (GECCO 2017) can be replaced by our simple endogenous scheme. On the technical side, the paper contributes new tools for the analysis of two-dimensional drift processes arising in the analysis of dynamic parameter choices in EAs, including bounds on occupation probabilities in processes with non-constant drift.
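    A minimal Python sketch of such a self-adaptive (1,λ) EA on OneMax follows; doubling or halving the inherited rate with equal probability is one plausible instance of a simple local rate mutation, and the concrete constants and clamping interval are illustrative assumptions.

```python
import random

def onemax(x):
    return sum(x)

def self_adaptive_ea(n, lam, max_evals=10**6):
    parent = [random.randrange(2) for _ in range(n)]
    rate = 2.0  # the mutation probability actually used is rate / n
    evals = 0
    while evals < max_evals:
        offspring = []
        for _ in range(lam):
            # Local mutation of the rate: double or halve it with equal
            # probability, then keep it inside a sane interval.
            r = rate * 2 if random.random() < 0.5 else rate / 2
            r = min(max(r, 2.0), n / 4)
            # Standard bit mutation with probability r / n per bit.
            child = [b ^ (random.random() < r / n) for b in parent]
            offspring.append((onemax(child), r, child))
        evals += lam
        # Comma selection: the best offspring replaces the parent
        # unconditionally and carries its own rate forward.
        best_fitness, rate, parent = max(offspring, key=lambda t: t[0])
        if best_fitness == n:
            return evals
    return evals

# Example: lambda of order ln n, per the condition in the abstract.
# print(self_adaptive_ea(200, 16))
```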

    Theory of Randomized Search Heuristics in Combinatorial Optimization

    Optimizing Linear Functions with Randomized Search Heuristics - The Robustness of Mutation

    The analysis of randomized search heuristics on classes of functions is fundamental for the understanding of the underlying stochastic process and the development of suitable proof techniques. Recently, remarkable progress has been made in bounding the expected optimization time of the simple (1+1) EA on the class of linear functions. We improve the best known bound in this setting from (1.39+o(1))(en ln n) to (en ln n) + O(n) in expectation and with high probability, which is tight up to lower-order terms. Moreover, upper and lower bounds for arbitrary mutation probabilities p are derived, which imply expected polynomial optimization time as long as p = O((ln n)/n) and which are tight if p = c/n for a constant c. As a consequence, the standard mutation probability p = 1/n is optimal for all linear functions, and the (1+1) EA is found to be an optimal mutation-based algorithm. Furthermore, the algorithm turns out to be surprisingly robust since the large neighborhood explored by the mutation operator does not disrupt the search.
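    As an illustration, here is a minimal Python sketch of the (1+1) EA with standard bit mutation at the optimal rate p = 1/n on a linear function with positive weights (for which the all-ones string is the unique optimum); the example weights and evaluation budget are assumptions.

```python
import random

def one_plus_one_ea(weights, max_evals=10**6):
    n = len(weights)

    def f(x):
        # Linear function: sum of the weights of the one-bits.
        return sum(w for w, b in zip(weights, x) if b)

    x = [random.randrange(2) for _ in range(n)]
    p = 1 / n  # the standard mutation probability, optimal per the result above
    evals = 0
    while not all(x) and evals < max_evals:
        # Standard bit mutation: flip each bit independently with probability p.
        y = [b ^ (random.random() < p) for b in x]
        evals += 1
        # Elitist (plus) selection: keep the offspring if it is no worse.
        if f(y) >= f(x):
            x = y
    return evals

# Example: BinVal-like weights 1, 2, ..., n.
# print(one_plus_one_ea(list(range(1, 101))))
```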

    Tight Bounds on the Optimization Time of a Randomized Search Heuristic on Linear Functions

    The analysis of randomized search heuristics on classes of functions is fundamental to the understanding of the underlying stochastic process and the development of suitable proof techniques. Recently, remarkable progress has been made in bounding the expected optimization time of a simple evolutionary algorithm, called (1+1) EA, on the class of linear functions. We improve the previously best known bound in this setting from (1.39+o(1))en ln n to en ln n + O(n) in expectation and with high probability, which is tight up to lower-order terms. Moreover, upper and lower bounds for arbitrary mutation probabilities p are derived, which imply expected polynomial optimization time as long as p = O((ln n)/n) and p = Ω(n^(−C)) for a constant C > 0, and which are tight if p = c/n for a constant c > 0. As a consequence, the standard mutation probability p = 1/n is optimal for all linear functions, and the (1+1) EA is found to be an optimal mutation-based algorithm. Furthermore, the algorithm turns out to be surprisingly robust since the large neighbourhood explored by the mutation operator does not disrupt the search.

    Guest Editorial: Theory of Evolutionary Computation
