
    Upper Bounds on the Runtime of the Univariate Marginal Distribution Algorithm on OneMax

    A runtime analysis of the Univariate Marginal Distribution Algorithm (UMDA) is presented on the OneMax function for wide ranges of its parameters $\mu$ and $\lambda$. If $\mu \ge c\log n$ for some constant $c > 0$ and $\lambda = (1+\Theta(1))\mu$, a general bound $O(\mu n)$ on the expected runtime is obtained. This bound crucially assumes that all marginal probabilities of the algorithm are confined to the interval $[1/n, 1-1/n]$. If $\mu \ge c'\sqrt{n}\log n$ for a constant $c' > 0$ and $\lambda = (1+\Theta(1))\mu$, the behavior of the algorithm changes and the bound on the expected runtime becomes $O(\mu\sqrt{n})$, which typically even holds if the borders on the marginal probabilities are omitted. The results supplement the recently derived lower bound $\Omega(\mu\sqrt{n} + n\log n)$ by Krejca and Witt (FOGA 2017) and turn out to be tight for the two very different values $\mu = c\log n$ and $\mu = c'\sqrt{n}\log n$. They also improve the previously best known upper bound $O(n\log n\log\log n)$ by Dang and Lehre (GECCO 2015).

    Comment: Version 4: added illustrations and experiments; improved presentation in Section 2.2; to appear in Algorithmica; the final publication is available at Springer via http://dx.doi.org/10.1007/s00453-018-0463-
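    For context, below is a minimal Python sketch of UMDA on OneMax with truncation selection and the borders $[1/n, 1-1/n]$ discussed above. The parameter names n, mu and lam and the simple stopping rule are illustrative assumptions, not the exact implementation analysed in the paper.

```python
import random

def onemax(x):
    """OneMax fitness: the number of 1-bits in x."""
    return sum(x)

def umda_onemax(n, mu, lam, max_gens=10_000):
    """Sketch of UMDA on OneMax with marginals kept in [1/n, 1 - 1/n]."""
    p = [0.5] * n  # one marginal probability per bit position
    for gen in range(max_gens):
        # Sample lam individuals; bit i is 1 with probability p[i].
        pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=onemax, reverse=True)
        if onemax(pop[0]) == n:
            return gen  # optimum sampled
        selected = pop[:mu]  # truncation selection of the mu fittest
        for i in range(n):
            freq = sum(x[i] for x in selected) / mu
            # Borders: confine each marginal to [1/n, 1 - 1/n] so that
            # no bit's probability can fix permanently at 0 or 1.
            p[i] = min(max(freq, 1.0 / n), 1.0 - 1.0 / n)
    return max_gens
```

    The clamping step in the last loop is the border restriction that the $O(\mu n)$ bound relies on; dropping it allows marginals to reach 0 or 1, which is the regime where the abstract notes the $O(\mu\sqrt{n})$ bound typically still holds for large enough $\mu$.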

    Analysis of Evolutionary Algorithms in Dynamic and Stochastic Environments

    Many real-world optimization problems occur in environments that change dynamically or involve stochastic components. Evolutionary algorithms and other bio-inspired algorithms have been widely applied to dynamic and stochastic problems. This survey gives an overview of major theoretical developments in the area of runtime analysis for these problems. We review recent theoretical studies of evolutionary algorithms and ant colony optimization for problems where the objective functions or the constraints change over time. Furthermore, we consider stochastic problems under various noise models and point out some directions for future research.

    Comment: This book chapter is to appear in the book "Theory of Randomized Search Heuristics in Discrete Search Spaces", which is edited by Benjamin Doerr and Frank Neumann and is scheduled to be published by Springer in 201
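    To make the noise models concrete, here is a small Python sketch of two settings commonly studied in this line of work: prior noise, which perturbs the solution before evaluation, and posterior noise, which perturbs the fitness value after evaluation. The function names and parameters are illustrative, not taken from the survey.

```python
import random

def onemax(x):
    """Noise-free fitness: the number of 1-bits in x."""
    return sum(x)

def prior_noise_onemax(x, q):
    """Prior noise: each bit flips with probability q before evaluation."""
    noisy = [b ^ 1 if random.random() < q else b for b in x]
    return onemax(noisy)

def posterior_noise_onemax(x, sigma):
    """Posterior noise: Gaussian noise is added to the true fitness."""
    return onemax(x) + random.gauss(0.0, sigma)
```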

    Noisy combinatorial optimisation with evolutionary algorithms

    This research focuses on determining which evolutionary optimisation approaches are efficient at solving noisy combinatorial problems. Initially, we present an empirical study of a range of evolutionary algorithms applied to various noisy combinatorial optimisation problems, organised as four sets of experiments. The first looks at several toy problems, such as OneMax and other linear problems. We find that the Univariate Marginal Distribution Algorithm (UMDA) and the Paired-Crossover Evolutionary Algorithm (PCEA) are the only ones able to cope robustly with noise within a reasonable fixed time budget. In the second stage, UMDA and PCEA are tested on more complex noisy problems: SubsetSum, Knapsack and SetCover. Both perform well under increasing levels of noise, with UMDA being the better of the two. In the third stage, we consider two noisy multi-objective problems (CountingOnesCountingZeros and a multi-objective formulation of SetCover). We compare several adaptations of UMDA for multi-objective problems with the Simple Evolutionary Multi-objective Optimiser (SEMO) and NSGA-II. In the last stage of the empirical analysis, we consider a realistic path-planning problem for ground surveillance with Unmanned Aerial Vehicles. We conclude that UMDA and its variants can be highly effective on a variety of noisy combinatorial optimisation problems, outperforming many other evolutionary algorithms.

    Next, we study the use of voting mechanisms in populations, and introduce a new Voting algorithm which can solve OneMax and Jump in $O(n \log n)$, even for gaps as large as $O(n)$. More significantly, the algorithm solves OneMax with added posterior noise in $O(n \log n)$ when the variance of the noise distribution is $\sigma^2 = O(n)$, and in $O(\sigma^2 \log n)$ when the noise variance is greater than this. We assume only that the noise distribution has finite mean and variance and (for the larger noise case) that it is unimodal. Building upon this promising performance, we consider other noise models prevalent in optimisation and learning and show that the Voting algorithm performs efficiently on OneMax in the presence of these noise variants. We also examine the performance on arbitrary linear and monotonic functions. The Voting algorithm fails on LeadingOnes, but we give a variant that can solve the problem in $O(n \log n)$. We empirically study the use of voting in population-based algorithms (UMDA, PCEA and cGA) and show that this can be effective for large population sizes.
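    A hedged Python sketch of the voting idea on OneMax follows: sample uniform random bitstrings, keep the fitter half, and set each output bit by a per-position majority vote among the kept strings. This is one plausible reading of the mechanism described above; the selection rule, sample size and tie-breaking used in the thesis may differ.

```python
import random

def onemax(x):
    """OneMax fitness: the number of 1-bits in x."""
    return sum(x)

def voting_sketch(n, samples):
    """Voting-style heuristic for OneMax (illustrative, not the thesis version).

    Positions correlated with high fitness accumulate more 1-votes among the
    fitter half of the sample, so the per-bit majority vote denoises each bit.
    """
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(samples)]
    pop.sort(key=onemax, reverse=True)
    kept = pop[: samples // 2]  # keep the above-median half
    # Majority vote per position; ties broken toward 1 (arbitrary choice).
    return [1 if 2 * sum(x[i] for x in kept) >= len(kept) else 0
            for i in range(n)]
```

    Under posterior noise, the sort would use noisy fitness evaluations instead of onemax; the appeal of voting is that the per-position majority remains informative even when individual comparisons are unreliable.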