
    Bounding Bloat in Genetic Programming

    While many optimization problems work with a fixed number of decision variables and thus a fixed-length representation of possible solutions, genetic programming (GP) works on variable-length representations. A naturally occurring problem is that of bloat (unnecessary growth of solutions) slowing down optimization. Theoretical analyses could so far not bound bloat and required explicit assumptions on the magnitude of bloat. In this paper we analyze bloat in mutation-based genetic programming for the two test functions ORDER and MAJORITY. We overcome previous assumptions on the magnitude of bloat and give matching or close-to-matching upper and lower bounds for the expected optimization time. In particular, we show that the (1+1) GP takes (i) $\Theta(T_{init} + n \log n)$ iterations with bloat control on ORDER as well as MAJORITY; and (ii) $O(T_{init} \log T_{init} + n (\log n)^3)$ and $\Omega(T_{init} + n \log n)$ (and $\Omega(T_{init} \log T_{init})$ for $n=1$) iterations without bloat control on MAJORITY.
    Comment: An extended abstract has been published at GECCO 201
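
    The abstract does not restate the ORDER/MAJORITY benchmarks, so the following is a minimal, illustrative sketch of a (1+1) GP on MAJORITY rather than the paper's exact setup: it assumes a flat list of signed literals instead of the usual tree representation, an HVL-style insert/delete/substitute mutation, and no bloat control; the names majority_fitness and one_plus_one_gp are invented for this sketch.

    import random

    def majority_fitness(program, n):
        """MAJORITY: variable i counts as expressed if the positive literal
        x_i appears at least once and at least as often as its negation."""
        pos = [0] * (n + 1)
        neg = [0] * (n + 1)
        for lit in program:
            if lit > 0:
                pos[lit] += 1
            else:
                neg[-lit] += 1
        return sum(1 for i in range(1, n + 1) if pos[i] >= max(neg[i], 1))

    def mutate(program, n):
        """HVL-style mutation: insert, delete, or substitute one random literal."""
        prog = list(program)
        op = random.choice(("insert", "delete", "substitute"))
        lit = random.choice((1, -1)) * random.randint(1, n)
        if op == "insert" or not prog:
            prog.insert(random.randint(0, len(prog)), lit)
        elif op == "delete":
            del prog[random.randrange(len(prog))]
        else:
            prog[random.randrange(len(prog))] = lit
        return prog

    def one_plus_one_gp(n, init_size, max_iters=100_000):
        """(1+1) GP without bloat control: keep the offspring if not worse."""
        parent = [random.choice((1, -1)) * random.randint(1, n) for _ in range(init_size)]
        best = majority_fitness(parent, n)
        for t in range(1, max_iters + 1):
            child = mutate(parent, n)
            f = majority_fitness(child, n)
            if f >= best:
                parent, best = child, f
            if best == n:
                return t, len(parent)
        return max_iters, len(parent)

    if __name__ == "__main__":
        iters, final_size = one_plus_one_gp(n=20, init_size=40)
        print(f"solved MAJORITY after {iters} iterations, final program size {final_size}")

    In this toy version, bloat shows up as the final program size growing well beyond the n literals that would suffice, since equally fit but longer offspring are always accepted.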

    Exponential inequalities for unbounded functions of geometrically ergodic Markov chains. Applications to quantitative error bounds for regenerative Metropolis algorithms

    The aim of this note is to investigate the concentration properties of unbounded functions of geometrically ergodic Markov chains. We derive concentration properties of centered functions with respect to the square of the Lyapunov function appearing in the drift condition satisfied by the Markov chain. We apply the new exponential inequalities to derive confidence intervals for MCMC algorithms. Quantitative error bounds are provided for the regenerative Metropolis algorithm of [5].
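
    For context on the "Lyapunov function in the drift condition" mentioned above, the following is the standard geometric drift condition as commonly stated for geometrically ergodic chains; the note's exact assumptions and constants may differ, and the reading of the function class in the last comment is an assumption, not a quote from the paper.

    % Standard geometric drift condition for a Markov kernel P with
    % Lyapunov function V >= 1, small set C, and constants
    % 0 < \lambda < 1, b < \infty (the paper's precise variant may differ):
    \[
      PV(x) \;=\; \int V(y)\, P(x, \mathrm{d}y)
      \;\le\; \lambda\, V(x) + b\, \mathbf{1}_C(x)
      \qquad \text{for all } x .
    \]
    % One natural reading of the abstract: concentration is derived for
    % functions f centered under the stationary distribution and dominated
    % by V^2, e.g. |f(x) - \pi(f)| \le c\, V(x)^2.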

    Intuitive Analyses via Drift Theory

    Humans are bad with probabilities, and the analysis of randomized algorithms offers many pitfalls for the human mind. Drift theory is an intuitive tool for reasoning about random processes: it allows turning expected stepwise changes into expected first-hitting times. While drift theory is used extensively by the community studying randomized search heuristics, it has seen hardly any applications outside of this field, despite the many research questions that can be formulated as first-hitting times. We state the most useful drift theorems and demonstrate their use for various randomized processes, including approximating vertex cover, the coupon collector process, a random sorting algorithm, and the Moran process. Finally, we consider processes without expected stepwise change and give a lemma based on drift theory applicable in such scenarios without drift. We use this tool for the analysis of the gambler's ruin process, for a coloring algorithm, for an algorithm for 2-SAT, and for a version of the Moran process without bias.
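
    To give a flavor of the kind of argument the abstract describes, here is the textbook coupon collector bound via multiplicative drift; this is the standard derivation, not necessarily the one used in the paper.

    % Coupon collector via multiplicative drift (standard textbook argument).
    % Let X_t be the number of coupon types still missing after t draws
    % from n types. A draw hits a missing type with probability X_t / n, so
    \[
      \mathrm{E}\bigl[X_t - X_{t+1} \mid X_t\bigr] \;=\; \frac{X_t}{n}
      \;=\; \delta \, X_t \qquad \text{with } \delta = \tfrac{1}{n}.
    \]
    % The multiplicative drift theorem (with X_0 = n and minimum positive
    % value 1) then bounds the expected time T to collect all coupons by
    \[
      \mathrm{E}[T] \;\le\; \frac{1 + \ln X_0}{\delta} \;=\; n\,(1 + \ln n).
    \]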

    First-Hitting Times Under Additive Drift

    For the last ten years, almost every theoretical result concerning the expected run time of a randomized search heuristic has used drift theory, making it arguably the most important tool in this domain. Its success is due to its ease of use and its powerful result: drift theory allows the user to derive bounds on the expected first-hitting time of a random process by bounding the expected local changes of the process -- the drift. This is usually far easier than bounding the expected first-hitting time directly. Due to the widespread use of drift theory, it is of utmost importance to have the best drift theorems possible. We improve the fundamental additive, multiplicative, and variable drift theorems by stating them in a form as general as possible and providing examples of why the restrictions we keep are still necessary. Our additive drift theorem for upper bounds only requires the process to be nonnegative; that is, we remove unnecessary restrictions like a finite, discrete, or bounded search space. As corollaries, the same is true for our upper bounds in the case of variable and multiplicative drift.
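
    For reference, the classical additive drift upper bound that the abstract generalizes has the following shape, stated here informally; the paper's version relaxes the assumptions further, requiring essentially only nonnegativity of the process.

    % Classical additive drift theorem (upper bound), informal statement.
    % Let (X_t) be a nonnegative process and T = \min\{t : X_t = 0\}.
    % If, for some \delta > 0 and all t < T,
    \[
      \mathrm{E}\bigl[X_t - X_{t+1} \mid X_0, \dots, X_t\bigr] \;\ge\; \delta ,
    \]
    % then the expected first-hitting time satisfies
    \[
      \mathrm{E}[T \mid X_0] \;\le\; \frac{X_0}{\delta}.
    \]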