    General lower bounds for randomized direct search with isotropic sampling

    The focus is on a certain class of randomized direct-search methods for optimization in (high-dimensional) Euclidean space, namely for minimization of a function f: R^n -> R, where f is given by an oracle, i.e. a black box for f-evaluations. The iterative methods under consideration generate a sequence of candidate solutions; potential candidate solutions are generated by adding an isotropically distributed vector to the current candidate solution (possibly several times, one of these samples then being chosen to become the next candidate solution in the sequence). This class of randomized direct-search methods covers, in particular, several evolutionary algorithms. Lower bounds are proved on the number of samples (i.e. queries to the f-oracle) necessary for such a method to reduce the approximation error in the search space. The lower bounds presented do not only hold in expectation; runtimes below these bounds are observed only with a probability that is exponentially small in the search-space dimension n. To derive such strong bounds, an appealingly simple but nevertheless powerful method is applied: the guided/directed random search is regarded as a selected fragment of a purely/obliviously random search. Interestingly, the lower bounds so obtained turn out to be tight (up to an absolute constant).
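
    The class just described is easy to make concrete. Below is a minimal Python sketch, assuming an elitist acceptance rule, a fixed step length, and best-of-several selection; these are illustrative choices for demonstration only, since the lower bounds above hold for the whole class regardless of such details.

    import numpy as np

    def isotropic_sample(n, step_length, rng):
        """Draw a spherically symmetric (isotropic) vector: a uniform
        direction on the unit sphere, scaled by step_length."""
        v = rng.standard_normal(n)
        return step_length * v / np.linalg.norm(v)

    def direct_search(f, x, steps, samples_per_step, step_length, rng):
        """Randomized direct search: each iteration adds isotropic vectors
        to the current point and keeps the best sample if it improves
        (elitist acceptance is an illustrative choice)."""
        for _ in range(steps):
            candidates = [x + isotropic_sample(len(x), step_length, rng)
                          for _ in range(samples_per_step)]
            best = min(candidates, key=f)
            if f(best) <= f(x):
                x = best
        return x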

    Analysis of a Simple Evolutionary Algorithm for Minimization in Euclidean Spaces

    Although evolutionary algorithms (EAs) are widely used in practical optimization, their theoretical analysis is still in its infancy. Up to now, results on expected runtimes and success probabilities have been limited to discrete search spaces. In practice, however, EAs are mostly used for continuous optimization problems. First results on the expected runtime of a simple but fundamental EA minimizing a symmetric polynomial of degree two in R^n are presented. Namely, the so-called (1+1) evolution strategy ((1+1) ES) minimizing the SPHERE function is investigated. A lower bound on the expected runtime is shown that is valid for any…
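
    For concreteness, here is a minimal Python sketch of the (1+1) ES on the SPHERE function; the fixed mutation strength sigma, the starting point, and the step budget are illustrative assumptions, as the abstract does not fix an adaptation rule.

    import numpy as np

    def sphere(x):
        """SPHERE: f(x) = sum_i x_i^2, a symmetric polynomial of degree two."""
        return float(np.dot(x, x))

    def one_plus_one_es(n, steps, sigma, rng):
        """(1+1) ES: one parent, one isotropically mutated offspring per step;
        the offspring replaces the parent only if it is at least as good.
        A fixed mutation strength sigma is an illustrative simplification."""
        x = rng.standard_normal(n)                    # arbitrary starting point
        for _ in range(steps):
            y = x + sigma * rng.standard_normal(n)    # isotropic Gaussian mutation
            if sphere(y) <= sphere(x):                # elitist (plus) selection
                x = y
        return x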

    Probabilistic analysis of evolution strategies using isotropic mutations

    This dissertation deals with optimization in high-dimensional Euclidean space. Namely, a particular type of direct-search method known as Evolution Strategies (ESs) is investigated. Evolution Strategies mimic natural evolution, in particular mutation, in order to "evolve" an approximate solution. As this dissertation focuses on the theoretical investigation of ESs in the way randomized approximation algorithms are analyzed in theoretical computer science (rather than by means of convergence theory or dynamical-system theory), very basic and simple ESs are considered. Namely, the only search operator applied is the so-called isotropic mutation: a new candidate solution is obtained by adding to the current candidate solution a random vector whose distribution is spherically symmetric. General lower bounds are proved on the number of steps/isotropic mutations necessary to reduce the approximation error in the search space, where the focus is on how the number of optimization steps depends on (and scales with) the dimensionality of the search space. These lower bounds hold independently of the function to be optimized and for large classes of ESs. Moreover, for several concrete optimization scenarios in which certain ESs optimize a unimodal function, upper bounds on the number of optimization steps are proved.
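
    A minimal sketch of an isotropic mutation follows, assuming the usual decomposition into a uniformly distributed direction (a normalized standard Gaussian vector) and an independent random length; the length distribution is a free parameter of the operator.

    import numpy as np

    def isotropic_mutation(x, length_dist, rng):
        """Isotropic mutation: the mutation vector's direction is uniform on
        the unit sphere and its length is drawn from length_dist, so the
        overall distribution is spherically symmetric."""
        d = rng.standard_normal(len(x))
        d /= np.linalg.norm(d)              # uniform unit direction
        return x + length_dist(rng) * d     # scale by a random length

    # Example: unit-length isotropic mutations in R^100 (illustrative choice)
    rng = np.random.default_rng(0)
    y = isotropic_mutation(np.zeros(100), lambda r: 1.0, rng)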

    When the plus strategy performs better than the comma strategy - and when not

    Occasionally there have been long debates on whether to use elitist selection or not. In the present paper, the simple (1,lambda) EA and (1+lambda) EA operating on {0,1}^n are compared by means of a rigorous runtime analysis. It turns out that only values of lambda that are logarithmic in n are interesting. An illustrative function is presented for which newly developed proof methods show that the (1,lambda) EA, where lambda is logarithmic in n, outperforms the (1+lambda) EA for any lambda. For smaller offspring populations the (1,lambda) EA is inefficient on every function with a unique optimum, whereas for larger lambda the two randomized search heuristics behave almost equivalently.
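
    The two selection schemes differ in a single line, as the following Python sketch shows; OneMax and the parameter values are illustrative stand-ins, not the paper's example function.

    import numpy as np

    def mutate(x, rng):
        """Standard bit mutation: flip each bit independently with prob. 1/n."""
        flips = rng.random(len(x)) < 1.0 / len(x)
        return np.where(flips, 1 - x, x)

    def evolve(f, n, lam, steps, plus, rng):
        """(1+lambda) EA if plus is True, (1,lambda) EA otherwise, maximizing
        f on {0,1}^n with lam offspring per generation."""
        x = rng.integers(0, 2, n)
        for _ in range(steps):
            offspring = [mutate(x, rng) for _ in range(lam)]
            best = max(offspring, key=f)
            if plus:
                x = best if f(best) >= f(x) else x   # elitist: parent may survive
            else:
                x = best                             # comma: parent always replaced
        return x

    rng = np.random.default_rng(1)
    onemax = lambda x: int(x.sum())                  # illustrative fitness
    result = evolve(onemax, n=50, lam=5, steps=2000, plus=True, rng=rng)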