    SACOBRA with Online Whitening for Solving Optimization Problems with High Conditioning

    Real-world optimization problems often have objective functions that are expensive in terms of cost and time. It is desirable to find near-optimal solutions with very few function evaluations. Surrogate-assisted optimizers tend to reduce the required number of function evaluations by replacing the real function with an efficient mathematical model built on a few evaluated points. Problems with a high condition number are a challenge for many surrogate-assisted optimizers, including SACOBRA. To address such problems we propose a new online whitening method operating in the black-box optimization paradigm. We show on a set of high-conditioning functions that online whitening tackles SACOBRA's early stagnation issue and reduces the optimization error by a factor between 10 and 1e12 compared to plain SACOBRA, though it imposes many extra function evaluations. For very high numbers of function evaluations, covariance matrix adaptation evolution strategy (CMA-ES) achieves even lower errors, whereas SACOBRA performs better in the expensive setting (fewer than 1e3 function evaluations). If we count all parallelizable function evaluations (population evaluation in CMA-ES, online whitening in our approach) as one iteration, then both algorithms have comparable strength even in the long run. This holds for problems with dimension D.
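    To illustrate the whitening idea in isolation, here is a minimal sketch, not the paper's SACOBRA integration: estimate the objective's Hessian near the current best point and optimize in a transformed space whose conditioning is close to 1. The helper names (estimate_hessian, whiten) and the finite-difference scheme are our own illustrative choices.

    # Sketch of online whitening for a high-conditioning black-box objective.
    # Assumption: the Hessian is estimated by finite differences, which costs
    # O(D^2) extra evaluations (the "many extra function evaluations" above)
    # but is embarrassingly parallel.
    import numpy as np

    def estimate_hessian(f, x0, eps=1e-4):
        """Finite-difference Hessian estimate of f at x0."""
        d = len(x0)
        H = np.zeros((d, d))
        f0 = f(x0)
        for i in range(d):
            for j in range(i, d):
                ei, ej = np.eye(d)[i] * eps, np.eye(d)[j] * eps
                H[i, j] = H[j, i] = (
                    f(x0 + ei + ej) - f(x0 + ei) - f(x0 + ej) + f0
                ) / eps**2
        return H

    def whiten(f, x0, hess):
        """Return g(z) = f(x0 + M z) with M = H^(-1/2), so that the Hessian of
        g is approximately the identity (non-positive eigenvalues are clipped
        to keep M well defined)."""
        w, V = np.linalg.eigh(hess)
        w = np.clip(w, 1e-8, None)
        M = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
        return lambda z: f(x0 + M @ z), M

    # Toy objective with condition number 1e6.
    f = lambda x: x[0] ** 2 + 1e6 * x[1] ** 2
    x0 = np.array([1.0, 1.0])
    g, M = whiten(f, x0, estimate_hessian(f, x0))
    # g is approximately isotropic around z = 0 and far easier for a surrogate
    # model to fit; the outer optimizer searches in z-space and maps candidates
    # back via x = x0 + M z.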

    Parallel Evolutionary Algorithms Performing Pairwise Comparisons

    We study mathematically and experimentally the convergence rate of differential evolution and particle swarm optimization for simple unimodal functions. Due to parallelization concerns, the focus is on lower bounds on the runtime, i.e., upper bounds on the speed-up, as a function of the population size. Two cases are particularly relevant: a population size of the same order of magnitude as the dimension, and larger population sizes. We use the branching factor as a tool for proving bounds and obtain, as upper bounds, a linear speed-up for a population size similar to the dimension and a logarithmic speed-up for larger population sizes. We then propose parametrizations for differential evolution and particle swarm optimization that reach these bounds.
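    The following is a minimal differential-evolution sketch of the pairwise (one-to-one) comparison structure studied here, assuming a simple sphere test function and standard DE parameters of our own choosing (F, CR), not the paper's experimental setup. The point is that each generation evaluates the whole population in one parallelizable batch and compares each trial point only against its own target, so the runtime measure relevant to the speed-up bounds is the number of generations rather than raw function evaluations.

    # Differential evolution with pairwise (one-to-one) survivor selection.
    # Illustrative sketch only; parameters and test function are assumptions.
    import numpy as np

    def de_pairwise(f, dim, lambda_, generations, F=0.5, CR=0.9, seed=0):
        rng = np.random.default_rng(seed)
        pop = rng.uniform(-5.0, 5.0, size=(lambda_, dim))
        fit = np.array([f(x) for x in pop])              # one parallelizable batch
        for _ in range(generations):
            trials = np.empty_like(pop)
            for i in range(lambda_):
                # rand/1 mutation from three distinct other members
                idx = rng.choice([j for j in range(lambda_) if j != i], 3, replace=False)
                a, b, c = pop[idx]
                mutant = a + F * (b - c)
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True          # keep at least one mutant gene
                trials[i] = np.where(cross, mutant, pop[i])
            trial_fit = np.array([f(x) for x in trials]) # one parallelizable batch
            better = trial_fit < fit                     # pairwise comparisons only
            pop[better] = trials[better]
            fit[better] = trial_fit[better]
        return pop[np.argmin(fit)], float(fit.min())

    # Population size of the order of the dimension: the regime where the
    # linear speed-up bound applies.
    sphere = lambda x: float(np.dot(x, x))               # simple unimodal function
    best_x, best_f = de_pairwise(sphere, dim=10, lambda_=10, generations=200)
    print(best_f)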