    A Study on Multimemetic Estimation of Distribution Algorithms

    PPSN 2014, LNCS 8672, pp. 322-331

    Multimemetic algorithms (MMAs) are memetic algorithms in which memes (interpreted as non-genetic expressions of problem-solving strategies) are explicitly represented and evolved alongside genotypes. This process is commonly approached using the standard genetic procedures of recombination and mutation to directly manipulate information at the memetic level. We consider an alternative approach based on the use of estimation of distribution algorithms (EDAs) to carry out this self-adaptive memetic optimization process. We study the application of different EDAs to this end and provide an extensive experimental evaluation. It is shown that elitism is essential to achieve top performance, and that elitist versions of multimemetic EDAs using bivariate probabilistic models are capable of outperforming genetic MMAs.

    This work is partially supported by MICINN project ANYSELF (TIN2011-28627-C04-01), by Junta de Andalucía project DNEMESIS (P10-TIC-6083), and by Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.
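    To make the meme-level EDA idea concrete, here is a minimal Python sketch of one building block: a univariate frequency model estimated over the memes of selected individuals, with memes interpreted as local-search depths. All names (MEMES, local_search, meme_distribution) and the choice of meme alphabet are illustrative assumptions, not the paper's implementation, which also studies bivariate models and elitism.

        import random

        # Minimal sketch of the meme-level EDA step in a multimemetic
        # algorithm. Assumption: each meme is a local-search depth.
        MEMES = [1, 2, 4]  # hypothetical meme alphabet

        def local_search(x, depth, fitness):
            """Apply the meme: 'depth' accepted single-bit flips."""
            x = x[:]
            for _ in range(depth):
                y = x[:]
                i = random.randrange(len(y))
                y[i] ^= 1
                if fitness(y) >= fitness(x):
                    x = y
            return x

        def meme_distribution(selected):
            """Univariate EDA model over memes: frequency of each meme
            among selected (genotype, meme) pairs, lightly smoothed."""
            counts = {m: 1e-3 for m in MEMES}
            for _genotype, meme in selected:
                counts[meme] += 1
            total = sum(counts.values())
            return {m: c / total for m, c in counts.items()}

        def sample_meme(dist):
            """Draw a meme for an offspring from the estimated model."""
            r, acc = random.random(), 0.0
            for m, p in dist.items():
                acc += p
                if r <= acc:
                    return m
            return MEMES[-1]

    In a full MMA loop, offspring genotypes would be produced as usual, each offspring's meme would be drawn via sample_meme rather than inherited through recombination or mutation, and local_search would then refine the genotype according to its meme.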

    The Univariate Marginal Distribution Algorithm Copes Well With Deception and Epistasis

    In their recent work, Lehre and Nguyen (FOGA 2019) show that the univariate marginal distribution algorithm (UMDA) needs time exponential in the parent population size to optimize the DeceptiveLeadingBlocks (DLB) problem. They conclude from this result that univariate EDAs have difficulties with deception and epistasis. In this work, we show that this negative finding is caused by an unfortunate choice of the parameters of the UMDA. When the population sizes are chosen large enough to prevent genetic drift, the UMDA optimizes the DLB problem with high probability within at most $\lambda(\frac{n}{2} + 2e\ln n)$ fitness evaluations. Since an offspring population size $\lambda$ of order $n \log n$ suffices to prevent genetic drift, the UMDA can solve the DLB problem with $O(n^2 \log n)$ fitness evaluations. In contrast, for classic evolutionary algorithms no better runtime guarantee than $O(n^3)$ is known (which we prove to be tight for the $(1+1)$ EA), so our result rather suggests that the UMDA can cope well with deception and epistasis. From a broader perspective, our result shows that the UMDA can cope better with local optima than evolutionary algorithms; such a result was previously known only for the compact genetic algorithm. Together with the lower bound of Lehre and Nguyen, our result for the first time rigorously proves that running EDAs in the regime with genetic drift can lead to drastic performance losses.
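    As a rough illustration of the algorithm under discussion, the following Python sketch implements a UMDA with truncation selection and the usual borders $[1/n, 1 - 1/n]$ on the marginal frequencies, plus a 2-bit-block DLB scoring. The parameter names (lam, mu) and the exact DLB constants follow our reading of the literature, not the paper's code.

        import numpy as np

        def umda(fitness, n, lam, mu, generations, seed=None):
            """UMDA sketch: sample lam offspring from independent Bernoulli
            marginals, keep the mu fittest, and re-estimate each marginal
            as the frequency of ones, clipped to [1/n, 1 - 1/n]."""
            rng = np.random.default_rng(seed)
            p = np.full(n, 0.5)
            best, best_fit = None, -np.inf
            for _ in range(generations):
                pop = (rng.random((lam, n)) < p).astype(int)
                fits = np.array([fitness(x) for x in pop])
                elite = pop[np.argsort(fits)[-mu:]]          # truncation selection
                p = np.clip(elite.mean(axis=0), 1.0 / n, 1.0 - 1.0 / n)
                g = fits.argmax()
                if fits[g] > best_fit:
                    best_fit, best = fits[g], pop[g].copy()
            return best, best_fit

        def dlb(x):
            """DeceptiveLeadingBlocks over 2-bit blocks: each leading 11
            block scores 2; the first non-11 block scores 1 if it is 00
            (the deceptive attractor) and 0 otherwise. The exact constants
            are our assumption."""
            score = 0
            for i in range(0, len(x), 2):
                if x[i] == 1 and x[i + 1] == 1:
                    score += 2
                else:
                    score += 1 if x[i] == 0 and x[i + 1] == 0 else 0
                    break
            return score

        if __name__ == "__main__":
            n = 20
            x, f = umda(dlb, n=n, lam=12 * n, mu=3 * n, generations=200, seed=1)
            print(f, x)  # the optimum (all blocks 11) has fitness n

    The clipping step is the key hedge against genetic drift discussed in the abstract: without the borders, marginals can fixate at 0 or 1 on undecided bits, which is the regime in which Lehre and Nguyen's exponential lower bound applies.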