
    On the Robustness of Median Sampling in Noisy Evolutionary Optimization

    In real-world optimization tasks, the objective (i.e., fitness) function evaluation is often disturbed by noise due to a wide range of uncertainties. Evolutionary algorithms (EAs) have been widely applied to tackle noisy optimization, where reducing the negative effect of noise is a crucial issue. One popular strategy for coping with noise is sampling, which evaluates the fitness multiple times and uses the sample average to approximate the true fitness. In this paper, we introduce median sampling as a noise-handling strategy in EAs, which approximates the true fitness by the median of the multiple evaluations instead of the mean. We theoretically show that median sampling can reduce the expected running time of EAs from exponential to polynomial by considering the (1+1)-EA on OneMax under the commonly used one-bit noise. We also compare mean sampling with median sampling on two specific noise models, suggesting that when the 2-quantile (i.e., the median) of the noisy fitness increases with the true fitness, median sampling can be a better choice. The results provide some guidance for employing median sampling efficiently in practice.
    Comment: 19 pages. arXiv admin note: text overlap with arXiv:1810.05045, arXiv:1711.0095
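
    For intuition, here is a minimal Python sketch of the setting described above: the (1+1)-EA on OneMax under one-bit noise, with the median of m noisy evaluations used as the fitness estimate. The function names, the noise probability p=0.5, and the sample size m=11 are illustrative assumptions, not values taken from the paper.

        import random
        import statistics

        def onemax(x):
            # True fitness: number of 1-bits.
            return sum(x)

        def noisy_onemax(x, p=0.5):
            # One-bit noise: with probability p, the fitness of a copy of x
            # with one uniformly chosen bit flipped is returned instead.
            if random.random() < p:
                y = list(x)
                y[random.randrange(len(y))] ^= 1
                return onemax(y)
            return onemax(x)

        def median_fitness(x, m):
            # Median sampling: approximate the true fitness by the median
            # of m independent noisy evaluations.
            return statistics.median(noisy_onemax(x) for _ in range(m))

        def one_plus_one_ea(n=50, m=11, max_evals=500_000):
            # (1+1)-EA with standard bit mutation (rate 1/n); the offspring
            # is accepted if its sampled median fitness is at least the
            # parent's.
            x = [random.randint(0, 1) for _ in range(n)]
            evals = 0
            while onemax(x) < n and evals < max_evals:
                y = [b ^ 1 if random.random() < 1 / n else b for b in x]
                if median_fitness(y, m) >= median_fitness(x, m):
                    x = y
                evals += 2 * m
            return x, evals

    Replacing statistics.median with the sample mean in median_fitness recovers the classical mean-sampling strategy the paper compares against.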

    Analysis of Noisy Evolutionary Optimization When Sampling Fails

    In noisy evolutionary optimization, sampling is a common strategy to deal with noise. Under the sampling strategy, the fitness of a solution is evaluated independently multiple times (the number of evaluations is called the sample size), and the true fitness is then approximated by the average of these evaluations. Previous studies on sampling are mainly empirical. In this paper, we first investigate the effect of the sample size from a theoretical perspective. By analyzing the (1+1)-EA on the noisy LeadingOnes problem, we show that as the sample size increases, the running time first reduces from exponential to polynomial but then returns to exponential. This suggests that a proper sample size is crucial in practice. We then investigate which strategies can work when sampling with any fixed sample size fails. Through two illustrative examples, we prove that using parent or offspring populations can be better. Finally, we construct an artificial noisy example to show that when neither sampling nor populations is effective, adaptive sampling (i.e., sampling with an adaptive sample size) can work. This provides, for the first time, theoretical support for the use of adaptive sampling.
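
    A minimal Python sketch of the analyzed setting, assuming a simple one-bit prior noise model and, for the adaptive case, a sample size that grows by one every n generations; both the noise model and the adaptive schedule are illustrative assumptions rather than the paper's exact constructions.

        import random

        def leading_ones(x):
            # True fitness: length of the longest all-ones prefix.
            k = 0
            for b in x:
                if b != 1:
                    break
                k += 1
            return k

        def noisy_leading_ones(x, p=0.5):
            # Assumed prior noise: with probability p, evaluate a copy of x
            # with one uniformly chosen bit flipped.
            if random.random() < p:
                y = list(x)
                y[random.randrange(len(y))] ^= 1
                return leading_ones(y)
            return leading_ones(x)

        def sampled_fitness(x, m):
            # Sampling with sample size m: mean of m independent evaluations.
            return sum(noisy_leading_ones(x) for _ in range(m)) / m

        def one_plus_one_ea(n=30, sample_size=None, max_gens=50_000):
            # (1+1)-EA on noisy LeadingOnes. sample_size=None switches to an
            # adaptive schedule (m grows by one every n generations), an
            # illustrative rule rather than the paper's scheme.
            x = [0] * n
            for gen in range(max_gens):
                if leading_ones(x) == n:
                    return gen
                m = sample_size if sample_size is not None else gen // n + 1
                y = [b ^ 1 if random.random() < 1 / n else b for b in x]
                if sampled_fitness(y, m) >= sampled_fitness(x, m):
                    x = y
            return max_gens

    Passing a fixed sample_size exercises the fixed-size regime studied first; leaving it at None exercises the adaptive regime.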

    Running Time Analysis of the (1+1)-EA for Robust Linear Optimization

    Evolutionary algorithms (EAs) have found many successful real-world applications, where the optimization problems are often subject to a wide range of uncertainties. To understand the practical behaviors of EAs theoretically, a series of efforts have been devoted to analyzing the running time of EAs for optimization under uncertainties. Existing studies mainly focus on noisy and dynamic optimization, while another common type of uncertain optimization, i.e., robust optimization, has rarely been touched. In this paper, we analyze the expected running time of the (1+1)-EA solving robust linear optimization problems (i.e., linear problems under robust scenarios) with a cardinality constraint k. Two common robust scenarios, i.e., deletion-robust and worst-case, are considered. In particular, we derive tight ranges of the robust parameter d or the budget k that allow the (1+1)-EA to find an optimal solution in polynomial running time, which discloses the potential of EAs for robust optimization.
    Comment: 17 pages, 1 table
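
    A minimal Python sketch of the deletion-robust scenario, assuming the robust value of a solution is its linear value after an adversary deletes the d most valuable selected items, and that offspring violating the cardinality constraint are rejected; both modeling choices are assumptions for illustration, and the paper's formal scenarios may differ.

        import random

        def deletion_robust_value(x, w, d):
            # Assumed deletion-robust objective: an adversary deletes the d
            # most valuable selected items; the remaining linear value counts.
            chosen = sorted((w[i] for i, b in enumerate(x) if b == 1),
                            reverse=True)
            return sum(chosen[d:])

        def one_plus_one_ea(w, k, d, max_gens=20_000):
            # (1+1)-EA under a cardinality constraint: offspring with more
            # than k selected items are rejected outright.
            n = len(w)
            x = [0] * n
            for _ in range(max_gens):
                y = [b ^ 1 if random.random() < 1 / n else b for b in x]
                if sum(y) <= k and (deletion_robust_value(y, w, d)
                                    >= deletion_robust_value(x, w, d)):
                    x = y
            return x

        # Usage: 10 items with random weights, select at most k=4 items,
        # an adversary then deletes d=1 of the selected items.
        weights = [round(random.uniform(1.0, 10.0), 2) for _ in range(10)]
        print(one_plus_one_ea(weights, k=4, d=1))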