
    Speeding Up Evolutionary Multi-objective Optimisation Through Diversity-Based Parent Selection

    Parent selection in evolutionary algorithms for multi-objective optimisation is usually performed by dominance mechanisms or indicator functions that prefer non-dominated points, while the reproduction phase involves the application of diversity mechanisms or other methods to achieve a good spread of the population along the Pareto front. We propose to refine the parent selection in evolutionary multi-objective optimisation with diversity-based metrics. The aim is to focus on individuals with a high diversity contribution located in poorly explored areas of the search space, so that the chances of creating new non-dominated individuals are better than in highly populated areas. We show by means of rigorous runtime analysis that the use of diversity-based parent selection mechanisms in the Simple Evolutionary Multi-objective Optimiser (SEMO) and Global SEMO for the well-known bi-objective functions OneMinMax and LOTZ can significantly improve their performance. Our theoretical results are accompanied by experiments that show a correspondence between theory and empirical results.
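
    As a concrete illustration, the following minimal Python sketch runs SEMO with a diversity-based parent selection step on OneMinMax. It is a sketch under assumptions rather than the paper's exact scheme: crowding distance stands in for the diversity contribution, and the bit-string length and step budget are illustrative.

    import random

    N = 20  # bit-string length (illustrative)

    def one_min_max(x):
        # Bi-objective OneMinMax: maximise (#zeros, #ones).
        ones = sum(x)
        return (len(x) - ones, ones)

    def dominates(fa, fb):
        # fa Pareto-dominates fb (maximisation, strict in at least one objective).
        return all(a >= b for a, b in zip(fa, fb)) and fa != fb

    def crowding_distance(pop):
        # Crowding distance in objective space, used here as the diversity
        # contribution (an assumption; the paper studies dedicated metrics).
        dist = [0.0] * len(pop)
        for m in range(2):
            order = sorted(range(len(pop)), key=lambda i: one_min_max(pop[i])[m])
            dist[order[0]] = dist[order[-1]] = float("inf")
            span = (one_min_max(pop[order[-1]])[m]
                    - one_min_max(pop[order[0]])[m]) or 1
            for a, b, c in zip(order, order[1:], order[2:]):
                dist[b] += (one_min_max(pop[c])[m] - one_min_max(pop[a])[m]) / span
        return dist

    def semo_diversity(steps=200_000):
        pop = [tuple(random.randint(0, 1) for _ in range(N))]
        for _ in range(steps):
            # Diversity-based parent selection: pick a parent of maximal
            # diversity contribution instead of uniformly at random.
            d = crowding_distance(pop)
            parent = pop[max(range(len(pop)), key=d.__getitem__)]
            # SEMO uses local mutation: flip exactly one uniformly chosen bit.
            i = random.randrange(N)
            child = parent[:i] + (1 - parent[i],) + parent[i + 1:]
            fc = one_min_max(child)
            if not any(dominates(one_min_max(q), fc) for q in pop):
                # Keep only mutually non-dominated points, one per fitness value.
                pop = [q for q in pop
                       if not dominates(fc, one_min_max(q)) and one_min_max(q) != fc]
                pop.append(child)
            if len(pop) == N + 1:
                break  # the whole Pareto front of OneMinMax is covered
        return pop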

    On the Runtime Analysis of the Clearing Diversity-Preserving Mechanism

    Clearing is a niching method inspired by the principle of assigning the available resources within a niche to a single individual. The clearing procedure supplies these resources only to the best individual of each niche: the winner. So far, its analysis has focused on experimental approaches, which have shown that clearing is a powerful diversity-preserving mechanism. Using rigorous runtime analysis to explain how and why it is a powerful method, we prove that a mutation-based evolutionary algorithm with a large enough population size and a phenotypic distance function always succeeds in optimising all functions of unitation for small niches in polynomial time, while a genotypic distance function requires exponential time. Finally, we prove that with phenotypic and genotypic distances clearing is able to find both optima for TwoMax and several general classes of bimodal functions in polynomial expected time. We use empirical analysis to highlight some of the characteristics that make it a useful mechanism and to support the theoretical results.
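
    The clearing procedure itself is short enough to state in code. Below is a minimal Python sketch of one clearing step with niche capacity 1, together with the two distance functions contrasted in the analysis; parameter names such as sigma (the clearing radius) are illustrative.

    def clearing(population, fitness, distance, sigma):
        # One clearing step with niche capacity 1: within each niche
        # (radius sigma under `distance`) only the winner keeps its fitness.
        order = sorted(range(len(population)),
                       key=lambda i: fitness[i], reverse=True)
        cleared = [0.0] * len(population)
        winners = []
        for i in order:
            if all(distance(population[i], population[w]) > sigma
                   for w in winners):
                winners.append(i)          # best individual of a new niche
                cleared[i] = fitness[i]    # the winner keeps its resources
        return cleared

    def phenotypic_distance(x, y):
        # Distance on the phenotype of functions of unitation (the number
        # of ones): the polynomial-time case for small niches.
        return abs(sum(x) - sum(y))

    def hamming_distance(x, y):
        # Genotypic distance, for which an exponential lower bound is
        # proved in the small-niche setting.
        return sum(a != b for a, b in zip(x, y))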

    Design and analysis of diversity-based parent selection schemes for speeding up evolutionary multi-objective optimisation

    Parent selection in evolutionary algorithms for multi-objective optimisation is usually performed by dominance mechanisms or indicator functions that prefer non-dominated points. We propose to refine the parent selection in evolutionary multi-objective optimisation with diversity-based metrics. The aim is to focus on individuals with a high diversity contribution located in poorly explored areas of the search space, so that the chances of creating new non-dominated individuals are better than in highly populated areas. We show by means of rigorous runtime analysis that the use of diversity-based parent selection mechanisms in the Simple Evolutionary Multi-objective Optimiser (SEMO) and Global SEMO for the well-known bi-objective functions OneMinMax and LOTZ can significantly improve their performance. Our theoretical results are accompanied by experimental studies that show a correspondence between theory and empirical results and motivate further theoretical investigations in terms of stagnation. We show that stagnation might occur when favouring individuals with a high diversity contribution in the parent selection step, and we provide a discussion of which scheme to use for more complex problems based on our theoretical and experimental results.
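
    For reference, the two bi-objective benchmark functions analysed here are easy to state in code; in the sketch below both objectives are to be maximised.

    def one_min_max(x):
        # OneMinMax: maximise both the number of zeros and the number of
        # ones; every search point is Pareto-optimal, so the task is to
        # cover the whole front.
        ones = sum(x)
        return (len(x) - ones, ones)

    def lotz(x):
        # LOTZ (Leading Ones, Trailing Zeros): maximise the length of the
        # prefix of ones and the length of the suffix of zeros.
        n = len(x)
        leading_ones = next((i for i, b in enumerate(x) if b == 0), n)
        trailing_zeros = next((i for i, b in enumerate(reversed(x)) if b == 1), n)
        return (leading_ones, trailing_zeros)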

    The Univariate Marginal Distribution Algorithm Copes Well With Deception and Epistasis

    In their recent work, Lehre and Nguyen (FOGA 2019) show that the univariate marginal distribution algorithm (UMDA) needs time exponential in the parent population size to optimize the DeceptiveLeadingBlocks (DLB) problem. They conclude from this result that univariate EDAs have difficulties with deception and epistasis. In this work, we show that this negative finding is caused by an unfortunate choice of the parameters of the UMDA. When the population sizes are chosen large enough to prevent genetic drift, then the UMDA optimizes the DLB problem with high probability with at most $\lambda(\frac{n}{2} + 2e \ln n)$ fitness evaluations. Since an offspring population size $\lambda$ of order $n \log n$ can prevent genetic drift, the UMDA can solve the DLB problem with $O(n^2 \log n)$ fitness evaluations. In contrast, for classic evolutionary algorithms no better runtime guarantee than $O(n^3)$ is known (which we prove to be tight for the $(1+1)$ EA), so our result rather suggests that the UMDA can cope well with deception and epistasis. From a broader perspective, our result shows that the UMDA can cope better with local optima than evolutionary algorithms; such a result was previously known only for the compact genetic algorithm. Together with the lower bound of Lehre and Nguyen, our result for the first time rigorously proves that running EDAs in the regime with genetic drift can lead to drastic performance losses.
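
    To make the setting concrete, here is a minimal Python sketch of the UMDA with truncation selection and the usual border margins, together with DeceptiveLeadingBlocks as it is commonly formulated (blocks of two bits, even n assumed); details such as the margin 1/n are standard choices, not taken from this paper.

    import random

    def dlb(x):
        # DeceptiveLeadingBlocks (as commonly formulated): read bits in
        # blocks of two; each leading 11-block scores 2, and the first
        # non-11 block adds 1 if it is 00 (the deceptive trap), else 0.
        for j in range(0, len(x), 2):
            if (x[j], x[j + 1]) != (1, 1):
                return j + (1 if (x[j], x[j + 1]) == (0, 0) else 0)
        return len(x)

    def umda(f, n, lam, mu, max_evals):
        # UMDA (maximisation). Choosing lam of order n log n keeps the
        # marginal frequencies from drifting, the regime analysed here.
        p = [0.5] * n                      # marginal probability of a 1 per bit
        best, evals = None, 0
        while evals < max_evals:
            pop = [[int(random.random() < p[i]) for i in range(n)]
                   for _ in range(lam)]
            evals += lam
            pop.sort(key=f, reverse=True)
            if best is None or f(pop[0]) > f(best):
                best = pop[0]
            sel = pop[:mu]                 # truncation selection
            for i in range(n):
                freq = sum(x[i] for x in sel) / mu
                # Restrict frequencies to [1/n, 1 - 1/n] (the usual margins).
                p[i] = min(max(freq, 1 / n), 1 - 1 / n)
        return best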

    How to Escape Local Optima in Black Box Optimisation: When Non-elitism Outperforms Elitism

    Escaping local optima is one of the major obstacles to function optimisation. Using the metaphor of a fitness landscape, local optima correspond to hills separated by fitness valleys that have to be overcome. We define a class of fitness valleys of tunable difficulty by considering their length, the length of the Hamming path between the two optima, and their depth, the drop in fitness. For this function class we present a runtime comparison between stochastic search algorithms using different search strategies. The $(1+1)$ EA is a simple and well-studied evolutionary algorithm that has to jump across the valley to a point of higher fitness because it does not accept worsening moves (elitism). In contrast, the Metropolis algorithm and the Strong Selection Weak Mutation (SSWM) algorithm, a famous process in population genetics, are both able to cross the fitness valley by accepting worsening moves. We show that the runtime of the $(1+1)$ EA depends critically on the length of the valley, while the runtimes of the non-elitist algorithms depend crucially on the depth of the valley. Moreover, we show that both SSWM and Metropolis can also efficiently optimise a rugged function consisting of consecutive valleys.
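
    The difference between the acceptance rules is the crux of the comparison. The sketch below states both in Python: Metropolis acceptance and, for SSWM, the standard fixation probability from population genetics; the parameter names (temperature, beta, N) are chosen for illustration.

    import math
    import random

    def metropolis_accept(delta_f, temperature):
        # Metropolis: improvements are always accepted; a worsening move
        # with fitness difference delta_f < 0 is accepted with
        # probability exp(delta_f / temperature).
        return delta_f >= 0 or random.random() < math.exp(delta_f / temperature)

    def sswm_accept(delta_f, beta, N):
        # SSWM: a move is accepted with the fixation probability of a
        # mutant with scaled fitness advantage beta * delta_f in a
        # population of size N; unlike in Metropolis, even improvements
        # can be rejected.
        if delta_f == 0:
            return random.random() < 1 / N
        p_fix = ((1 - math.exp(-2 * beta * delta_f))
                 / (1 - math.exp(-2 * N * beta * delta_f)))
        return random.random() < p_fix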

    On the choice of the update strength in estimation-of-distribution algorithms and ant colony optimization

    Probabilistic model-building Genetic Algorithms (PMBGAs) are a class of metaheuristics that evolve probability distributions favoring optimal solutions in the underlying search space by repeatedly sampling from the distribution and updating it according to promising samples. We provide a rigorous runtime analysis concerning the update strength, a vital parameter in PMBGAs such as the step size $1/K$ in the so-called compact Genetic Algorithm (cGA) and the evaporation factor $\rho$ in ant colony optimizers (ACO). While a large update strength is desirable for exploitation, there is a general trade-off: too strong updates can lead to unstable behavior and possibly poor performance. We demonstrate this trade-off for the cGA and a simple ACO algorithm on the well-known OneMax function. More precisely, we obtain lower bounds on the expected runtime of $\Omega(K\sqrt{n} + n \log n)$ and $\Omega(\sqrt{n}/\rho + n \log n)$, respectively, suggesting that the update strength should be limited to $1/K, \rho = O(1/(\sqrt{n} \log n))$. In fact, choosing $1/K, \rho \sim 1/(\sqrt{n} \log n)$, both algorithms efficiently optimize OneMax in expected time $\Theta(n \log n)$. Our analyses provide new insights into the stochastic behavior of PMBGAs and propose new guidelines for setting the update strength in global optimization.
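
    As an illustration of the role of the update strength, here is a minimal Python sketch of the cGA on OneMax with step size 1/K; the border margins at 1/n and 1 - 1/n are a standard implementation detail, not specific to this paper.

    import random

    def cga_onemax(n, K, max_iters):
        # Compact GA on OneMax with update strength 1/K. The analysis
        # suggests K should be of order sqrt(n) log n or larger to avoid
        # unstable behavior of the frequencies.
        p = [0.5] * n
        for _ in range(max_iters):
            x = [int(random.random() < pi) for pi in p]
            y = [int(random.random() < pi) for pi in p]
            if sum(x) < sum(y):
                x, y = y, x                 # x is the winner on OneMax
            for i in range(n):
                if x[i] != y[i]:
                    step = 1 / K if x[i] == 1 else -1 / K
                    # Move the frequency towards the winner, within margins.
                    p[i] = min(max(p[i] + step, 1 / n), 1 - 1 / n)
            if all(pi >= 1 - 1 / n for pi in p):
                break  # all frequencies at the upper border: done
        return p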

    On the Analysis of Trajectory-Based Search Algorithms: When is it Beneficial to Reject Improvements?

    We investigate popular trajectory-based algorithms inspired by biology and physics to answer a question of general significance: when is it beneficial to reject improvements? A distinguishing factor of SSWM (strong selection weak mutation), a popular model from population genetics, compared to the Metropolis algorithm (MA), is that the former can reject improvements, while the latter always accepts them. We investigate when one strategy outperforms the other. Since we prove that both algorithms converge to the same stationary distribution, we concentrate on identifying a class of functions inducing large mixing times, where the algorithms will outperform each other over a long period of time. The outcome of the analysis is the definition of a function where SSWM is efficient, while Metropolis requires at least exponential time. The identified function favours algorithms that prefer high-quality improvements over smaller ones, revealing similarities between the optimisation strategies of SSWM and Metropolis and those of best-improvement local search (BILS) and first-improvement local search (FILS), respectively. We conclude the paper with a comparison of the performance of these algorithms and a $(1,\lambda)$ RLS on the identified function. The latter algorithm favours the steepest gradient with a probability that increases with the size of its offspring population. The results confirm that BILS excels and that the $(1,\lambda)$ RLS is efficient only for large enough population sizes.
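
    The BILS/FILS distinction referred to above can be made concrete in a few lines. The Python sketch below shows one step of each on bit strings under maximisation; it is an illustrative sketch, not the paper's experimental setup.

    import random

    def neighbours(x):
        # All Hamming neighbours of a bit string (tuple of 0/1).
        for i in range(len(x)):
            yield x[:i] + (1 - x[i],) + x[i + 1:]

    def bils_step(x, f):
        # Best-improvement local search: move to the best strictly
        # improving neighbour (the steepest gradient), if any exists.
        best = max(neighbours(x), key=f)
        return best if f(best) > f(x) else x

    def fils_step(x, f):
        # First-improvement local search: scan neighbours in random
        # order and take the first strict improvement found.
        nbrs = list(neighbours(x))
        random.shuffle(nbrs)
        for y in nbrs:
            if f(y) > f(x):
                return y
        return x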

    Applications development for the computational grid

    The Computational Grid has promised a great deal in support of innovative applications, particularly in science and engineering. However, developing applications for this highly distributed, and often faulty, infrastructure can be demanding. Often it can take as long to set up a computational experiment as it does to execute it. Clearly we need to be more efficient if the Grid is to deliver useful results to applications scientists and engineers. In this paper I present a raft of upper-middleware services and tools aimed at solving the software engineering challenges of building real applications.

    Model optimization and parameter estimation with nimrod/o

    Optimization problems where the evaluation step is computationally intensive are becoming increasingly common in both engineering design and model parameter estimation. We describe a tool, Nimrod/O, that expedites the solution of such problems by performing evaluations concurrently, utilizing a range of platforms from workstations to widely distributed parallel machines. Nimrod/O offers a range of optimization algorithms adapted to take advantage of parallel batches of evaluations. We describe a selection of case studies where Nimrod/O has been successfully applied, showing the parallelism achieved by this approach.
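
    A minimal Python sketch of the core pattern, under assumptions: the optimizer proposes a batch of candidate parameter settings and evaluates them concurrently, with a local process pool standing in for Nimrod/O's dispatch of evaluations to distributed platforms.

    from concurrent.futures import ProcessPoolExecutor

    def evaluate_batch(objective, candidates, workers=8):
        # Evaluate a batch of candidates concurrently. `objective` would
        # wrap the expensive model run; a process pool stands in for the
        # distributed platforms that Nimrod/O targets.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(objective, candidates))

    An optimization loop then updates its state from the returned batch of values, so the wall-clock cost per iteration approaches that of a single evaluation rather than a whole batch.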