
    On Single-Objective Sub-Graph-Based Mutation for Solving the Bi-Objective Minimum Spanning Tree Problem

    We contribute to the efficient approximation of the Pareto set for the classical $\mathcal{NP}$-hard multi-objective minimum spanning tree problem (moMST) using evolutionary computation. More precisely, building upon preliminary work, we analyse the neighborhood structure of Pareto-optimal spanning trees and design several highly biased sub-graph-based mutation operators founded on the gained insights. In a nutshell, these operators replace (un)connected sub-trees of candidate solutions with locally optimal sub-trees. The latter (biased) step is realized by applying Kruskal's single-objective MST algorithm to a weighted-sum scalarization of a sub-graph. We prove runtime complexity results for the introduced operators and investigate the desirable Pareto-beneficial property, which states that mutants cannot be dominated by their parent. Moreover, we perform an extensive experimental benchmark study to showcase the operators' practical suitability. Our results confirm that the sub-graph-based operators beat baseline algorithms from the literature, even with a severely restricted computational budget in terms of function evaluations, on four different classes of complete graphs with different shapes of the Pareto front.
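
    The biased step described above can be pictured with a short sketch. Below is a minimal, hypothetical Python illustration (not the paper's implementation) of running Kruskal's algorithm on a sub-graph under a weighted-sum scalarization of the two edge weights; the surrounding operator that chooses which sub-tree to replace and splices the result back into the parent is omitted.

```python
# Hypothetical sketch of the scalarized sub-step only; not the paper's code.
def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def scalarized_subgraph_mst(vertices, edges, lam):
    """Kruskal's MST on the sub-graph induced by `vertices`,
    minimizing lam * w1 + (1 - lam) * w2.
    `edges` is an iterable of (u, v, (w1, w2)) tuples."""
    parent = {v: v for v in vertices}
    sub_edges = [e for e in edges if e[0] in vertices and e[1] in vertices]
    # Kruskal: consider edges in order of increasing scalarized weight
    sub_edges.sort(key=lambda e: lam * e[2][0] + (1 - lam) * e[2][1])
    tree = []
    for u, v, w in sub_edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                 # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v, w))
    return tree
```

    On a connected induced sub-graph, this returns a sub-tree that is optimal for the chosen scalarization weight lam in [0, 1], which is the "locally optimal sub-tree" used by the mutation.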

    Performance Analysis of Evolutionary Algorithms for the Minimum Label Spanning Tree Problem

    Experimental investigations have shown that evolutionary algorithms (EAs) are efficient for the minimum label spanning tree (MLST) problem, but little is known about their behavior in theory. As a step towards this issue, we theoretically analyze the performance of the (1+1) EA, a simple version of EAs, and a multi-objective evolutionary algorithm called GSEMO on the MLST problem. We show that for the MLST$_b$ problem the (1+1) EA and GSEMO achieve a $\frac{b+1}{2}$-approximation ratio in expected time polynomial in $n$, the number of nodes, and $k$, the number of labels. We also show that GSEMO achieves a $(2\ln n)$-approximation ratio for the MLST problem in expected time polynomial in $n$ and $k$. Furthermore, we show that the (1+1) EA and GSEMO outperform local search algorithms on three instances of the MLST problem, and we construct an instance on which GSEMO outperforms the (1+1) EA.
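
    For orientation, the following is a hedged Python sketch of the standard (1+1) EA setup commonly used for the MLST problem: a candidate is a subset of the $k$ labels, and a lexicographic fitness first minimizes the number of connected components of the sub-graph restricted to the chosen labels and then the number of labels. The function names and the exact fitness encoding are assumptions of this sketch, not taken from the paper.

```python
import random

def components(n, edges, labels):
    """Connected components of the sub-graph using only edges whose label is chosen."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    comps = n
    for u, v, lab in edges:          # edges: (u, v, label) triples
        if lab in labels:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                comps -= 1
    return comps

def one_plus_one_ea_mlst(n, edges, k, steps=10_000):
    """(1+1) EA: flip each of the k label bits with prob. 1/k, accept if not worse."""
    current = {lab for lab in range(k) if random.random() < 0.5}
    def fitness(labels):             # lexicographic, to be minimized
        return (components(n, edges, labels), len(labels))
    for _ in range(steps):
        offspring = {lab for lab in range(k)
                     if (lab in current) != (random.random() < 1 / k)}
        if fitness(offspring) <= fitness(current):
            current = offspring
    return current
```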

    Plateaus can be harder in multi-objective optimization

    In recent years a lot of progress has been made in understanding the behavior of evolutionary computation methods for single- and multi-objective problems. Our aim is to analyze, by rigorous runtime analyses, the diversity mechanisms that are implicitly used in evolutionary algorithms for multi-objective problems. We show that, even if the population size is small, the runtime can be exponential where corresponding single-objective problems are optimized within polynomial time. To illustrate this behavior, we first analyze a simple plateau function and then extend our result to a class of instances of the well-known SetCover problem.
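
    The algorithms typically analyzed in such multi-objective runtime results are SEMO/GSEMO-style archive algorithms. Below is a minimal, hypothetical Python sketch of GSEMO that keeps one solution per non-dominated objective vector; the objective function f and the maximization convention are assumptions of this sketch, not details from the paper.

```python
import random

def dominates(a, b):
    """a weakly dominates b and is strictly better in at least one objective."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def gsemo(f, n, steps=10_000):
    """f maps a length-n bit tuple to an objective tuple (maximized here)."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    population = {x: f(x)}
    for _ in range(steps):
        parent = random.choice(list(population))
        child = tuple(b ^ (random.random() < 1 / n) for b in parent)  # std. mutation
        fc = f(child)
        if not any(dominates(fv, fc) for fv in population.values()):
            # drop members weakly dominated by the child, then add the child
            population = {y: fy for y, fy in population.items()
                          if not all(c >= v for c, v in zip(fc, fy))}
            population[child] = fc
    return population
```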

    Lazy Parameter Tuning and Control: Choosing All Parameters Randomly from a Power-Law Distribution

    Most evolutionary algorithms have multiple parameters, and their values drastically affect the performance. Due to the often complicated interplay of the parameters, setting these values right for a particular problem (parameter tuning) is a challenging task. This task becomes even more complicated when the optimal parameter values change significantly during the run of the algorithm, since then a dynamic parameter choice (parameter control) is necessary. In this work, we propose a lazy but effective solution, namely choosing all parameter values (where this makes sense) in each iteration randomly from a suitably scaled power-law distribution. To demonstrate the effectiveness of this approach, we perform runtime analyses of the $(1+(\lambda,\lambda))$ genetic algorithm with all three parameters chosen in this manner. We show that this algorithm can, on the one hand, imitate simple hill-climbers like the $(1+1)$ EA, giving the same asymptotic runtime on problems like OneMax, LeadingOnes, or Minimum Spanning Tree. On the other hand, this algorithm is also very efficient on jump functions, where the best static parameters are very different from those necessary to optimize simple problems. We prove a performance guarantee that is comparable to, and sometimes even better than, the best performance known for static parameters. We complement our theoretical results with a rigorous empirical study confirming what the asymptotic runtime results suggest. Comment: Extended version of the paper accepted to GECCO 2021, including all the proofs omitted in the conference version.
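
    A minimal Python sketch of the lazy parameter choice described above, under the assumption that $\lambda$ is drawn in each iteration from a power-law distribution on $\{1, \dots, n/2\}$ (the exponent $\beta \approx 2.5$ is an illustrative choice) and that the mutation rate and crossover bias are derived from it as $p = \lambda/n$ and $c = 1/\lambda$, the usual $(1+(\lambda,\lambda))$ GA parameterization:

```python
import random

def power_law_sample(upper, beta=2.5):
    """Sample an integer i in {1, ..., upper} with probability proportional to i**(-beta)."""
    weights = [i ** (-beta) for i in range(1, upper + 1)]
    return random.choices(range(1, upper + 1), weights=weights, k=1)[0]

def sample_parameters(n):
    """Draw fresh (lambda, mutation rate, crossover bias) for one iteration."""
    lam = power_law_sample(max(1, n // 2))
    p = lam / n          # mutation rate used in the mutation phase
    c = 1 / lam          # crossover bias used in the crossover phase
    return lam, p, c
```

    In each iteration of the GA one would call sample_parameters(n) anew; this is the "lazy" scheme: no tuning beforehand and no adaptation mechanism during the run, just fresh power-law samples.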

    The (1+(λ,λ)) Genetic Algorithm on the Vertex Cover Problem: Crossover Helps Leaving Plateaus


    The First Proven Performance Guarantees for the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) on a Combinatorial Optimization Problem

    The Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is one of the most prominent algorithms to solve multi-objective optimization problems. Recently, the first mathematical runtime guarantees have been obtained for this algorithm, however only for synthetic benchmark problems. In this work, we give the first proven performance guarantees for a classic optimization problem, the NP-complete bi-objective minimum spanning tree problem. More specifically, we show that the NSGA-II with population size $N \ge 4((n-1) w_{\max} + 1)$ computes all extremal points of the Pareto front in an expected number of $O(m^2 n w_{\max} \log(n w_{\max}))$ iterations, where $n$ is the number of vertices, $m$ the number of edges, and $w_{\max}$ the maximum edge weight in the problem instance. This result confirms, via mathematical means, the good performance of the NSGA-II observed empirically. It also shows that mathematical analyses of this algorithm are not only possible for synthetic benchmark problems, but also for more complex combinatorial optimization problems. As a side result, we also obtain a new analysis of the performance of the global SEMO algorithm on the bi-objective minimum spanning tree problem, which improves the previous best result by a factor of $|F|$, the number of extremal points of the Pareto front, a set that can be as large as $n w_{\max}$. The main reason for this improvement is our observation that both multi-objective evolutionary algorithms find the different extremal points in parallel rather than sequentially, as assumed in the previous proofs. Comment: Author-generated version of a paper appearing in the proceedings of IJCAI 202
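
    The bi-objective minimum spanning tree encoding underlying such analyses can be sketched as follows: candidates are bitstrings over the $m$ edges, and each objective receives a large penalty per connected component beyond one, so that connected solutions dominate disconnected ones. The Python sketch below is an assumption-laden illustration of this evaluation (NSGA-II itself is not shown), not the paper's exact fitness function.

```python
def evaluate_bi_mst(x, n, edges, w1, w2):
    """x: list of 0/1 over edges; edges: list of (u, v); w1, w2: per-edge weights.
    Returns the two penalized objective values (to be minimized)."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    comps = n
    for i, (u, v) in enumerate(edges):
        if x[i]:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                comps -= 1
    # penalty chosen larger than any attainable total weight (assumption of this sketch)
    penalty = (comps - 1) * (sum(w1) + sum(w2) + 1)
    f1 = penalty + sum(w for w, b in zip(w1, x) if b)
    f2 = penalty + sum(w for w, b in zip(w2, x) if b)
    return f1, f2
```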