On Single-Objective Sub-Graph-Based Mutation for Solving the Bi-Objective Minimum Spanning Tree Problem
We contribute to the efficient approximation of the Pareto set of the
classical NP-hard multi-objective minimum spanning tree problem
(moMST) by means of evolutionary computation. More precisely, building upon
preliminary work, we analyse the neighborhood structure of Pareto-optimal
spanning trees and design several highly biased sub-graph-based mutation
operators founded on the gained insights. In a nutshell, these operators
replace (un)connected sub-trees of candidate solutions with locally optimal
sub-trees. The latter (biased) step is realized by applying Kruskal's
single-objective MST algorithm to a weighted sum scalarization of a sub-graph.
We prove runtime complexity results for the introduced operators and
investigate the desirable Pareto-beneficial property, which states that a
mutant cannot be dominated by its parent. Moreover, we perform an extensive
experimental benchmark study to showcase the operators' practical suitability.
Our results confirm that the sub-graph-based operators beat baseline algorithms
from the literature, even under a severely restricted budget of function
evaluations, on four different classes of complete graphs with different
shapes of the Pareto front.
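The biased replacement step described above (a single-objective MST computation on a weighted-sum scalarization of a sub-graph) can be sketched as follows. The function names and the edge representation are illustrative assumptions, not the authors' implementation:

```python
def kruskal_mst(nodes, edges):
    """Kruskal's single-objective MST; edges are (u, v, weight) triples."""
    parent = {v: v for v in nodes}

    def find(v):
        # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:  # joining two components never creates a cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree


def scalarized_mst(nodes, biedges, lam):
    """Biased step: collapse each bi-objective weight (w1, w2) into the
    weighted sum lam*w1 + (1-lam)*w2, then run Kruskal on the result."""
    edges = [(u, v, lam * w1 + (1 - lam) * w2) for u, v, w1, w2 in biedges]
    return kruskal_mst(nodes, edges)
```

In the mutation operators, such a routine would be applied only to the sub-graph selected for replacement, with the scalarization weight chosen per application.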
Performance Analysis of Evolutionary Algorithms for the Minimum Label Spanning Tree Problem
Some experimental investigations have shown that evolutionary algorithms
(EAs) are efficient for the minimum label spanning tree (MLST) problem.
However, little is known about their performance in theory. As one step
towards this issue, we theoretically analyze the performance of the (1+1) EA,
a simple version of EAs, and a multi-objective evolutionary algorithm called
GSEMO on the MLST problem. We reveal that on the MLST problem the (1+1) EA and
GSEMO achieve a guaranteed approximation ratio in expected time polynomial in
the number of nodes and the number of labels, and that GSEMO achieves a
further approximation guarantee within expected time polynomial in the same
two quantities. At the same time, we show that the (1+1) EA and GSEMO
outperform local search algorithms on three instances of the MLST problem. We
also construct an instance on which GSEMO outperforms the (1+1) EA.
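For reference, the (1+1) EA analyzed in such results is the textbook single-individual scheme: standard bit mutation with rate 1/n and elitist acceptance. A minimal sketch follows; the bitstring encoding and the fitness function here are generic placeholders, not the paper's MLST encoding:

```python
import random


def one_plus_one_ea(fitness, n, budget, seed=0):
    """(1+1) EA: keep one parent; flip each bit independently with
    probability 1/n; accept the offspring if it is at least as fit."""
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]
    fx = fitness(x)
    for _ in range(budget):
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]
        fy = fitness(y)
        if fy >= fx:  # maximization; ties accepted
            x, fx = y, fy
    return x, fx


# Illustration on OneMax (number of ones), not on MLST itself:
best, value = one_plus_one_ea(sum, 20, 20000)
```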
Plateaus can be harder in multi-objective optimization
In recent years a lot of progress has been made in understanding the behavior of evolutionary computation methods for single- and multi-objective problems. Our aim is to analyze the diversity mechanisms that are implicitly used in evolutionary algorithms for multi-objective problems by rigorous runtime analyses. We show that, even if the population size is small, the runtime can be exponential where corresponding single-objective problems are optimized within polynomial time. To illustrate this behavior we analyze a simple plateau function in a first step and extend our result to a class of instances of the well-known SetCover problem.
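A plateau function of the kind studied here gives the optimizer no fitness signal over a region of the search space. One common textbook variant, shown below as an illustrative assumption (the paper's exact function may differ in detail), flattens the last k fitness levels of OneMax:

```python
def plateau_k(x, k):
    """Like OneMax below n - k ones; constant on the last k levels,
    so only the all-ones optimum stands out above the plateau."""
    n, ones = len(x), sum(x)
    if ones == n:
        return n + 1          # unique optimum
    if ones >= n - k:
        return n - k          # flat region: no selection gradient
    return ones               # OneMax-style guidance elsewhere
```

On the flat region, mutation performs an unbiased random walk, which is exactly where the diversity mechanisms discussed above come into play.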
Lazy Parameter Tuning and Control: Choosing All Parameters Randomly from a Power-Law Distribution
Most evolutionary algorithms have multiple parameters and their values
drastically affect the performance. Due to the often complicated interplay of
the parameters, setting these values right for a particular problem (parameter
tuning) is a challenging task. This task becomes even more complicated when the
optimal parameter values change significantly during the run of the algorithm
since then a dynamic parameter choice (parameter control) is necessary.
In this work, we propose a lazy but effective solution, namely choosing all
parameter values (where this makes sense) in each iteration randomly from a
suitably scaled power-law distribution. To demonstrate the effectiveness of
this approach, we perform runtime analyses of the (1+(λ,λ))
genetic algorithm with all three parameters chosen in this manner. We show that
this algorithm on the one hand can imitate simple hill-climbers like the
(1+1) EA, giving the same asymptotic runtime on problems like OneMax,
LeadingOnes, or Minimum Spanning Tree. On the other hand, this algorithm is
also very efficient on jump functions, where the best static parameters are
very different from those necessary to optimize simple problems. We prove a
performance guarantee that is comparable to, and sometimes even better than, the best
performance known for static parameters. We complement our theoretical results
with a rigorous empirical study confirming what the asymptotic runtime results
suggest.
Comment: Extended version of the paper accepted to GECCO 2021, including all the proofs omitted in the conference version.
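The "lazy" scheme amounts to drawing each parameter anew in every iteration from a power-law distribution over its feasible range. A sketch of such a sampler follows; the function name and the exponent value are assumptions for illustration, not the paper's exact setup:

```python
import random


def power_law_sample(upper, beta, rng):
    """Draw an integer k in {1, ..., upper} with probability proportional
    to k ** (-beta): small values dominate, but large values still occur
    with heavy-tailed, non-negligible probability."""
    ks = list(range(1, upper + 1))
    weights = [k ** (-beta) for k in ks]
    return rng.choices(ks, weights=weights, k=1)[0]


# e.g. resampling an offspring population size each iteration:
rng = random.Random(1)
lam = power_law_sample(64, 1.5, rng)
```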
The First Proven Performance Guarantees for the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) on a Combinatorial Optimization Problem
The Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is one of the most
prominent algorithms to solve multi-objective optimization problems. Recently,
the first mathematical runtime guarantees have been obtained for this
algorithm, however only for synthetic benchmark problems.
In this work, we give the first proven performance guarantees for a classic
optimization problem, the NP-complete bi-objective minimum spanning tree
problem. More specifically, we show that the NSGA-II with a sufficiently large
population size computes all extremal points of the Pareto front in an
expected number of iterations that is polynomial in the number of vertices,
the number of edges, and the maximum edge weight of the problem instance.
This result confirms, via
mathematical means, the good performance of the NSGA-II observed empirically.
It also shows that mathematical analyses of this algorithm are not only
possible for synthetic benchmark problems, but also for more complex
combinatorial optimization problems.
As a side result, we also obtain a new analysis of the performance of the
global SEMO algorithm on the bi-objective minimum spanning tree problem, which
improves the previous best result by a factor equal to the number of extremal
points of the Pareto front, a set whose size can grow substantially with the
instance size. The
main reason for this improvement is our observation that both multi-objective
evolutionary algorithms find the different extremal points in parallel rather
than sequentially, as assumed in the previous proofs.
Comment: Author-generated version of a paper appearing in the proceedings of IJCAI 2023.
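The extremal points mentioned here are the Pareto-optimal objective vectors that minimize some weighted sum of the two objectives, i.e. the vertices of the lower convex hull of the Pareto front. A standalone sketch for bi-objective minimization follows (the names are illustrative; this is not the NSGA-II itself):

```python
def dominates(a, b):
    """a Pareto-dominates b under minimization."""
    return all(x <= y for x, y in zip(a, b)) and a != b


def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]


def _cross(o, a, b):
    # cross product of (a - o) and (b - o); > 0 means a left (convex) turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


def extremal_points(points):
    """Vertices of the lower convex hull of the bi-objective front:
    exactly the points optimal for some weighted-sum scalarization."""
    front = sorted(set(pareto_front(points)))   # f1 increasing, f2 decreasing
    hull = []
    for p in front:
        while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()                          # previous point is not extremal
        hull.append(p)
    return hull
```

Non-extremal Pareto points (those strictly above the hull) are invisible to any weighted-sum scalarization, which is why results like the one above are stated for the extremal points.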