Runtime Analysis of a Simple Multi-Objective Evolutionary Algorithm
Practical knowledge on the design and
application of multi-objective evolutionary
algorithms (MOEAs) is available, but well-founded
theoretical analyses of the runtime are rare.
Laumanns, Thiele, Zitzler, Welzl and Deb (2002)
have started such an analysis for two simple
mutation-based algorithms including SEMO.
These algorithms search locally in the
neighborhood of their current population by
selecting an individual and flipping one
randomly chosen bit. Due to its local search
operator, SEMO cannot escape from local optima,
and, therefore, has no finite expected runtime
in general.
In this talk, we investigate the runtime of
a variant of SEMO whose mutation operator
flips each bit independently. It is proven
that its expected runtime is O(n^n) for all
objective functions f: {0,1}^n -> R^m, and
that there are bicriteria problems among the
hardest problems for this algorithm. Moreover,
for each d between 2 and n, a bicriteria
problem with expected runtime Theta(n^d) is
presented. This shows that bicriteria problems
cover the full range of potential runtimes of
this variant of SEMO. For the problem LOTZ
(Leading-Ones-Trailing-Zeroes), the runtime
does not increase substantially if we use the
global search operator. Finally, we consider
the problem MOCO (Multi-Objective-Counting-Ones).
We show that the conjectured bound O(n^2 log n)
on the expected runtime is wrong for both
variants of SEMO. In fact, MOCO is almost a
worst-case example for SEMO if we consider
the expected runtime; however, the runtime is
O(n^2 log n) with high probability. Some
ideas from the proof will be presented.
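The two mutation operators and SEMO's archive-based selection can be sketched as follows. This is a minimal illustrative implementation, not the analyzed algorithm verbatim; the LOTZ instance, problem size, seed, and step budget are our own choices:

```python
import random

random.seed(0)

def lotz(x):
    """LOTZ (Leading-Ones-Trailing-Zeroes): maximize both objectives."""
    n = len(x)
    lo = 0
    while lo < n and x[lo] == 1:
        lo += 1
    tz = 0
    while tz < n and x[n - 1 - tz] == 0:
        tz += 1
    return (lo, tz)

def local_mutation(x):
    """SEMO's local operator: flip exactly one uniformly chosen bit."""
    i = random.randrange(len(x))
    return x[:i] + (1 - x[i],) + x[i + 1:]

def global_mutation(x):
    """Global variant: flip each bit independently with probability 1/n."""
    n = len(x)
    return tuple(1 - b if random.random() < 1.0 / n else b for b in x)

def weakly_dominates(u, v):
    """u weakly dominates v if u is at least as good in every objective."""
    return all(a >= b for a, b in zip(u, v))

def semo(f, n, mutate, steps):
    """Archive-based SEMO: keep only mutually non-dominated search points."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    pop = {x: f(x)}
    for _ in range(steps):
        parent = random.choice(list(pop))
        y = mutate(parent)
        fy = f(y)
        if any(weakly_dominates(fz, fy) for fz in pop.values()):
            continue  # y is weakly dominated by the archive: discard it
        pop = {z: fz for z, fz in pop.items() if not weakly_dominates(fy, fz)}
        pop[y] = fy
    return pop

front = semo(lotz, 8, global_mutation, 20000)
```

With the local one-bit operator the algorithm can get trapped in local optima, as described above, while the global operator can always leave them, at the price of the general O(n^n) worst-case bound.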
Multi-objective evolutionary algorithms (MOEAs) have become increasingly popular as multi-objective problem solving techniques.
Most studies of MOEAs are empirical. Only recently, a few theoretical
results have appeared. It is acknowledged that more theoretical research
is needed. An important open problem is to understand the role of populations in MOEAs. We present a simple bi-objective problem which emphasizes when populations are needed. Rigorous runtime analyses point
out an exponential runtime gap between a population-based algorithm
(SEMO) and several single individual-based algorithms on this problem.
This means that among the algorithms considered, only the population-based MOEA is successful and all other algorithms fail.
Runtime Analyses of Multi-Objective Evolutionary Algorithms in the Presence of Noise
In single-objective optimization, it is well known that evolutionary
algorithms, even without further adjustments, can tolerate a certain amount of
noise in the evaluation of the objective function. In contrast, this question
is not at all understood for multi-objective optimization.
In this work, we conduct the first mathematical runtime analysis of a simple
multi-objective evolutionary algorithm (MOEA) on a classic benchmark in the
presence of noise in the objective functions. We prove that when bit-wise prior
noise with rate p <= c/n, where c is a suitable constant, is present, the
simple evolutionary multi-objective optimizer (SEMO) without any
adjustments to cope with noise finds the Pareto front of the OneMinMax
benchmark in time O(n^2 log n), just as in the case without noise. Given that
the problem here is to arrive at a population consisting of individuals
witnessing the Pareto front, this is a surprisingly strong robustness to noise
(comparably simple evolutionary algorithms cannot optimize the single-objective
OneMax problem in polynomial time when the noise rate is omega(log(n)/n)). Our proofs
suggest that the strong robustness of the MOEA stems from its implicit
diversity mechanism designed to enable it to compute a population covering the
whole Pareto front.
Interestingly, this result only holds when the objective value of a solution
is determined only once and the algorithm from that point on works with this,
possibly noisy, objective value. We prove that when all solutions are
reevaluated in each iteration, then any noise rate
leads to a super-polynomial runtime. This is very different from
single-objective optimization, where it is generally preferred to reevaluate
solutions whenever their fitness is important and where examples are known such
that not reevaluating solutions can lead to catastrophic performance losses.
Comment: Appears at IJCAI 202
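A minimal sketch of the evaluate-once scheme the abstract describes, on OneMinMax with bit-wise prior noise; the noise rate, problem size, seed, and step budget below are our own illustrative choices:

```python
import random

random.seed(1)

def one_min_max(x):
    """OneMinMax: maximize the number of zeros and the number of ones."""
    ones = sum(x)
    return (len(x) - ones, ones)

def noisy_eval(f, x, p):
    """Bit-wise prior noise: each bit flips with probability p, then f is applied."""
    z = tuple(1 - b if random.random() < p else b for b in x)
    return f(z)

def weakly_dominates(u, v):
    return all(a >= b for a, b in zip(u, v))

def semo_evaluate_once(n, p, steps):
    """SEMO that evaluates each solution once and then trusts that
    (possibly noisy) objective value forever -- the robust regime."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    pop = {x: noisy_eval(one_min_max, x, p)}
    for _ in range(steps):
        parent = random.choice(list(pop))
        i = random.randrange(n)
        y = parent[:i] + (1 - parent[i],) + parent[i + 1:]
        fy = noisy_eval(one_min_max, y, p)  # evaluated once, never re-evaluated
        if any(weakly_dominates(fz, fy) for fz in pop.values()):
            continue
        pop = {z: fz for z, fz in pop.items() if not weakly_dominates(fy, fz)}
        pop[y] = fy
    return pop

archive = semo_evaluate_once(n=10, p=0.05, steps=5000)
```

Re-evaluating every archive member in each iteration instead would, per the result above, destroy this robustness.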
Combinatorial optimization and the analysis of randomized search heuristics
Randomized search heuristics have widely been applied to complex engineering problems as well as to problems from combinatorial optimization. We investigate the runtime behavior of randomized search heuristics and present runtime bounds for these heuristics on some well-known combinatorial optimization problems. Such analyses can help to better understand the working principles of these algorithms on combinatorial optimization problems and can help to design better algorithms for a newly given problem. Our analyses mainly consider evolutionary algorithms that have achieved good results on a wide class of NP-hard combinatorial optimization problems.

We start by analyzing some easy single-objective optimization problems, such as the minimum spanning tree problem or the problem of computing an Eulerian cycle of a given Eulerian graph, and prove bounds on the runtime of simple evolutionary algorithms. For the minimum spanning tree problem we also investigate a multi-objective model and show that randomized search heuristics find minimum spanning trees more easily in this model than in a single-objective one. Many polynomially solvable problems become NP-hard when a second objective has to be optimized at the same time. We show that evolutionary algorithms are able to compute good approximations for such problems by examining the NP-hard multi-objective minimum spanning tree problem.

Another kind of randomized search heuristic is ant colony optimization. Up to now, no runtime bounds have been achieved for this kind of heuristic. We investigate a simple ant colony optimization algorithm and present a first runtime analysis. At the end we turn to classical approximation algorithms. Motivated by our investigations of randomized search heuristics for the minimum spanning tree problem, we present a multi-objective model for NP-hard spanning tree problems and show that the model can help to speed up approximation algorithms for problems of this kind.
Approximating the least hypervolume contributor: NP-hard in general, but fast in practice
The hypervolume indicator is an increasingly popular set measure to compare
the quality of two Pareto sets. The basic ingredient of most hypervolume
indicator based optimization algorithms is the calculation of the hypervolume
contribution of single solutions regarding a Pareto set. We show that exact
calculation of the hypervolume contribution is #P-hard while its approximation
is NP-hard. The same holds for the calculation of the minimal contribution. We
also prove that it is NP-hard to decide whether a solution has the least
hypervolume contribution. Even deciding whether the contribution of a solution
is at most (1+ε) times the minimal contribution is NP-hard. This implies
that it is neither possible to efficiently find the least contributing solution
(unless P = NP) nor to approximate it (unless NP = BPP).
Nevertheless, in the second part of the paper we present a fast approximation
algorithm for this problem. We prove that for arbitrarily given ε, δ > 0
it calculates a solution with contribution at most (1+ε) times the minimal
contribution with probability at least 1 - δ. Though it cannot run in
polynomial time for all instances, it performs extremely fast on various
benchmark datasets. The algorithm solves very large problem instances which are
intractable for exact algorithms (e.g., 10000 solutions in 100 dimensions)
within a few seconds.
Comment: 22 pages, to appear in Theoretical Computer Science
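The core Monte Carlo idea behind such an approximation can be sketched as follows: sample uniformly inside the box dominated by one point and count how often the sample is dominated by no other point. This is a simplified sketch of the sampling step only (the paper's algorithm additionally adapts the sample sizes to reach the stated guarantee); the two-objective three-point instance is ours:

```python
import random

random.seed(2)

def mc_contribution(point, others, samples=50_000):
    """Monte Carlo estimate of the hypervolume contribution of `point`
    (maximization, reference point at the origin): the volume that is
    dominated by `point` but by no point in `others`."""
    d = len(point)
    box_volume = 1.0
    for c in point:
        box_volume *= c
    exclusive = 0
    for _ in range(samples):
        s = [random.uniform(0, c) for c in point]  # uniform in point's box
        if not any(all(s[i] <= q[i] for i in range(d)) for q in others):
            exclusive += 1
    return box_volume * exclusive / samples

# Tiny 2-D Pareto set; the exact contribution of (2, 2) here is 1.0.
est = mc_contribution((2, 2), [(1, 3), (3, 1)])
```

Each sample is a cheap dominance check, which is why this scales to instances that are intractable for exact computation.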
Submodular memetic approximation for multiobjective parallel test paper generation
Parallel test paper generation is a biobjective distributed resource optimization problem, which aims to generate multiple similarly optimal test papers automatically according to multiple user-specified assessment criteria. Generating high-quality parallel test papers is challenging due to its NP-hardness in both collective objective functions. In this paper, we propose a submodular memetic approximation algorithm for solving this problem. The proposed algorithm is an adaptive memetic algorithm (MA), which exploits the submodular property of the collective objective functions to design greedy-based approximation algorithms for enhancing the steps of the multiobjective MA. By synergizing the intensification of the submodular local search mechanism with the diversification of the population-based submodular crossover operator, our algorithm can jointly optimize the total quality maximization objective and the fairness quality maximization objective. Our MA can achieve provably near-optimal solutions in a huge search space of large datasets in polynomial runtime. Performance results on various datasets show that our algorithm drastically outperforms current techniques in terms of paper quality and runtime efficiency.
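The greedy enhancement alluded to above rests on the classical result that greedy selection gives a (1 - 1/e)-approximation for monotone submodular maximization under a cardinality constraint. A minimal sketch with set coverage as the submodular function; the instance is ours, and the paper's actual objectives and operators are considerably more involved:

```python
def greedy_max_coverage(sets, k):
    """Classical greedy for monotone submodular maximization under a
    cardinality constraint: repeatedly add the set with the largest
    marginal gain.  Guarantees a (1 - 1/e) approximation."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max((s for s in range(len(sets)) if s not in chosen),
                   key=lambda s: len(sets[s] - covered),
                   default=None)
        if best is None or not (sets[best] - covered):
            break  # no set left, or no further marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

chosen, covered = greedy_max_coverage([{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}], k=2)
```

The greedy rule picks the largest set first and then the set with the largest marginal gain, covering 7 of the 7 elements with 2 picks here.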
An Exponential Lower Bound for the Runtime of the cGA on Jump Functions
In the first runtime analysis of an estimation-of-distribution algorithm
(EDA) on the multi-modal jump function class, Hasenöhrl and Sutton (GECCO
2018) proved that the runtime of the compact genetic algorithm with suitable
parameter choice on jump functions with high probability is at most polynomial
(in the dimension) if the jump size is at most logarithmic (in the dimension),
and is at most exponential in the jump size if the jump size is
super-logarithmic. The exponential runtime guarantee was achieved with a
hypothetical population size that is also exponential in the jump size.
Consequently, this setting cannot lead to a better runtime.
In this work, we show that any choice of the hypothetical population size
leads to a runtime that, with high probability, is at least exponential in the
jump size. This result might be the first non-trivial exponential lower bound
for EDAs that holds for arbitrary parameter settings.
Comment: To appear in the Proceedings of FOGA 2019. arXiv admin note: text overlap with arXiv:1903.1098
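For reference, the compact genetic algorithm and the jump benchmark can be sketched as follows; the hypothetical population size K, the problem size, seed, and iteration budget are our illustrative choices, not parameters from the analysis:

```python
import random

random.seed(3)

def jump(x, k):
    """Jump_k: like OneMax, but with a valley of width k just below the optimum."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones

def cga(n, K, fitness, max_iters):
    """Compact GA: keep one marginal probability per bit and shift it by
    1/K toward the winner of a two-sample tournament."""
    border = 1.0 / n
    p = [0.5] * n
    for it in range(max_iters):
        x = tuple(1 if random.random() < pi else 0 for pi in p)
        y = tuple(1 if random.random() < pi else 0 for pi in p)
        if fitness(y) > fitness(x):
            x, y = y, x  # make x the winner
        for i in range(n):
            if x[i] != y[i]:
                p[i] += 1.0 / K if x[i] == 1 else -1.0 / K
                p[i] = min(1 - border, max(border, p[i]))  # border restriction
        if all(pi >= 1 - border for pi in p):
            return it + 1, p  # model has converged to the all-ones string
    return max_iters, p

iters, probs = cga(n=10, K=20, fitness=lambda x: jump(x, 2), max_iters=5000)
```

The lower bound above says that no choice of K avoids a runtime exponential in the jump size k.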
On the Runtime of Randomized Local Search and Simple Evolutionary Algorithms for Dynamic Makespan Scheduling
Evolutionary algorithms have been frequently used for dynamic optimization
problems. With this paper, we contribute to the theoretical understanding of
this research area. We present the first computational complexity analysis of
evolutionary algorithms for a dynamic variant of a classical combinatorial
optimization problem, namely makespan scheduling. We study the model of a
strong adversary which is allowed to change one job at regular intervals.
Furthermore, we investigate the setting of random changes. Our results show
that randomized local search and a simple evolutionary algorithm are very
effective in dynamically tracking changes made to the problem instance.
Comment: Conference version appears at IJCAI 201
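The randomized local search scheme analyzed in such studies can be sketched for the two-machine makespan problem as follows; the job list, seed, and step budget are ours, and a dynamic variant would additionally let an adversary change one job's processing time at regular intervals:

```python
import random

random.seed(4)

def makespan(jobs, assign):
    """Two-machine makespan: the load of the fuller machine."""
    load_one = sum(t for t, a in zip(jobs, assign) if a == 1)
    return max(load_one, sum(jobs) - load_one)

def rls_step(jobs, assign):
    """Randomized local search: move one random job to the other
    machine and keep the change if the makespan does not get worse."""
    i = random.randrange(len(jobs))
    cand = assign[:i] + (1 - assign[i],) + assign[i + 1:]
    return cand if makespan(jobs, cand) <= makespan(jobs, assign) else assign

jobs = [3, 5, 6, 4, 2, 8]           # total 28, optimal makespan 14
assign = tuple(random.randint(0, 1) for _ in jobs)
for _ in range(500):
    # a dynamic variant would change one job's processing time
    # here at regular intervals and let the search re-adapt
    assign = rls_step(jobs, assign)
```

Accepting equally good moves lets the search drift across plateaus, which is what allows it to track an instance that keeps changing.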