On the effective and automatic enumeration of polynomial permutation classes
We describe an algorithm, implemented in Python, which can enumerate any
permutation class with polynomial enumeration from a structural description of
the class. In particular, this allows us to find formulas for the number of
permutations of length n which can be obtained by a finite number of block
sorting operations (e.g., reversals, block transpositions, cut-and-paste
moves).
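The block sorting operations named in the abstract are easy to illustrate concretely. The sketch below (not the paper's implementation; the index arguments are hypothetical examples) applies each operation to a permutation stored as a Python list:

```python
def reversal(perm, i, j):
    """Reverse the block perm[i:j] in place within the permutation."""
    return perm[:i] + perm[i:j][::-1] + perm[j:]

def block_transposition(perm, i, j, k):
    """Swap the adjacent blocks perm[i:j] and perm[j:k]."""
    return perm[:i] + perm[j:k] + perm[i:j] + perm[k:]

def cut_and_paste(perm, i, j, k, reverse=False):
    """Cut the block perm[i:j] (optionally reversing it) and
    reinsert it at position k of the remaining sequence."""
    block = perm[i:j][::-1] if reverse else perm[i:j]
    rest = perm[:i] + perm[j:]
    return rest[:k] + block + rest[k:]

p = [3, 1, 4, 2]
print(reversal(p, 1, 3))                # [3, 4, 1, 2]
print(block_transposition(p, 0, 2, 4))  # [4, 2, 3, 1]
print(cut_and_paste(p, 1, 3, 0))        # [1, 4, 3, 2]
```

A permutation is "obtained by a finite number of block sorting operations" when some sequence of such moves transforms the identity into it (or vice versa).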
On Unconstrained Quasi-Submodular Function Optimization
With the extensive application of submodularity, its generalizations are
constantly being proposed. However, most of them are tailored for special
problems. In this paper, we focus on quasi-submodularity, a universal
generalization, which satisfies weaker properties than submodularity but still
enjoys favorable performance in optimization. Similar to the diminishing return
property of submodularity, we first define a corresponding property called the
{\em single sub-crossing}, then we propose two algorithms for unconstrained
quasi-submodular function minimization and maximization, respectively. The proposed algorithms iteratively shrink the underlying lattice and guarantee that the objective function value strictly decreases (for minimization) or increases (for maximization) after each iteration. Moreover, every local and global optimum is guaranteed to lie in the reduced lattice. Experimental results verify the effectiveness and efficiency of the proposed algorithms at lattice reduction.
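For contrast with the weaker single sub-crossing property, the diminishing-returns characterization of ordinary submodularity that the abstract starts from can be checked exhaustively on a small ground set. The coverage function below is a standard submodular example chosen for illustration, not one from the paper:

```python
from itertools import combinations

# Hypothetical ground set: each element selects a set to cover.
GROUND = ["a", "b", "c"]
SETS = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}}

def coverage(S):
    """f(S) = number of points covered by the chosen sets (submodular)."""
    return len(set().union(*(SETS[s] for s in S))) if S else 0

def marginal(f, S, x):
    """Marginal gain of adding x to S."""
    return f(S | {x}) - f(S)

def has_diminishing_returns(f, ground):
    """Check f(A+x) - f(A) >= f(B+x) - f(B) for all A <= B, x not in B."""
    subsets = [set(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    return all(marginal(f, A, x) >= marginal(f, B, x)
               for A in subsets for B in subsets if A <= B
               for x in ground if x not in B)

print(has_diminishing_returns(coverage, GROUND))  # True
```

Quasi-submodular functions relax this inequality, which is why the paper's algorithms rely on lattice reduction rather than on the marginal-gain arguments used for submodular optimization.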
Hardest Monotone Functions for Evolutionary Algorithms
The study of hardest and easiest fitness landscapes is an active area of
research. Recently, Kaufmann, Larcher, Lengler and Zou conjectured that for the self-adjusting $(1,\lambda)$-EA, Adversarial Dynamic BinVal (ADBV) is the hardest dynamic monotone function to optimize. We introduce the function Switching Dynamic BinVal (SDBV), which coincides with ADBV whenever the number of remaining zeros in the search point falls below a threshold depending on the dimension $n$ of the search space. We show, using a combinatorial argument, that SDBV is drift-minimizing among the class of dynamic monotone functions for any mutation rate. Our
construction provides the first explicit example of an instance of the
partially-ordered evolutionary algorithm (PO-EA) model with parameterized
pessimism introduced by Colin, Doerr and F\'erey, building on work of Jansen.
We further show that the $(1,\lambda)$-EA optimizes SDBV in $\Theta(n^{3/2})$ generations. Our simulations demonstrate matching runtimes for both static and self-adjusting variants of the algorithm. We further show, using an
example of fixed dimension, that drift-minimization does not imply maximal runtime.
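The Dynamic BinVal family underlying ADBV and SDBV can be sketched in a few lines. This is a hedged illustration of the generic dynamic-weights idea only; the adversarial (ADBV) and switching (SDBV) weight schedules from the paper are not reproduced here:

```python
import random

def dynamic_binval(x, perm):
    """Binary value of bit string x when bit i carries weight 2**perm[i]."""
    return sum(2 ** perm[i] for i, bit in enumerate(x) if bit)

# Each generation re-draws the weight permutation, making the function
# dynamic; it stays monotone because flipping any 0 to 1 raises the value.
rng = random.Random(0)
n = 4
perm = list(range(n))
rng.shuffle(perm)

x = [1, 0, 1, 0]
y = [1, 1, 1, 0]  # y dominates x bitwise
assert dynamic_binval(y, perm) > dynamic_binval(x, perm)  # monotonicity
```

ADBV and SDBV differ from this plain version in how the weights are chosen each generation, which is exactly what makes them hard for the evolutionary algorithms studied above.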
When Does Hillclimbing Fail on Monotone Functions: An entropy compression argument
Hillclimbing is an essential part of any optimization algorithm. An important benchmark for hillclimbing algorithms on pseudo-Boolean functions is the class of (strictly) monotone functions, on which a surprising number of hillclimbers fail to be efficient. For example, the $(1+1)$-Evolutionary Algorithm is a standard hillclimber which flips each bit independently with probability $c/n$ in each round. Perhaps surprisingly, this algorithm shows a phase transition: it optimizes any monotone pseudo-Boolean function in quasilinear time if $c < 1$, but there are monotone functions for which the algorithm needs exponential time if the constant $c$ is sufficiently large. But so far it was unclear whether the threshold is at $c = 1$.
In this paper we show how Moser's entropy compression argument can be adapted
to this situation, that is, we show that a long runtime would allow us to
encode the random steps of the algorithm with fewer bits than their entropy.
Thus there exists a $c_0 > 1$ such that for all $c \le c_0$ the $(1+1)$-Evolutionary Algorithm with rate $c/n$ finds the optimum in $O(n \log n)$ steps in expectation.
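The $(1+1)$-EA described above is easy to state in code. Below is a minimal sketch with OneMax as a stand-in monotone function; it is not the hard instance from the paper, and the parameter values are illustrative:

```python
import random

def one_plus_one_ea(f, n, c=1.0, rng=None, max_steps=100_000):
    """(1+1)-EA: flip each bit independently with probability c/n,
    accept the offspring if it is at least as fit as the parent."""
    rng = rng or random.Random()
    x = [rng.randint(0, 1) for _ in range(n)]
    for step in range(1, max_steps + 1):
        if sum(x) == n:  # all-ones is the optimum of any strictly monotone f
            return step
        y = [b ^ (rng.random() < c / n) for b in x]
        if f(y) >= f(x):
            x = y
    return None

steps = one_plus_one_ea(sum, 30, c=1.0, rng=random.Random(42))
```

For $c < 1$ the abstract's quasilinear bound applies; the hard monotone functions arise only for larger constants $c$, which is why the location of the threshold matters.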
Runtime Analysis of Quality Diversity Algorithms
Quality diversity~(QD) is a branch of evolutionary computation that gained
increasing interest in recent years. The Map-Elites QD approach defines a
feature space, i.e., a partition of the search space, and stores the best
solution for each cell of this space. We study a simple QD algorithm in the
context of pseudo-Boolean optimisation on the ``number of ones'' feature space, where the $i$-th cell stores the best solution amongst those whose number of ones lies in the $i$-th interval of a partition of $\{0, \dots, n\}$ into blocks of size $k$, for a granularity parameter $k$. We give a tight bound on the expected time until all cells are covered, for arbitrary fitness functions and all $k$, and analyse the expected
optimisation time of QD on \textsc{OneMax} and other problems whose structure
aligns favourably with the feature space. On combinatorial problems we show
that QD efficiently finds a $(1-1/e)$-approximation when maximising any monotone submodular function with a single uniform cardinality constraint.
Defining the feature space as the number of connected components of a connected
graph, we show that QD finds a minimum spanning tree in expected polynomial time.
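The Map-Elites-style QD algorithm on the ``number of ones'' feature space can be sketched directly; the version below uses granularity $k = 1$ (one cell per number of ones) and standard-bit-rate mutation, and is an illustration rather than the exact algorithm analysed in the paper:

```python
import random

def qd_number_of_ones(f, n, steps, rng):
    """Map-Elites-style QD sketch with k = 1: one archive cell per
    number of ones; each cell keeps the fittest solution seen there."""
    archive = {}  # cell (number of ones) -> best solution in that cell
    x0 = [rng.randint(0, 1) for _ in range(n)]
    archive[sum(x0)] = x0
    for _ in range(steps):
        # Pick a uniformly random occupied cell, mutate its elite.
        parent = archive[rng.choice(list(archive))]
        child = [b ^ (rng.random() < 1 / n) for b in parent]
        cell = sum(child)
        if cell not in archive or f(child) > f(archive[cell]):
            archive[cell] = child
    return archive

archive = qd_number_of_ones(sum, 8, 20_000, random.Random(1))
```

The "time until all cells are covered" studied in the abstract is the number of iterations until `archive` has an entry for every cell, here all $n + 1$ possible numbers of ones.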
Self-adjusting Population Sizes for the $(1,\lambda)$-EA on Monotone Functions
We study the $(1,\lambda)$-EA with mutation rate $c/n$ for $c \le 1$, where the population size is adaptively controlled with a $(1:s+1)$-success rule. Recently, Hevia Fajardo and Sudholt have shown that this setup with $c=1$ is efficient on \onemax when the success parameter $s$ is small, but inefficient if $s$ is large. Surprisingly,
the hardest part is not close to the optimum, but rather at linear distance. We
show that this behavior is not specific to \onemax. If $s$ is small, then the algorithm is efficient on all monotone functions, and if $s$ is large, then it needs superpolynomial time on all monotone functions. In the former case, we show explicit upper bounds on the number of generations and on the number of function evaluations. We also show formally that
optimization is always fast, regardless of $s$, if the algorithm starts in proximity of the optimum. All results also hold in a dynamic environment where the fitness function changes in each generation.
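The self-adjusting scheme described above can be sketched as follows. This is a hedged illustration of a $(1,\lambda)$-EA with a $(1:s+1)$-style success rule (shrink $\lambda$ after a success, grow it after a failure); the constants `F` and `s` and the OneMax benchmark are illustrative choices, not the paper's analysis:

```python
import random

def self_adjusting_one_comma_lambda_ea(f, n, c=1.0, s=0.5, F=1.5,
                                       rng=None, max_gens=50_000):
    """(1,lambda)-EA sketch: lambda offspring per generation, comma
    selection (the parent is always replaced by the best offspring),
    and lambda adapted by a (1:s+1)-style success rule."""
    rng = rng or random.Random()
    x = [rng.randint(0, 1) for _ in range(n)]
    lam = 1.0
    for gen in range(1, max_gens + 1):
        if sum(x) == n:  # optimum of any strictly monotone function
            return gen
        offspring = [[b ^ (rng.random() < c / n) for b in x]
                     for _ in range(max(1, round(lam)))]
        best = max(offspring, key=f)
        success = f(best) > f(x)
        x = best  # comma selection: parent replaced even on failure
        # Success rule: shrink lambda on success, grow it on failure.
        lam = max(1.0, lam / F if success else lam * F ** (1 / s))
    return None

gens = self_adjusting_one_comma_lambda_ea(sum, 20, rng=random.Random(3))
```

With a small success parameter (here `s = 0.5`), $\lambda$ grows quickly after failures, which keeps comma selection from losing too much fitness; this is the regime the abstract identifies as efficient.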