
    On the effective and automatic enumeration of polynomial permutation classes

    We describe an algorithm, implemented in Python, which can enumerate any permutation class with polynomial enumeration from a structural description of the class. In particular, this allows us to find formulas for the number of permutations of length n which can be obtained by a finite number of block sorting operations (e.g., reversals, block transpositions, cut-and-paste moves).
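The paper's algorithm is not reproduced here, but the flavour of a polynomially enumerated class can be illustrated with a brute-force sketch: the permutations obtainable from the identity by at most one reversal are counted by the polynomial n(n−1)/2 + 1 (one permutation per reversed block of length at least two, plus the identity itself). The function name and the brute-force approach are illustrative assumptions, not the authors' implementation.

```python
def one_reversal_class(n):
    """Brute force: all permutations of 0..n-1 reachable from the
    identity by at most one block reversal."""
    ident = tuple(range(n))
    reached = {ident}
    for i in range(n):
        for j in range(i + 2, n + 1):   # reverse block ident[i:j], length >= 2
            reached.add(ident[:i] + ident[i:j][::-1] + ident[j:])
    return reached

# The class is counted by the polynomial n*(n-1)/2 + 1.
for m in range(1, 8):
    assert len(one_reversal_class(m)) == m * (m - 1) // 2 + 1
```

Each interval of length at least two yields a distinct permutation, which is why the count is exactly the number of such intervals plus one.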

    On Unconstrained Quasi-Submodular Function Optimization

    With the extensive application of submodularity, its generalizations are constantly being proposed. However, most of them are tailored to special problems. In this paper, we focus on quasi-submodularity, a universal generalization which satisfies weaker properties than submodularity but still enjoys favorable performance in optimization. Analogous to the diminishing-return property of submodularity, we first define a corresponding property called the single sub-crossing; we then propose two algorithms for unconstrained quasi-submodular function minimization and maximization, respectively. The proposed algorithms return the reduced lattices in O(n) iterations and guarantee that the objective function values strictly monotonically increase or decrease after each iteration. Moreover, all local and global optima are contained in the reduced lattices. Experimental results verify the effectiveness and efficiency of the proposed algorithms on lattice reduction. Comment: 11 pages.
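As a point of reference for the diminishing-return property the abstract mentions, here is a minimal brute-force check of submodularity on a small ground set. The paper's single sub-crossing property is weaker than this, and its lattice-reduction algorithms are not reproduced; the coverage function and all helper names are illustrative assumptions.

```python
from itertools import combinations

def coverage(S, sets):
    """Coverage function: number of elements covered by the chosen sets."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def is_submodular(f, ground):
    """Check diminishing returns: f(A+x) - f(A) >= f(B+x) - f(B)
    for all A subseteq B and x outside B (brute force, small ground sets)."""
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    for A in subsets:
        for B in subsets:
            if not A <= B:
                continue
            for x in ground - B:
                if f(A | {x}) - f(A) < f(B | {x}) - f(B):
                    return False
    return True

sets = {0: {1, 2}, 1: {2, 3}, 2: {3, 4, 5}}
ground = frozenset(sets)
assert is_submodular(lambda S: coverage(S, sets), ground)  # coverage is submodular
```

A quasi-submodular function would pass a correspondingly weaker test, which is what makes the class strictly larger.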

    Hardest Monotone Functions for Evolutionary Algorithms

    The study of hardest and easiest fitness landscapes is an active area of research. Recently, Kaufmann, Larcher, Lengler and Zou conjectured that for the self-adjusting (1,λ)-EA, Adversarial Dynamic BinVal (ADBV) is the hardest dynamic monotone function to optimize. We introduce the function Switching Dynamic BinVal (SDBV), which coincides with ADBV whenever the number of remaining zeros in the search point is strictly less than n/2, where n denotes the dimension of the search space. We show, using a combinatorial argument, that for the (1+1)-EA with any mutation rate p ∈ [0,1], SDBV is drift-minimizing among the class of dynamic monotone functions. Our construction provides the first explicit example of an instance of the partially-ordered evolutionary algorithm (PO-EA) model with parameterized pessimism introduced by Colin, Doerr and Férey, building on work of Jansen. We further show that the (1+1)-EA optimizes SDBV in Θ(n^{3/2}) generations. Our simulations demonstrate matching runtimes for both static and self-adjusting (1,λ)- and (1+λ)-EA. We further show, using an example of fixed dimension, that drift-minimization does not equal maximal runtime.
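The dynamic functions ADBV and SDBV re-weight the bits adversarially in every generation, which is not reproduced here; as a minimal sketch of the underlying optimizer, the following (1+1)-EA on a static BinVal function illustrates the mutation-and-accept loop the runtime bounds refer to. Function names, seeds, and parameter choices are assumptions.

```python
import random

def binval(x):
    """Static BinVal: bit i has weight 2^i.  (The paper's ADBV/SDBV
    re-weight the bits each generation; this sketch keeps them fixed.)"""
    return sum(b << i for i, b in enumerate(x))

def one_plus_one_ea(f, n, p, budget, rng):
    """(1+1)-EA: flip each bit independently with probability p and
    accept the offspring iff it is at least as fit as the parent."""
    x = [rng.randint(0, 1) for _ in range(n)]
    for gen in range(budget):
        if all(x):
            return gen      # all-ones is the optimum of any monotone function
        y = [b ^ (rng.random() < p) for b in x]
        if f(y) >= f(x):
            x = y
    return None

rng = random.Random(0)
n = 30
gens = one_plus_one_ea(binval, n, 1.0 / n, 200_000, rng)
```

Swapping `binval` for a function that re-weights bits between generations turns this into the dynamic setting the paper analyses.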

    When Does Hillclimbing Fail on Monotone Functions: An entropy compression argument

    Hillclimbing is an essential part of any optimization algorithm. An important benchmark for hillclimbing algorithms on pseudo-Boolean functions f: {0,1}^n → ℝ are (strictly) monotone functions, on which a surprising number of hillclimbers fail to be efficient. For example, the (1+1)-Evolutionary Algorithm is a standard hillclimber which flips each bit independently with probability c/n in each round. Perhaps surprisingly, this algorithm shows a phase transition: it optimizes any monotone pseudo-Boolean function in quasilinear time if c < 1, but there are monotone functions for which the algorithm needs exponential time if c > 2.2. So far it was unclear whether the threshold is at c = 1. In this paper we show how Moser's entropy compression argument can be adapted to this situation; that is, we show that a long runtime would allow us to encode the random steps of the algorithm with fewer bits than their entropy. Thus there exists a c_0 > 1 such that for all 0 < c ≤ c_0 the (1+1)-Evolutionary Algorithm with rate c/n finds the optimum in O(n log² n) steps in expectation. Comment: 14 pages, no figures.
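The entropy-compression argument itself is combinatorial, but the algorithm it analyses is easy to state. Below is a minimal sketch of the (1+1)-Evolutionary Algorithm with rate c/n, run on OneMax in the provably quasilinear regime c < 1; OneMax stands in for a generic monotone function, and all names, seeds, and budgets are assumptions.

```python
import random

def run_onemax(n, c, rng, budget=10**6):
    """(1+1)-EA with mutation rate c/n on OneMax; returns the number of
    generations until the all-ones optimum.  OneMax is the easiest
    monotone function, so this only illustrates the c < 1 regime, not
    the hard instances the paper constructs for large c."""
    p = c / n
    x = [rng.randint(0, 1) for _ in range(n)]
    for gen in range(budget):
        if all(x):
            return gen
        y = [b ^ (rng.random() < p) for b in x]
        if sum(y) >= sum(x):
            x = y
    return None

rng = random.Random(1)
gens = run_onemax(100, 0.5, rng)
```

The phase transition concerns what happens when `c` is pushed above 1 on adversarially chosen monotone functions; on OneMax the algorithm is fast for every constant c.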

    Runtime Analysis of Quality Diversity Algorithms

    Quality diversity (QD) is a branch of evolutionary computation that has gained increasing interest in recent years. The Map-Elites QD approach defines a feature space, i.e., a partition of the search space, and stores the best solution for each cell of this space. We study a simple QD algorithm in the context of pseudo-Boolean optimisation on the "number of ones" feature space, where the i-th cell stores the best solution amongst those with a number of ones in [(i−1)k, ik−1]. Here k is a granularity parameter with 1 ≤ k ≤ n+1. We give a tight bound on the expected time until all cells are covered for arbitrary fitness functions and for all k, and analyse the expected optimisation time of QD on OneMax and other problems whose structure aligns favourably with the feature space. On combinatorial problems we show that QD efficiently finds a (1−1/e)-approximation when maximising any monotone submodular function with a single uniform cardinality constraint. Defining the feature space as the number of connected components of a connected graph, we show that QD finds a minimum spanning tree in expected polynomial time.
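A minimal Map-Elites sketch on the "number of ones" feature space, assuming OneMax as the fitness function and a small granularity example; the cell indexing follows the abstract's [(i−1)k, ik−1] convention shifted to 0-based, and all names, seeds, and budgets are illustrative assumptions.

```python
import random

def qd_onemax(n, k, budget, rng):
    """Minimal Map-Elites sketch on OneMax: the cell with 0-based index i
    stores the best solution whose number of ones lies in [ik, (i+1)k - 1]."""
    num_cells = -(-(n + 1) // k)    # ceil((n+1)/k) cells in total
    archive = {}                    # cell index -> best bit string seen
    x0 = tuple(rng.randint(0, 1) for _ in range(n))
    archive[sum(x0) // k] = x0
    for _ in range(budget):
        parent = archive[rng.choice(list(archive))]
        child = tuple(b ^ (rng.random() < 1.0 / n) for b in parent)
        cell = sum(child) // k
        # an empty cell is always filled; otherwise keep the fitter solution
        if cell not in archive or sum(child) > sum(archive[cell]):
            archive[cell] = child
    return archive, num_cells

rng = random.Random(2)
n, k = 20, 11
archive, num_cells = qd_onemax(n, k, 100_000, rng)
```

With k = n+1 the archive collapses to a single cell and the algorithm degenerates to a (1+1)-style hillclimber; smaller k preserves solutions across the whole range of ones-counts.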

    Self-adjusting Population Sizes for the (1,λ)-EA on Monotone Functions

    We study the (1,λ)-EA with mutation rate c/n for c ≤ 1, where the population size is adaptively controlled with the (1:s+1)-success rule. Recently, Hevia Fajardo and Sudholt have shown that this setup with c = 1 is efficient on OneMax for s < 1, but inefficient if s ≥ 18. Surprisingly, the hardest part is not close to the optimum, but rather at linear distance. We show that this behavior is not specific to OneMax. If s is small, then the algorithm is efficient on all monotone functions, and if s is large, then it needs superpolynomial time on all monotone functions. In the former case, for c < 1 we show an O(n) upper bound for the number of generations and O(n log n) for the number of function evaluations, and for c = 1 we show O(n log n) generations and O(n² log log n) evaluations. We also show formally that optimization is always fast, regardless of s, if the algorithm starts in proximity of the optimum. All results also hold in a dynamic environment where the fitness function changes in each generation.
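A sketch of the self-adjusting (1,λ)-EA, assuming the common form of the (1:s+1)-success rule (λ ← λ/F after an improving generation, λ ← λ·F^{1/s} otherwise) and OneMax as the benchmark; the update constant F, the seed, and all names are assumptions rather than the paper's exact setup.

```python
import random

def one_comma_lambda_ea(n, s, F, budget_evals, rng):
    """Self-adjusting (1,λ)-EA on OneMax with the (1:s+1)-success rule.
    Comma selection: the best offspring replaces the parent even if worse."""
    x = [rng.randint(0, 1) for _ in range(n)]
    lam, evals, gens = 1.0, 0, 0
    while evals < budget_evals and not all(x):
        k = max(1, round(lam))
        offspring = [[b ^ (rng.random() < 1.0 / n) for b in x]
                     for _ in range(k)]
        evals += k
        best = max(offspring, key=sum)
        if sum(best) > sum(x):
            lam = max(1.0, lam / F)         # success: shrink the population
        else:
            lam = lam * F ** (1.0 / s)      # failure: grow it
        x = best
        gens += 1
    return gens if all(x) else None

rng = random.Random(3)
gens = one_comma_lambda_ea(60, 0.5, 1.5, 2_000_000, rng)
```

The parameter s controls how aggressively λ grows after failures; the abstract's dichotomy says small s keeps this loop efficient on every monotone function, while large s makes it superpolynomially slow.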