
    Runtime Analysis of Quality Diversity Algorithms

    Quality diversity (QD) is a branch of evolutionary computation that has gained increasing interest in recent years. The Map-Elites QD approach defines a feature space, i.e., a partition of the search space, and stores the best solution for each cell of this space. We study a simple QD algorithm in the context of pseudo-Boolean optimisation on the "number of ones" feature space, where the $i$th cell stores the best solution amongst those with a number of ones in $[(i-1)k, ik-1]$. Here $k$ is a granularity parameter $1 \leq k \leq n+1$. We give a tight bound on the expected time until all cells are covered for arbitrary fitness functions and for all $k$, and analyse the expected optimisation time of QD on OneMax and other problems whose structure aligns favourably with the feature space. On combinatorial problems we show that QD efficiently finds a $(1-1/e)$-approximation when maximising any monotone submodular function with a single uniform cardinality constraint. Defining the feature space as the number of connected components of a connected graph, we show that QD finds a minimum spanning tree in expected polynomial time.
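
    The algorithm studied here is simple enough to sketch. The following Python illustration (our own, assuming standard bit mutation and uniform parent selection from the archive; names are not the paper's) maintains one elite per cell of the "number of ones" feature space:

        import random

        def qd_number_of_ones(f, n, k, budget):
            """Map-Elites-style QD on bit strings: cell i (1-indexed) keeps
            the best solution whose number of ones lies in [(i-1)k, ik-1]."""
            elites = {}                            # cell index -> (bits, fitness)

            def cell(bits):
                return sum(bits) // k              # feature: number of ones

            x = [random.randint(0, 1) for _ in range(n)]
            elites[cell(x)] = (x, f(x))
            for _ in range(budget):
                # Pick a uniform random elite; flip each bit with prob. 1/n.
                parent, _ = random.choice(list(elites.values()))
                y = [b ^ (random.random() < 1.0 / n) for b in parent]
                c = cell(y)
                if c not in elites or f(y) > elites[c][1]:
                    elites[c] = (y, f(y))          # new or improved elite
            return elites

        # Example: OneMax, i.e. maximising the number of ones.
        archive = qd_number_of_ones(lambda x: sum(x), n=20, k=3, budget=5000)
        print(len(archive), max(fit for _, fit in archive.values()))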

    On the impact of the cutoff time on the performance of algorithm configurators

    Algorithm configurators are automated methods to optimise the parameters of an algorithm for a class of problems. We evaluate the performance of a simple random local search configurator (ParamRLS) for tuning the neighbourhood size k of the RLS_k algorithm. We measure performance as the expected number of configuration evaluations required to identify the optimal value for the parameter. We analyse the impact of the cutoff time κ (the time spent evaluating a configuration for a problem instance) on the expected number of configuration evaluations required to find the optimal parameter value, where we compare configurations using either best found fitness values (ParamRLS-F) or optimisation times (ParamRLS-T). We consider tuning RLS_k for a variant of the Ridge function class (Ridge*), where the performance of each parameter value does not change during the run, and for the OneMax function class, where longer runs favour smaller k. We rigorously prove that ParamRLS-F efficiently tunes RLS_k for Ridge* for any κ, while ParamRLS-T requires at least quadratic κ. For OneMax, ParamRLS-F identifies k = 1 as optimal with linear κ, while ParamRLS-T requires a κ of at least Ω(n log n). For smaller κ, ParamRLS-F identifies that k > 1 performs better, while ParamRLS-T returns k chosen uniformly at random.
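
    A rough sketch of this setup, assuming a ±1 local-search step in parameter space and a single run per comparison (both simplifications; the actual tuner may average over several runs and instances):

        import random

        def rls_k(f, n, k, cutoff):
            """RLS_k: flip exactly k distinct, uniformly chosen bits per step;
            return the fitness of the final (best-so-far) search point."""
            x = [random.randint(0, 1) for _ in range(n)]
            for _ in range(cutoff):
                y = x[:]
                for i in random.sample(range(n), k):
                    y[i] ^= 1
                if f(y) >= f(x):                  # elitist, ties accepted
                    x = y
            return f(x)

        def param_rls_f(f, n, k_max, cutoff, steps):
            """ParamRLS-F-style tuning sketch: local search over k, comparing a
            neighbouring value by best found fitness under the cutoff."""
            k = random.randint(1, k_max)
            for _ in range(steps):
                k_new = max(1, min(k_max, k + random.choice([-1, 1])))
                # One run per configuration here; the real tuner may use more.
                if rls_k(f, n, k_new, cutoff) > rls_k(f, n, k, cutoff):
                    k = k_new
            return k

        # On OneMax, a generous cutoff should favour k = 1.
        print(param_rls_f(lambda x: sum(x), n=50, k_max=5, cutoff=500, steps=50))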

    Analysis of the (1+1) EA on LeadingOnes with Constraints

    Understanding how evolutionary algorithms perform on constrained problems has gained increasing attention in recent years. In this paper, we study how evolutionary algorithms optimize constrained versions of the classical LeadingOnes problem. We first provide a runtime analysis for the classical (1+1) EA on the LeadingOnes problem with a deterministic cardinality constraint, giving $\Theta(n(n-B)\log(B) + n^2)$ as the tight bound. Our results show that the behaviour of the algorithm is highly dependent on the constraint bound of the uniform constraint. Afterwards, we consider the problem in the context of stochastic constraints and provide insights, using experimental studies, on how the $(\mu+1)$ EA is able to deal with these constraints in a sampling-based setting.
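
    A minimal sketch of the analysed setting, assuming infeasible offspring are simply rejected (one common constraint-handling choice; the paper's exact mechanism may differ):

        import random

        def leading_ones(x):
            """Length of the prefix of consecutive one-bits."""
            count = 0
            for bit in x:
                if bit == 0:
                    break
                count += 1
            return count

        def one_plus_one_ea(n, B, max_iters):
            """(1+1) EA on LeadingOnes under the cardinality constraint
            sum(x) <= B; infeasible offspring are rejected."""
            x = [0] * n                           # the all-zeros string is feasible
            for _ in range(max_iters):
                # Standard bit mutation: flip each bit independently w.p. 1/n.
                y = [b ^ (random.random() < 1.0 / n) for b in x]
                if sum(y) <= B and leading_ones(y) >= leading_ones(x):
                    x = y
                if leading_ones(x) == min(B, n):  # constrained optimum reached
                    break
            return x

        x = one_plus_one_ea(n=30, B=10, max_iters=200000)
        print(leading_ones(x), sum(x))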

    Theoretical runtime bounds for information spreading and a new vehicle routing algorithm


    Exploiting machine learning for combinatorial problem solving and optimisation

    This dissertation presents a number of contributions to the field of solver portfolios, in particular for combinatorial search problems. We propose a novel hierarchical portfolio which does not rely on a single problem representation, but may transform the problem to an alternate representation using a portfolio of encodings; additionally, a portfolio of solvers is employed for each of the representations. We extend this multi-representation portfolio for discrete optimisation tasks in the graphical models domain, realising a portfolio which won the UAI 2014 Inference Competition. We identify a fundamental flaw in empirical evaluations of many portfolio and runtime prediction methods: the fact that solvers exhibit a runtime distribution has not been considered in the setting of runtime prediction, solver portfolios, or automated configuration systems; to date, these methods have taken a single sample as ground truth. We demonstrate through a large empirical analysis that the outcome of empirical competitions can vary, and we provide statistical bounds on such variations. Finally, we consider an elastic solver which capitalises on the runtime distribution of a solver by launching searches in parallel, potentially on thousands of machines. We analyse the impact of the number of cores not only on solution time but also on energy consumption, the challenge being to find an optimal balance between the two. We highlight that although solution time always drops as the number of machines increases, the relation between the number of machines and energy consumption is more complicated. We also develop a prediction model, demonstrating that such insights can be exploited to achieve faster solution times in a more energy-efficient manner.
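
    The elastic-solver trade-off can be illustrated with a toy model (an assumption for illustration, not the dissertation's prediction model): if single-run solve times are i.i.d. samples from a runtime distribution, launching k parallel runs finishes at the minimum of k samples, while energy grows with roughly k times that minimum:

        import random
        import statistics

        def parallel_tradeoff(runtime_samples, k, trials=10000):
            """Estimate expected wall-clock time and energy (core-seconds)
            when k independent runs race and the first to finish wins."""
            times, energies = [], []
            for _ in range(trials):
                t = min(random.choice(runtime_samples) for _ in range(k))
                times.append(t)                   # wall-clock: first finisher
                energies.append(k * t)            # all k cores busy until then
            return statistics.mean(times), statistics.mean(energies)

        # Heavy-tailed toy runtime distribution (log-normal samples).
        samples = [random.lognormvariate(0, 1.5) for _ in range(100000)]
        for k in (1, 4, 16, 64):
            t, e = parallel_tradeoff(samples, k)
            print(k, round(t, 3), round(e, 3))

    With a heavy-tailed distribution, expected time keeps falling as k grows while expected energy eventually rises, mirroring the trade-off described above.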

    Markov-Chain-Based Heuristics for the Feedback Vertex Set Problem for Digraphs

    A feedback vertex set (FVS) of an undirected or directed graph G=(V, A) is a set F such that G-F is acyclic. The minimum feedback vertex set problem asks for an FVS of G of minimum cardinality, whereas the weighted minimum feedback vertex set problem consists of determining an FVS F of minimum weight w(F) given a real-valued weight function w. Both problems are NP-hard [Karp72]. Nevertheless, they have been found to have applications in many fields, so one is naturally interested in approximation algorithms. While most of the existing approximation algorithms for feedback vertex set problems rely on local properties of G only, this thesis explores strategies that use global information about G in order to determine good solutions. The pioneering work in this direction was initiated by Speckenmeyer [Speckenmeyer89], who demonstrated the use of Markov chains for determining low-cardinality FVSs. Based on his ideas, new approximation algorithms are developed for both the unweighted and the weighted minimum feedback vertex set problem for digraphs. According to the experimental results presented in this thesis, these new algorithms outperform all other existing approximation algorithms. An additional contribution, not related to Markov chains, is the identification of a new class of digraphs G=(V, A) which permit the determination of an optimum FVS in time O(|V|^4). This class strictly encompasses the completely contractible graphs [Levy/Low88].
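
    The general flavour of such a heuristic can be sketched as follows, assuming a PageRank-style random walk with restarts and greedy removal of the most-visited vertex (our illustration of the idea only, not the thesis's algorithms):

        def is_acyclic(adj):
            """DFS cycle check for a digraph given as {v: set(successors)}."""
            WHITE, GRAY, BLACK = 0, 1, 2
            color = {v: WHITE for v in adj}

            def dfs(u):
                color[u] = GRAY
                for w in adj[u]:
                    if color[w] == GRAY or (color[w] == WHITE and dfs(w)):
                        return True               # back edge: cycle found
                color[u] = BLACK
                return False

            return not any(color[v] == WHITE and dfs(v) for v in adj)

        def stationary(adj, damping=0.85, iters=100):
            """Approximate the stationary distribution of a random walk with
            restarts (PageRank-style), so sink vertices cause no trouble."""
            n = len(adj)
            p = {v: 1.0 / n for v in adj}
            for _ in range(iters):
                q = {v: (1 - damping) / n for v in adj}
                for u in adj:
                    targets = adj[u] if adj[u] else adj.keys()
                    share = damping * p[u] / len(targets)
                    for w in targets:
                        q[w] += share
                p = q
            return p

        def markov_fvs(adj):
            """Greedily delete the vertex where the walk spends the most
            time, until the digraph becomes acyclic."""
            adj = {v: set(ws) for v, ws in adj.items()}
            fvs = []
            while not is_acyclic(adj):
                p = stationary(adj)
                fvs.append(max(p, key=p.get))
                adj.pop(fvs[-1])
                for ws in adj.values():
                    ws.discard(fvs[-1])
            return fvs

        # Picks a vertex on the 1->2->3->1 cycle, breaking it in one step.
        print(markov_fvs({1: {2}, 2: {3}, 3: {1, 4}, 4: set()}))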

    Hypergraph Partitioning in the Cloud

    The thesis investigates the partitioning and load balancing problem, which has many applications in High Performance Computing (HPC). The application to be partitioned is described with a graph or hypergraph. The latter is of greater interest as hypergraphs, compared to graphs, have a more general structure and can be used to model more complex relationships between groups of objects, such as non-symmetric dependencies. Optimal graph and hypergraph partitioning is known to be NP-hard, but good polynomial-time heuristic algorithms have been proposed. In this thesis, we propose two multi-level hypergraph partitioning algorithms based on rough set clustering techniques. The first, a serial algorithm, obtains high-quality partitionings and improves the partitioning cut by up to 71% compared to state-of-the-art serial hypergraph partitioning algorithms. Furthermore, the capacity of serial algorithms is limited due to the rapid growth of problem sizes of distributed applications. Consequently, we also propose a parallel hypergraph partitioning algorithm. Considering the generality of the hypergraph model, designing a parallel algorithm is difficult, and the available parallel hypergraph partitioners offer less scalability compared to their graph counterparts. The issue is twofold: the parallel algorithm and the complexity of the hypergraph structure. Our parallel algorithm provides a trade-off between global and local vertex clustering decisions. By employing novel techniques and approaches, it achieves better scalability than the state-of-the-art parallel hypergraph partitioner in the Zoltan tool on a set of benchmarks, especially those with irregular structure. Furthermore, recent advances in cloud computing and the services it provides have led to a trend of moving HPC and large-scale distributed applications into the cloud. Despite its advantages, some aspects of the cloud, such as limited network resources, present a challenge to running communication-intensive applications and make them non-scalable in the cloud. While hypergraph partitioning is proposed as a solution for decreasing the communication overhead within parallel distributed applications, it can also offer advantages for running these applications in the cloud. The partitioning is usually done as a pre-processing step before running the parallel application. As parallel hypergraph partitioning itself is a communication-intensive operation, running it in the cloud is hard and suffers from poor scalability. The thesis therefore also investigates the scalability of parallel hypergraph partitioning algorithms in the cloud, the challenges they present, and proposes solutions to improve the cost/performance ratio of running the partitioning problem in the cloud. Our algorithms are implemented as a new hypergraph partitioning package within Zoltan, an open-source Linux-based toolkit for parallel partitioning, load balancing and data management designed at Sandia National Labs. The algorithms are known as the FEHG and PFEHG algorithms.
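
    For context, the objectives such partitioners minimise are easy to state. A small sketch of the two standard hypergraph cut metrics (illustrative code, unrelated to the FEHG/PFEHG implementation):

        def cut_metrics(hyperedges, part):
            """Hyperedge cut and (connectivity - 1) for a partition, given
            hyperedges as vertex lists and part as a vertex -> block map."""
            cut = conn = 0
            for edge in hyperedges:
                blocks = {part[v] for v in edge}
                if len(blocks) > 1:
                    cut += 1                      # edge spans several blocks
                conn += len(blocks) - 1           # communication volume proxy
            return cut, conn

        # Toy hypergraph: 4 vertices, 3 hyperedges, 2 blocks.
        print(cut_metrics([[0, 1, 2], [2, 3], [0, 3]],
                          {0: 0, 1: 0, 2: 1, 3: 1}))  # -> (2, 2)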