20,157 research outputs found

    Darwinian Data Structure Selection

    Get PDF
    Data structure selection and tuning is laborious but can vastly improve an application's performance and memory footprint. Some data structures share a common interface and enjoy multiple implementations. We call them Darwinian Data Structures (DDS), since we can subject their implementations to survival of the fittest. We introduce ARTEMIS, a multi-objective, cloud-based, search-based optimisation framework that automatically finds optimal, tuned DDS modulo a test suite, then changes an application to use that DDS. ARTEMIS achieves substantial performance improvements for every project in 5 Java projects from the DaCapo benchmark, 8 popular projects, and 30 uniformly sampled projects from GitHub. For execution time, CPU usage, and memory consumption, ARTEMIS finds at least one solution that improves all measures for 86% (37/43) of the projects. The median improvements across the best solutions are 4.8%, 10.1%, and 5.1% for runtime, memory, and CPU usage, respectively. These aggregate results understate ARTEMIS's potential impact: some of the benchmarks it improves are libraries or utility functions. Two examples are gson, a ubiquitous Java serialization framework, and xalan, Apache's XML transformation tool. ARTEMIS improves gson by 16.5%, 1%, and 2.2% for memory, runtime, and CPU; it improves xalan's memory consumption by 23.5%. Every client of these projects will benefit from these performance improvements.
    Comment: 11 pages
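    To make the idea concrete, here is a minimal, hedged sketch of Darwinian selection over data structures (not ARTEMIS itself, which rewrites Java applications): two Python sequence types share the append/pop/iterate interface, so each candidate can be benchmarked against a fixed workload standing in for the test suite and the fittest kept. The workload, candidate set, and the simple lexicographic winner (in place of ARTEMIS's multi-objective search) are all illustrative assumptions.

    ```python
    import time
    import tracemalloc
    from collections import deque

    # Interchangeable implementations of a common sequence interface.
    CANDIDATES = {"list": list, "deque": deque}

    def workload(seq_type):
        """Stand-in test suite: bulk appends, one full iteration, bulk pops."""
        s = seq_type()
        for i in range(100_000):
            s.append(i)
        total = sum(s)
        while s:
            s.pop()
        return total

    def measure(seq_type):
        """Return (runtime seconds, peak bytes) for one candidate."""
        tracemalloc.start()
        t0 = time.perf_counter()
        workload(seq_type)
        elapsed = time.perf_counter() - t0
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        return elapsed, peak

    results = {name: measure(t) for name, t in CANDIDATES.items()}
    # Lexicographic (time, then memory) stands in for true multi-objective search.
    fittest = min(results, key=results.get)
    print(results, "-> fittest:", fittest)
    ```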

    Comparing and Combining Lexicase Selection and Novelty Search

    Full text link
    Lexicase selection and novelty search, two parent selection methods used in evolutionary computation, emphasize exploring the search space more widely than traditional methods such as tournament selection. However, lexicase selection is not explicitly driven to select for novelty in the population, and novelty search suffers from a lack of direction toward a goal, especially in unconstrained, high-dimensional spaces. We combine the strengths of lexicase selection and novelty search by creating a novelty score for each test case and adding those novelty scores to the normal error values used in lexicase selection. We use this new novelty-lexicase selection to solve automatic program synthesis problems, and find it significantly outperforms both novelty search and lexicase selection. Additionally, we find that novelty search has very little success in the problem domain of program synthesis. We explore the effects of each of these methods on population diversity and long-term problem-solving performance, and give evidence to support the hypothesis that novelty-lexicase selection resists converging to local optima better than lexicase selection.
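    A hedged sketch of how the combination could look, assuming numeric program outputs: lexicase filtering runs as usual, but each individual's list of per-case errors is extended with a per-case novelty value (negated so that lower is better throughout). The novelty measure used here, mean distance to the k nearest neighbours' outputs on the same case, and the choice to append the scores as extra cases rather than sum them into the errors, are illustrative assumptions, not the paper's exact formulation.

    ```python
    import random

    def novelty_per_case(outputs, case, k=2):
        """Novelty of every individual on one test case: mean distance from its
        output to its k nearest neighbours' outputs on that same case."""
        vals = [o[case] for o in outputs]
        scores = []
        for v in vals:
            dists = sorted(abs(v - w) for w in vals)
            neigh = dists[1:k + 1]          # skip the zero distance to self
            scores.append(sum(neigh) / len(neigh))
        return scores

    def novelty_lexicase_select(pop, errors, outputs):
        """pop[i]: individual; errors[i][c]: error on case c; outputs[i][c]: raw output."""
        num_cases = len(errors[0])
        novelty = [novelty_per_case(outputs, c) for c in range(num_cases)]
        # Each individual now has 2*num_cases lower-is-better values:
        # its errors followed by its negated novelty scores.
        combined = [errors[i] + [-novelty[c][i] for c in range(num_cases)]
                    for i in range(len(pop))]
        survivors = list(range(len(pop)))
        for case in random.sample(range(2 * num_cases), 2 * num_cases):
            best = min(combined[i][case] for i in survivors)
            survivors = [i for i in survivors if combined[i][case] == best]
            if len(survivors) == 1:
                break
        return pop[random.choice(survivors)]

    pop = ["prog_a", "prog_b", "prog_c"]
    errors = [[0.0, 2.0], [1.0, 0.0], [1.0, 1.0]]
    outputs = [[0.0, 5.0], [1.0, 3.0], [9.0, 3.1]]
    print(novelty_lexicase_select(pop, errors, outputs))
    ```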

    Porcellio scaber algorithm (PSA) for solving constrained optimization problems

    Full text link
    In this paper, we extend a bio-inspired algorithm called the Porcellio scaber algorithm (PSA) to solve constrained optimization problems, including a constrained mixed discrete-continuous nonlinear optimization problem. Our extensive experimental results on benchmark optimization problems show that the PSA performs better than many existing methods and algorithms. The results indicate that the PSA is a promising algorithm for constrained optimization.
    Comment: 6 pages, 1 figure
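    The abstract does not spell out how PSA handles constraints, so the sketch below shows the standard penalty-function route by which any unconstrained metaheuristic (PSA included) can be extended to constrained problems: constraint violations are charged to the objective, and the original search machinery is reused unchanged. The penalty weight and the toy problem are illustrative assumptions, not the paper's exact formulation.

    ```python
    def penalised(f, ineq_constraints, rho=1e6):
        """Wrap objective f(x) so that violating any g(x) <= 0 constraint
        incurs a quadratic penalty; the metaheuristic then minimises this."""
        def wrapped(x):
            violation = sum(max(0.0, g(x)) ** 2 for g in ineq_constraints)
            return f(x) + rho * violation
        return wrapped

    # Toy problem: minimise (x - 2)^2 subject to x >= 3, written as 3 - x <= 0.
    f = lambda x: (x[0] - 2.0) ** 2
    g = [lambda x: 3.0 - x[0]]
    fitness = penalised(f, g)
    print(fitness([3.0]), fitness([2.0]))  # feasible point now beats infeasible one
    ```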

    A Hybrid Differential Evolution Approach to Designing Deep Convolutional Neural Networks for Image Classification

    Full text link
    Convolutional Neural Networks (CNNs) have demonstrated their superiority in image classification, and evolutionary computation (EC) methods have recently surged in popularity for automatically designing CNN architectures, sparing the tedious work of designing CNNs by hand. In this paper, a new hybrid differential evolution (DE) algorithm with a newly added crossover operator, named DECNN, is proposed to evolve CNN architectures of any length. The proposed DECNN method contains three new ideas. First, an existing effective encoding scheme is refined to cater for variable-length CNN architectures; second, new mutation and crossover operators are developed for variable-length DE to optimise the hyperparameters of CNNs; finally, a second crossover is introduced to evolve the depth of the CNN architectures. The proposed algorithm is tested on six widely used benchmark datasets and compared to 12 state-of-the-art methods, showing that it is strongly competitive with the state of the art. Furthermore, the proposed method is also compared with IPPSO, a particle swarm optimisation method with a similar encoding strategy, and DECNN outperforms IPPSO in terms of accuracy.
    Comment: Accepted by The Australasian Joint Conference on Artificial Intelligence 2018
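    The abstract names the three ingredients but not their exact form, so the following is a hedged sketch of the general shape only: a CNN is encoded as a variable-length list of layer parameter vectors, DE/rand/1 mutation operates layer-wise over the common prefix of three parents, and a second, tail-swapping crossover changes depth. The layer encoding (filter count and kernel size), rates, and cut points are illustrative assumptions, not the paper's DECNN operators.

    ```python
    import random

    def random_layer():
        # Illustrative conv-layer encoding: [number of filters, kernel size].
        return [random.randint(16, 128), random.choice([3, 5, 7])]

    def random_cnn(min_len=2, max_len=8):
        return [random_layer() for _ in range(random.randint(min_len, max_len))]

    def de_mutate(a, b, c, F=0.5):
        """Classic DE/rand/1 donor, applied layer-wise over the common prefix
        of three variable-length parents."""
        depth = min(len(a), len(b), len(c))
        donor = [[a[i][j] + F * (b[i][j] - c[i][j]) for j in range(2)]
                 for i in range(depth)]
        return [[max(1, round(v)) for v in layer] for layer in donor]

    def depth_crossover(p1, p2):
        """Second crossover: swap tails at random cut points, so offspring
        depth can differ from both parents'."""
        i, j = random.randrange(1, len(p1)), random.randrange(1, len(p2))
        return p1[:i] + p2[j:], p2[:j] + p1[i:]

    pop = [random_cnn() for _ in range(4)]
    a, b, c = random.sample(pop, 3)
    child1, child2 = depth_crossover(de_mutate(a, b, c), pop[3])
    print(len(child1), child1)
    ```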