
    Evolution strategies for robust optimization

    Real-world (black-box) optimization problems often involve various types of uncertainty and noise emerging in different parts of the optimization problem. When these are not accounted for, optimization may fail, or may yield solutions that are optimal in the classical, strict sense but fail in practice. Robust optimization is the practice of optimization that actively accounts for uncertainties and/or noise. Evolutionary Algorithms form a class of optimization algorithms that use the principle of evolution to find good solutions to optimization problems. Because uncertainty and noise are indispensable parts of nature, this class of algorithms seems a logical choice for robust optimization scenarios. This thesis provides a clear definition of the term robust optimization, together with a comparison of approaches and practical guidelines on how Evolution Strategies, a subclass of Evolutionary Algorithms for real-parameter optimization problems, should be adapted for such scenarios.
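    To make the idea concrete, below is a minimal sketch of one common way an Evolution Strategy can be adapted for robustness: replacing the raw objective with a Monte Carlo estimate of the expected fitness under input perturbations (explicit averaging). The noise model, sample counts, and sphere objective are illustrative assumptions, not the thesis's actual setup.

        import numpy as np

        def robust_fitness(f, x, noise_sigma=0.1, samples=10, rng=None):
            """Monte Carlo estimate of the effective fitness E[f(x + delta)]
            under Gaussian input perturbations (explicit averaging)."""
            rng = np.random.default_rng() if rng is None else rng
            perturbed = x + rng.normal(0.0, noise_sigma, size=(samples, x.size))
            return float(np.mean([f(p) for p in perturbed]))

        def robust_es(f, dim=10, mu=5, lam=20, sigma=0.5, generations=100):
            """Basic (mu, lambda)-ES minimising the noise-averaged objective."""
            rng = np.random.default_rng(0)
            parents = rng.uniform(-5.0, 5.0, size=(mu, dim))
            for _ in range(generations):
                idx = rng.integers(0, mu, size=lam)               # pick parents
                offspring = parents[idx] + sigma * rng.standard_normal((lam, dim))
                scores = np.array([robust_fitness(f, c, rng=rng) for c in offspring])
                parents = offspring[np.argsort(scores)[:mu]]      # comma selection
                sigma *= 0.98                                     # simple step-size decay
            return parents[0]

        best = robust_es(lambda x: float(np.dot(x, x)))           # illustrative sphere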

    A new Taxonomy of Continuous Global Optimization Algorithms

    Surrogate-based optimization, nature-inspired metaheuristics, and hybrid combinations have become state of the art in algorithm design for solving real-world optimization problems. Still, it is difficult for practitioners to get an overview of how these approaches compare to the large number of other available optimization methods. Existing taxonomies fail to place current approaches in the larger context of this broad field. This article presents a taxonomy of the field that classifies algorithms by extracting the similarities and differences in their search strategies. A particular focus lies on algorithms using surrogates, nature-inspired designs, and those created by design optimization. The extracted features of components and operators allow us to create a set of classification indicators that distinguish between a small number of classes. The features also support a deeper understanding of the components of the search strategies and indicate the close connections between the different algorithm designs. We present intuitive analogies to explain the basic principles of the search algorithms, which are particularly useful for novices in this research field. Furthermore, the taxonomy supports recommendations on the applicability of the corresponding algorithms.
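    As an illustration of the surrogate-based class the taxonomy covers, here is a minimal sketch of the generic surrogate loop: fit a cheap model to the points evaluated so far, let the model propose the next point, evaluate it on the expensive objective, and refit. The RBF model, candidate-sampling infill rule, and sphere objective are illustrative choices, not the article's.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def surrogate_optimize(f, lower, upper, n_init=10, n_iter=30, seed=0):
            """Generic surrogate loop: fit a cheap model to all evaluated
            points, take the model's best candidate as the next expensive
            evaluation, then refit."""
            rng = np.random.default_rng(seed)
            dim = lower.size
            X = rng.uniform(lower, upper, size=(n_init, dim))    # initial design
            y = np.array([f(x) for x in X])                      # expensive evaluations
            for _ in range(n_iter):
                model = RBFInterpolator(X, y)                    # surrogate fit
                cand = rng.uniform(lower, upper, size=(2000, dim))
                x_new = cand[np.argmin(model(cand))]             # infill point
                X = np.vstack([X, x_new])
                y = np.append(y, f(x_new))
            return X[np.argmin(y)], float(y.min())

        lo, hi = np.full(3, -5.0), np.full(3, 5.0)
        best_x, best_y = surrogate_optimize(lambda x: float(np.dot(x, x)), lo, hi)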

    An Investigation of Factors Influencing Algorithm Selection for High Dimensional Continuous Optimisation Problems

    The problem of algorithm selection is of great importance to the optimisation community, with a number of publications in the body of knowledge. This importance stems from the consequences of the No-Free-Lunch Theorem, which states that no single algorithm can outperform all others across all possible problems. Despite this importance, however, the algorithm selection problem has yet to gain widespread attention. In particular, little to no work in this area has been carried out with a focus on large-scale optimisation, a field quickly gaining momentum in line with the advancement and influence of big data processing. As such, it is not yet clear which factors, if any, influence the selection of algorithms for very high-dimensional problems (> 1000 dimensions); it is entirely possible that algorithms that work poorly in lower dimensions work well in much higher-dimensional spaces, and vice versa. This work therefore aims to begin addressing this knowledge gap by investigating some of these influencing factors for some common metaheuristic variants. To this end, the parameters native to several metaheuristic algorithms are first tuned using the state-of-the-art automatic parameter tuner SMAC. Tuning produces separate parameter configurations of each metaheuristic for each of a set of continuous benchmark functions; specifically, for every algorithm-function pairing, configurations are found for each dimensionality of the function on a geometrically increasing scale (from 2 to 1500 dimensions). This tuning is highly computationally expensive, necessitating the use of SMAC. Using these sets of parameter configurations, a large volume of performance data relating to the large-scale optimisation of the benchmark suite by each metaheuristic was subsequently generated. Analysis of the generated data identified several behaviours exhibited by the metaheuristics when applied to large-scale optimisation, which are discussed. Further, this thesis provides a concise review of the relevant literature for researchers looking to progress in this area, in addition to the large volume of performance data produced. All work presented in this thesis was funded by EPSRC grant EP/J017515/1 through the DAASE project.
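    The per-(algorithm, function, dimensionality) tuning setup might be sketched as below. Plain random search stands in for SMAC here, to avoid pinning a specific SMAC API version; the two tuned parameters, their ranges, the toy evolution strategy, and all budgets are illustrative assumptions.

        import numpy as np

        def run_metaheuristic(f, dim, cfg, rng, generations=50):
            """Toy (1 + pop) evolution strategy standing in for the tuned
            target algorithm; returns the best objective value found."""
            x = rng.uniform(-5.0, 5.0, dim)
            fx = f(x)
            for _ in range(generations):
                trials = x + cfg["sigma"] * rng.standard_normal((cfg["pop"], dim))
                ft = np.array([f(t) for t in trials])
                if ft.min() < fx:
                    x, fx = trials[np.argmin(ft)], float(ft.min())
            return fx

        def tune_for_dimension(f, dim, configs=20, repeats=3, seed=0):
            """Random-search tuner (a stand-in for SMAC): sample parameter
            configurations, average performance over repeats, keep the best."""
            rng = np.random.default_rng(seed)
            best_cfg, best_score = None, np.inf
            for _ in range(configs):
                cfg = {"pop": int(rng.integers(10, 100)),        # population size
                       "sigma": float(rng.uniform(0.01, 1.0))}   # mutation scale
                score = np.mean([run_metaheuristic(f, dim, cfg, rng)
                                 for _ in range(repeats)])
                if score < best_score:
                    best_cfg, best_score = cfg, score
            return best_cfg

        # One configuration per dimensionality, on a geometric scale.
        for dim in [2, 8, 32, 128]:
            print(dim, tune_for_dimension(lambda v: float(np.dot(v, v)), dim))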

    Tools for Landscape Analysis of Optimisation Problems in Procedural Content Generation for Games

    The term Procedural Content Generation (PCG) refers to the (semi-)automatic generation of game content by algorithmic means, and its methods are becoming increasingly popular in game-oriented research and industry. A special class of these methods, commonly known as search-based PCG, treats the given task as an optimisation problem. Such problems are predominantly tackled by evolutionary algorithms. We demonstrate in this paper that obtaining more information about the defined optimisation problem can substantially improve our understanding of how to approach the generation of content. To do so, we present and discuss three efficient analysis tools, namely diagonal walks, the estimation of high-level properties, and problem similarity measures. We discuss the purpose of each of the considered methods in the context of PCG and provide guidelines for interpreting the results obtained. In this way, we aim to provide methods for comparing PCG approaches and, eventually, to increase the quality and practicality of generated content in industry.
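    One of the named tools can be sketched compactly: a diagonal walk samples the fitness along the main diagonal of a box-constrained search space, and the lag-1 autocorrelation of the resulting series gives one rough, high-level ruggedness indicator. The sampling resolution, the Rastrigin-style objective, and the use of autocorrelation as the summary statistic are illustrative assumptions, not necessarily the paper's exact procedure.

        import numpy as np

        def diagonal_walk(f, lower, upper, steps=200):
            """Sample fitness along the main diagonal of the box-constrained
            domain, from the lower corner to the upper corner."""
            ts = np.linspace(0.0, 1.0, steps)
            points = lower + np.outer(ts, upper - lower)
            return np.array([f(p) for p in points])

        def lag1_autocorrelation(series):
            """Lag-1 autocorrelation of the fitness series: values near 1
            suggest a smooth landscape, values near 0 a rugged one."""
            s = series - series.mean()
            return float(np.dot(s[:-1], s[1:]) / np.dot(s, s))

        lower, upper = np.full(5, -5.12), np.full(5, 5.12)
        rastrigin = lambda x: float(np.sum(x * x - 10.0 * np.cos(2.0 * np.pi * x)))
        walk = diagonal_walk(rastrigin, lower, upper)
        print("lag-1 autocorrelation:", lag1_autocorrelation(walk))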

    Optimising Antibiotic Treatments using Evolutionary Algorithms

    Antimicrobial resistance is one of the biggest threats to global health, food security, and development. Antibiotic overuse and misuse are the main drivers of the emergence of resistance. Studies in the medical sphere have indicated that shortened antibiotic treatments can be as effective as standard fixed-dose ones, and have shown that an initial higher dose followed by a lower maintenance dose is more beneficial to critically ill patients. It is crucial to optimise the use of existing antibiotics in order to improve medical outcomes, decrease toxicity, and reduce the emergence of resistance. We formulate the design of antibiotic dosing regimens as a continuous optimisation problem and use several evolutionary algorithms as the search technique. Regimens are represented as vectors of real numbers encoding daily doses, which can vary across the treatment duration. A stochastic mathematical model of bacterial infections with tuneable resistance levels is used to evaluate the effectiveness of evolved regimens. The main objective is to minimise the treatment failure rate, subject to a constraint on the maximum total antibiotic used. We consider simulations with different levels of bacterial resistance; two ways of administering the drug (orally and intravenously); and coinfections with two strains of bacteria. The approach produced effective dosing regimens, lowering the failure rate by an average of 30% compared with standard fixed-daily-dose regimens using the same total amount of antibiotic. A general pattern emerges in the optimised treatments: if 2x mg is the standard daily dose, the optimised regimen starts with a dose of 3x mg, continues with several doses of 2x mg, and ends with a final dose of x mg. A noise-handling technique is used to minimise the runtime of the experiments while maintaining the quality of treatments. The results of this work indicate that clinical studies confirming the effectiveness of this approach could be highly beneficial to the future of antibiotic treatment.
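    The overall formulation might be sketched as below: a regimen is a vector of daily doses, a stochastic simulator returns a failure rate, the total-dose budget is enforced with a penalty term, and a simple (mu + lambda)-style evolutionary loop searches the regimen space. The infection dynamics here are a crude placeholder, not the paper's published model, and all constants are illustrative assumptions.

        import numpy as np

        def failure_rate(doses, rng, trials=30):
            """Crude placeholder for the paper's stochastic infection model:
            fraction of simulated patients whose infection is not cleared.
            Growth/kill constants are purely illustrative."""
            fails = 0
            for _ in range(trials):
                load = 1.0                                       # bacterial load
                for d in doses:
                    load *= np.exp(0.4 - 0.6 * d + 0.1 * rng.standard_normal())
                fails += load > 1e-2                             # not cleared
            return fails / trials

        def evolve_regimen(days=7, budget=14.0, pop=30, gens=40, seed=0):
            """(mu + lambda)-style search over daily doses, with a penalty
            for exceeding the total-antibiotic budget."""
            rng = np.random.default_rng(seed)
            def cost(doses):
                over = max(0.0, float(doses.sum()) - budget)     # constraint violation
                return failure_rate(doses, rng) + 10.0 * over
            parents = rng.uniform(0.0, 4.0, size=(pop, days))
            for _ in range(gens):
                children = np.clip(parents + 0.2 * rng.standard_normal(parents.shape),
                                   0.0, 4.0)
                both = np.vstack([parents, children])
                scores = np.array([cost(x) for x in both])
                parents = both[np.argsort(scores)[:pop]]
            return parents[0]

        print(np.round(evolve_regimen(), 2))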