61 research outputs found

    Performance analysis of randomised search heuristics operating with a fixed budget

    Jansen, T., Zarges, C. (2013). Performance analysis of randomised search heuristics operating with a fixed budget. Theoretical Computer Science, 545, 39-58.

    When no good problem-specific algorithm is available for a difficult real-world optimisation problem, randomised search heuristics are often used in the hope that they deliver good solutions in acceptable time. Theoretical analysis usually concentrates on the average time needed to find an optimal or approximately optimal solution. This matches neither practical application nor empirical analysis, since optimal solutions are usually not known and, even if found, cannot be recognised. More often, the algorithms are simply stopped after some time. This motivates a theoretical analysis that concentrates on the quality of the best solution obtained after a pre-specified number of function evaluations, called the budget. Using this perspective, two simple randomised search heuristics, random local search and the (1+1) evolutionary algorithm, are analysed on some well-known example problems. Upper and lower bounds on the expected quality of a solution for a fixed budget of function evaluations are proven. The analysis reveals novel and challenging problems in the study of randomised search heuristics and demonstrates the potential of this shift in perspective from expected run time to expected solution quality.
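    To make the fixed-budget perspective concrete, the following is a minimal sketch (not taken from the paper) of the (1+1) evolutionary algorithm run for a fixed budget of function evaluations; the choice of OneMax as the example problem, the parameter names, and the reporting of the best fitness reached within the budget are illustrative assumptions.

        import random

        def one_max(x):
            """OneMax: number of one-bits; assumed here as the example problem."""
            return sum(x)

        def one_plus_one_ea_fixed_budget(n, budget, fitness=one_max, seed=None):
            """Run the (1+1) EA for exactly `budget` fitness evaluations and
            return the best fitness value observed within that budget."""
            rng = random.Random(seed)
            x = [rng.randint(0, 1) for _ in range(n)]
            best = fitness(x)                      # first evaluation
            for _ in range(budget - 1):
                # Standard bit mutation: flip each bit independently with prob. 1/n.
                y = [1 - b if rng.random() < 1.0 / n else b for b in x]
                fy = fitness(y)
                if fy >= best:                     # elitist acceptance: keep if not worse
                    x, best = y, fy
            return best

        if __name__ == "__main__":
            # Solution quality after a budget of 1000 evaluations on n = 100 bits.
            print(one_plus_one_ea_fixed_budget(n=100, budget=1000, seed=42))

    The point of the sketch is only that the quantity of interest is the returned best fitness after the budget is exhausted, rather than the time until an optimum is found.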

    Preface


    Analysis of Randomised Search Heuristics for Dynamic Optimisation

    Dynamic optimisation is an area of application where randomised search heuristics like evolutionary algorithms and artificial immune systems are often successful. The theoretical foundation of this important topic suffers from a lack of a generally accepted analytical framework as well as a lack of widely accepted example problems. This article tackles both problems by discussing necessary conditions for useful and practically relevant theoretical analysis as well as introducing a concrete family of dynamic example problems that draws inspiration from a well-known static example problem and exhibits a bi-stable dynamic. After the stage has been set this way, the framework is made concrete by presenting the results of thorough theoretical and statistical analysis for mutation-based evolutionary algorithms and artificial immune systems.
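    Purely as an illustration of what a bi-stable dynamic can look like (this is not the problem family introduced in the article), one can picture a OneMax-like objective whose target bit string alternates between the all-ones and all-zeros strings every tau time steps; the target strings, the switching period tau, and the function name below are assumptions.

        def bistable_dynamic_fitness(x, t, tau=500):
            """Toy bi-stable dynamic objective (illustration only): the number of
            positions matching the current target, where the target alternates
            between all-ones and all-zeros every `tau` time steps."""
            target_bit = 1 if (t // tau) % 2 == 0 else 0
            return sum(1 for b in x if b == target_bit)

        # The same search point is good in one phase and poor in the other.
        x = [1] * 10
        print(bistable_dynamic_fitness(x, t=0))     # all-ones phase  -> 10
        print(bistable_dynamic_fitness(x, t=600))   # all-zeros phase -> 0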

    A Detailed Study of the Distributed Rough Set Based Locality Sensitive Hashing Feature Selection Technique

    In the context of big data, granular computing has recently been implemented using mathematical tools, especially Rough Set Theory (RST). As a key topic of rough set theory, feature selection has been investigated to adapt the related granular concepts of RST to deal with large amounts of data, leading to the development of a distributed RST version. However, despite its scalability, the distributed RST version faces a key challenge: partitioning the feature search space in the distributed environment while guaranteeing data dependency. Therefore, in this manuscript, we propose a new distributed RST version based on Locality Sensitive Hashing (LSH), named LSH-dRST, for big data feature selection. LSH-dRST uses LSH to match similar features into the same bucket and maps the generated buckets into partitions, enabling the universe to be split in a more efficient way. More precisely, in this paper, we perform a detailed analysis of the performance of LSH-dRST by comparing it to the standard distributed RST version, which is based on a random partitioning of the universe. We demonstrate that LSH-dRST is scalable when dealing with large amounts of data, and that it partitions the high-dimensional feature search space in a more reliable way, hence better preserving data dependency in the distributed environment and ensuring a lower computational cost. This work is part of a project that has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 702527.
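    The abstract does not specify which LSH family LSH-dRST uses, so the following is only a minimal sketch of the general bucketing-then-partitioning idea, assuming random-hyperplane (signed random projection) hashing of feature columns; the function names, signature width, and round-robin partitioning step are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def lsh_bucket_features(X, num_hyperplanes=8, seed=0):
            """Hash each feature (column of X) with random hyperplanes so that
            similar features tend to land in the same bucket (sketch only)."""
            rng = np.random.default_rng(seed)
            n_samples, n_features = X.shape
            planes = rng.standard_normal((num_hyperplanes, n_samples))
            signs = (planes @ X) > 0              # shape: (num_hyperplanes, n_features)
            buckets = {}
            for j in range(n_features):
                key = tuple(signs[:, j])          # bit signature of feature j
                buckets.setdefault(key, []).append(j)
            return buckets

        def buckets_to_partitions(buckets, num_partitions):
            """Distribute buckets round-robin over a fixed number of partitions,
            keeping features that hashed together in the same partition (assumed strategy)."""
            partitions = [[] for _ in range(num_partitions)]
            for i, features in enumerate(buckets.values()):
                partitions[i % num_partitions].extend(features)
            return partitions

        if __name__ == "__main__":
            X = np.random.default_rng(1).standard_normal((200, 50))   # toy data
            parts = buckets_to_partitions(lsh_bucket_features(X), num_partitions=4)
            print([len(p) for p in parts])

    The design point illustrated here is that, unlike a random split of the feature space, features with similar hash signatures stay in the same partition, which is what is meant above by better preserving data dependency.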

    A Black-Box Discrete Optimization Benchmarking (BB-DOB) Pipeline Survey: Taxonomy, Evaluation, and Ranking

    This paper provides a taxonomical identification survey of the classes of discrete optimization challenges found in the literature, including a proposed benchmarking pipeline inspired by previous computational optimization competitions. A Black-Box Discrete Optimization Benchmarking (BB-DOB) perspective is thereby presented for the BB-DOB@GECCO Workshop. The paper motivates why certain classes, together with their properties (such as deception, separability, or a toy-problem label), should be included in this perspective. Moreover, it discusses guidelines on how to select significant instances within these classes, the design of the experimental setup, performance measures, and presentation methods and formats.