
    COCO: Performance Assessment

    We present an any-time performance assessment for benchmarking numerical optimization algorithms in a black-box scenario, applied within the COCO benchmarking platform. The performance assessment is based on runtimes measured in the number of objective function evaluations to reach one or several quality indicator target values. We argue that runtime is the only available measure with a generic, meaningful, and quantitative interpretation. We discuss the choice of the target values, runlength-based targets, and the aggregation of results by using simulated restarts, averages, and empirical distribution functions.
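
    As a rough illustration of this runtime measure, the sketch below counts objective function evaluations until each of several target values is first reached and summarizes the results as a simple empirical distribution. The (1+1) random-search loop, the sphere test function, the target spacing, and the budget are assumptions made here for illustration; this is not COCO's actual implementation.

```python
import numpy as np

def runtime_to_targets(f, x0, targets, budget, rng):
    """Evaluations spent until each target value is first reached.

    Hypothetical illustration of the runtime measure: a (1+1) random-search
    loop records, for every target, the number of function evaluations used
    before the best f-value first drops to or below that target; targets
    never reached within the budget keep an infinite runtime.
    """
    runtimes = {t: np.inf for t in targets}

    def record(best, evals):
        for t in targets:
            if np.isinf(runtimes[t]) and best <= t:
                runtimes[t] = evals

    x = np.array(x0, dtype=float)
    best, evals = f(x), 1
    record(best, evals)
    while evals < budget:
        y = x + 0.1 * rng.standard_normal(len(x))
        fy = f(y)
        evals += 1
        if fy < best:
            best, x = fy, y
            record(best, evals)
    return runtimes

rng = np.random.default_rng(1)
sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
targets = [10.0 ** k for k in range(2, -9, -1)]          # 10^2 down to 10^-8
rts = runtime_to_targets(sphere, [3.0, -2.0], targets, budget=20_000, rng=rng)

# Simple empirical distribution over targets for this single run: the fraction
# of targets reached within a given number of evaluations.
reached = np.sort([r for r in rts.values() if np.isfinite(r)])
print([(int(r), round((i + 1) / len(targets), 2)) for i, r in enumerate(reached)])
```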

    Biobjective Performance Assessment with the COCO Platform

    This document details the rationales behind assessing the performance of numerical black-box optimizers on multi-objective problems within the COCO platform, in particular on the biobjective test suite bbob-biobj. The evaluation is based on the hypervolume of all non-dominated solutions in the archive of candidate solutions and measures the runtime until the hypervolume value exceeds prescribed target values.
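
    The hypervolume-based measurement can be pictured with a small two-objective example: the sketch below keeps the non-dominated archive of candidate solutions, computes the area it dominates with respect to a reference point, and checks it against a target value. The reference point, toy archive, and target are assumptions made for illustration; COCO's actual bbob-biobj assessment involves additional normalization and is not reproduced here.

```python
import numpy as np

def nondominated(points):
    """Keep points not dominated by any other point (minimization of both objectives)."""
    pts = np.asarray(points, dtype=float)
    keep = [p for i, p in enumerate(pts)
            if not any(np.all(q <= p) and np.any(q < p)
                       for j, q in enumerate(pts) if j != i)]
    return np.array(keep)

def hypervolume_2d(points, ref):
    """Area dominated by the archive and bounded by the reference point."""
    front = nondominated(points)
    if front.size == 0:
        return 0.0
    front = front[np.all(front < ref, axis=1)]        # drop points beyond the reference
    if front.shape[0] == 0:
        return 0.0
    front = front[np.argsort(front[:, 0])]            # sort by the first objective
    right_edges = np.append(front[1:, 0], ref[0])     # slab boundaries along f1
    return float(np.sum((right_edges - front[:, 0]) * (ref[1] - front[:, 1])))

archive = [(0.2, 0.9), (0.5, 0.5), (0.4, 0.6), (0.9, 0.2), (0.6, 0.7)]
ref = np.array([1.0, 1.0])
hv = hypervolume_2d(archive, ref)
target = 0.3                                          # illustrative target value
print(f"hypervolume = {hv:.3f}, target reached: {hv >= target}")
```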

    Mixed-Integer Benchmark Problems for Single- and Bi-Objective Optimization

    We introduce two suites of mixed-integer benchmark problems to be used for analyzing and comparing black-box optimization algorithms. They contain problems of diverse difficulties that are scalable in the number of decision variables. The bbob-mixint suite is designed by partially discretizing the established BBOB (Black-Box Optimization Benchmarking) problems. The bi-objective problems from the bbob-biobj-mixint suite are, on the other hand, constructed by using the bbob-mixint functions as their separate objectives. We explain the rationale behind our design decisions and show how to use the suites within the COCO (Comparing Continuous Optimizers) platform. Analyzing two chosen functions in more detail, we also provide some unexpected findings about their properties.
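
    For using such suites within COCO, the platform's Python interface (the cocoex module) exposes each suite as an iterable of problems; the snippet below sketches a typical loop with a uniform random-search stand-in as the solver. The observer name, result-folder option, and budget are assumptions for illustration; the COCO documentation gives the exact options supported by each suite.

```python
# Minimal benchmarking loop with COCO's Python interface (cocoex).
# The observer name and options below are illustrative assumptions;
# consult the COCO documentation for the settings of each suite.
import cocoex
import numpy as np

suite = cocoex.Suite("bbob-mixint", "", "")                       # all instances/dimensions
observer = cocoex.Observer("bbob", "result_folder: rs_on_bbob-mixint")
rng = np.random.default_rng(42)

for problem in suite:
    problem.observe_with(observer)                # log evaluations for post-processing
    lb, ub = problem.lower_bounds, problem.upper_bounds
    for _ in range(100 * problem.dimension):      # small illustrative budget
        problem(lb + (ub - lb) * rng.random(problem.dimension))   # evaluation is recorded
    problem.free()
```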

    Anytime Benchmarking of Budget-Dependent Algorithms with the COCO Platform

    Anytime performance assessment of black-box optimization algorithms assumes that the performance of an algorithm at a specific time does not depend on the total budget of function evaluations at its disposal. It should therefore not be used for benchmarking budget-dependent algorithms, i.e., algorithms whose performance depends on the total budget of function evaluations, such as some surrogate-assisted or hybrid algorithms. This paper presents an anytime benchmarking approach suited for budget-dependent algorithms. The approach is illustrated on a budget-dependent variant of the Differential Evolution algorithm.
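
    To make the notion of a budget-dependent algorithm concrete, the toy Differential Evolution sketch below derives its population size from the total evaluation budget, so its behaviour at any given evaluation count changes when the budget changes. The parameter choices are illustrative assumptions; this is not the budget-dependent DE variant benchmarked in the paper.

```python
import numpy as np

def budget_dependent_de(f, bounds, budget, rng):
    """Toy DE whose population size is derived from the total budget.

    Purely illustrative of a "budget-dependent" algorithm: changing the
    budget changes the population size and hence the whole trajectory,
    so performance at evaluation t depends on the total budget.
    """
    dim = len(bounds)
    lb = np.array([b[0] for b in bounds], dtype=float)
    ub = np.array([b[1] for b in bounds], dtype=float)
    pop_size = max(4, int(np.sqrt(budget)))          # budget-dependent choice
    F, CR = 0.5, 0.9
    pop = lb + (ub - lb) * rng.random((pop_size, dim))
    fit = np.array([f(x) for x in pop])
    evals = pop_size
    while evals + pop_size <= budget:
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lb, ub)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
        evals += pop_size
    best = int(np.argmin(fit))
    return pop[best], fit[best]

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = budget_dependent_de(sphere, [(-5, 5)] * 5, budget=5000, rng=rng)
print(f_best)
```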

    COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting

    We introduce COCO, an open-source platform for Comparing Continuous Optimizers in a black-box setting. COCO aims at automating, to the greatest possible extent, the tedious and repetitive task of benchmarking numerical optimization algorithms. The platform and the underlying methodology allow deterministic and stochastic solvers for both single- and multi-objective optimization to be benchmarked in the same framework. We present the rationales behind the (decade-long) development of the platform as a general proposition for guidelines towards better benchmarking. We detail fundamental concepts of COCO such as the definition of a problem as a function instance, the underlying idea of instances, the use of target values, and runtime, defined by the number of function calls, as the central performance measure. Finally, we give a quick overview of the basic code structure and the currently available test suites. (In press: Optimization Methods and Software, Taylor & Francis.)
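
    The notions of a problem as a function instance and of runtime as the number of function calls can be sketched in a few lines: the hypothetical ProblemInstance class below derives an instance-specific optimum from a seed, counts every evaluation, and the runtime to a target is simply the counter value when the target precision is first reached. The class name, the shift-only instance transformation, and the random-search loop are illustrative assumptions, not COCO's code.

```python
import numpy as np

class ProblemInstance:
    """Hypothetical stand-in for COCO's notion of a problem as a function instance.

    The same base function yields different instances through an
    instance-specific shift of the optimum; the instance also counts
    function calls, which serves as the runtime measure.
    """

    def __init__(self, base_f, dimension, instance_seed):
        rng = np.random.default_rng(instance_seed)
        self.x_opt = rng.uniform(-4, 4, dimension)   # instance-specific optimum
        self.base_f = base_f
        self.evaluations = 0                         # runtime counter

    def __call__(self, x):
        self.evaluations += 1
        return self.base_f(np.asarray(x) - self.x_opt)

sphere = lambda z: float(np.sum(z ** 2))
problem = ProblemInstance(sphere, dimension=3, instance_seed=7)

# Runtime to a target: number of calls until f(x) - f_opt <= 1e-2 (f_opt = 0 here).
rng, target, x = np.random.default_rng(0), 1e-2, np.zeros(3)
best = problem(x)
while best > target and problem.evaluations < 10_000:
    y = x + 0.3 * rng.standard_normal(3)
    fy = problem(y)
    if fy < best:
        best, x = fy, y
print("runtime (function calls):", problem.evaluations, "target reached:", best <= target)
```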

    Comparing Solutions under Uncertainty in Multiobjective Optimization

    For various reasons, the solutions in real-world optimization problems cannot always be exactly evaluated but are sometimes represented with approximated values and confidence intervals. To address this issue, the comparison of solutions has to be done differently than for exactly evaluated solutions. In this paper, we define new relations under uncertainty between solutions in multiobjective optimization that are represented with approximated values and confidence intervals. The new relations extend the Pareto dominance relations, can handle constraints, and can be used to compare solutions both with and without the confidence interval. We also show that by including confidence intervals in the comparisons, the possibility of incorrect comparisons due to inaccurate approximations is reduced. Without considering confidence intervals, the comparison of inaccurately approximated solutions can result in promising solutions being rejected and worse ones preserved. The effect of the new relations on the comparison of solutions in a multiobjective optimization algorithm is also demonstrated.
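
    One plausible way to extend Pareto dominance to solutions carrying confidence intervals is to require the whole interval of one solution to lie at or below the corresponding interval of the other in every objective, as in the sketch below. The relation, class names, and numbers are illustrative assumptions and do not reproduce the relations defined in the paper.

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class UncertainSolution:
    """Objective values approximated with symmetric confidence intervals."""
    values: Sequence[float]      # approximated objective values (minimization)
    radii: Sequence[float]       # half-widths of the confidence intervals

def interval_dominates(a: UncertainSolution, b: UncertainSolution) -> bool:
    """a dominates b with confidence: every interval of a lies at or below
    the corresponding interval of b, and at least one lies strictly below.

    One plausible interval-valued extension of Pareto dominance, written
    for illustration only; the paper defines its own set of relations.
    """
    a_hi = [v + r for v, r in zip(a.values, a.radii)]
    b_lo = [v - r for v, r in zip(b.values, b.radii)]
    no_worse = all(ah <= bl for ah, bl in zip(a_hi, b_lo))
    strictly_better = any(ah < bl for ah, bl in zip(a_hi, b_lo))
    return no_worse and strictly_better

p = UncertainSolution(values=[1.0, 2.0], radii=[0.1, 0.1])
q = UncertainSolution(values=[2.0, 3.0], radii=[0.2, 0.2])
r = UncertainSolution(values=[1.3, 2.1], radii=[0.3, 0.3])
print(interval_dominates(p, q))   # True: the intervals are clearly separated
print(interval_dominates(p, r))   # False: intervals overlap, so the comparison is withheld
```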

    Using Aggregation to Improve the Scheduling of Flexible Energy Offers


    Benchmarking the Pure Random Search on the Bi-objective BBOB-2016 Testbed

    The Comparing Continuous Optimizers platform COCO has become a standard for effortlessly benchmarking numerical (single-objective) optimization algorithms. In 2016, COCO was extended towards multi-objective optimization by providing a first bi-objective test suite. To provide a baseline, we benchmark a pure random search on this bi-objective bbob-biobj test suite of the COCO platform. For each combination of function, dimension n, and instance of the test suite, 10^6 · n candidate solutions are sampled uniformly within the sampling box [−5, 5]^n.
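
    A standalone sketch of this baseline is given below: points are sampled uniformly in [−5, 5]^n and only the non-dominated objective vectors are kept. The toy bi-objective function and the reduced budget are assumptions for illustration; the experiment in the paper runs on the bbob-biobj suite with 10^6 · n samples per problem.

```python
import numpy as np

def pure_random_search(f_biobj, n, budget, rng):
    """Sample `budget` points uniformly in [-5, 5]^n and keep the nondominated archive."""
    archive = []                                         # list of (f1, f2) tuples
    for _ in range(budget):
        x = rng.uniform(-5.0, 5.0, n)
        f1, f2 = f_biobj(x)
        # Add the point only if no archived point weakly dominates it,
        # then drop archived points it weakly dominates.
        if not any(a1 <= f1 and a2 <= f2 for a1, a2 in archive):
            archive = [(a1, a2) for a1, a2 in archive if not (f1 <= a1 and f2 <= a2)]
            archive.append((f1, f2))
    return archive

# Toy bi-objective problem: two shifted sphere functions.
toy = lambda x: (float(np.sum(x ** 2)), float(np.sum((x - 1.0) ** 2)))
rng = np.random.default_rng(3)
n = 5
archive = pure_random_search(toy, n, budget=10_000, rng=rng)   # the paper uses 10^6 * n
print(len(archive), "nondominated points found")
```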

    Benchmarking MATLAB's gamultiobj (NSGA-II) on the Bi-objective BBOB-2016 Test Suite

    In this paper, we benchmark a variant of the well-known NSGA-II algorithm of Deb et al. on the biobjective bbob-biobj test suite of the Comparing Continuous Optimizers platform COCO. To this end, we employ MATLAB's gamultiobj implementation with its default settings and a population size of 100.

    Unveiling evolutionary algorithm representation with DU maps

    Evolutionary algorithms (EAs) have proven to be effective in tackling problems in many different domains. However, users are often required to spend a significant amount of effort in fine-tuning the EA parameters in order to make the algorithm work. In principle, visualization tools may be of great help in this laborious task, but current tools are either EA-specific, and hence hardly available to all users, or too general to convey detailed information. In this work, we study the Diversity and Usage map (DU map), a compact visualization for analyzing a key component of every EA: the representation of solutions. In a single heat map, the DU map visualizes for entire runs how diverse the genotype is across the population and to what degree each gene in the genotype contributes to the solution. We demonstrate the generality of the DU map concept by applying it to six EAs that use different representations (bit and integer strings, trees, ensembles of trees, and neural networks). We present the results of an online user study about the usability of the DU map, which confirm the suitability of the proposed tool and provide important insights into our design choices. Because it can be easily tailored by specifying the diversity (D) and usage (U) functions, the DU map aims to be a powerful analysis tool for EA practitioners, making EAs more transparent and hence lowering the barrier to their use.
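
    As a rough illustration of the idea, the sketch below tracks two per-gene quantities over the generations of a toy bit-string EA and renders them as heat maps: diversity as the per-gene entropy across the population, and usage as the fraction of individuals in which the bit is set. The choice of D and U functions, the toy EA, and the two-panel rendering are assumptions for illustration; the actual DU map combines diversity and usage in a single heat map.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative DU-map-style tracking for a toy bit-string EA (not the authors' tool).
rng = np.random.default_rng(0)
n_genes, pop_size, generations = 20, 50, 100
target = rng.integers(0, 2, n_genes)                 # toy target string to match

pop = rng.integers(0, 2, (pop_size, n_genes))
D = np.zeros((n_genes, generations))                 # diversity per gene and generation
U = np.zeros((n_genes, generations))                 # usage per gene and generation

for g in range(generations):
    p1 = pop.mean(axis=0)                            # per-gene frequency of 1s
    with np.errstate(divide="ignore", invalid="ignore"):
        entropy = -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))
    D[:, g] = np.nan_to_num(entropy)                 # diversity: per-gene entropy
    U[:, g] = p1                                     # usage: fraction of set bits

    # Toy selection/variation step: keep the better half, refill with bit-flip mutants.
    fitness = (pop == target).sum(axis=1)
    parents = pop[np.argsort(fitness)[pop_size // 2:]]
    mutants = parents ^ (rng.random(parents.shape) < 1.0 / n_genes)
    pop = np.vstack([parents, mutants])

fig, axes = plt.subplots(2, 1, sharex=True, figsize=(8, 5))
titles = ("Diversity (per-gene entropy)", "Usage (fraction of set bits)")
for ax, data, title in zip(axes, (D, U), titles):
    im = ax.imshow(data, aspect="auto", origin="lower")
    ax.set_ylabel("gene")
    ax.set_title(title)
    fig.colorbar(im, ax=ax)
axes[-1].set_xlabel("generation")
plt.tight_layout()
plt.show()
```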