
    Biobjective Performance Assessment with the COCO Platform

    This document details the rationales behind assessing the performance of numerical black-box optimizers on multi-objective problems within the COCO platform, in particular on the biobjective test suite bbob-biobj. The evaluation is based on the hypervolume of all non-dominated solutions in the archive of candidate solutions and measures the runtime until the hypervolume value surpasses prescribed target values.
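
    The runtime measure described above can be illustrated with a small, self-contained sketch (not the COCO implementation): compute the hypervolume of a biobjective archive with respect to a reference point, assuming both objectives are minimized, and report the first evaluation count at which that hypervolume surpasses a prescribed target. All names below are illustrative.

        import numpy as np

        def hypervolume_2d(archive, reference):
            """Hypervolume of a set of biobjective values (minimization)
            with respect to a reference point, via a simple sweep."""
            pts = np.array([p for p in archive
                            if p[0] < reference[0] and p[1] < reference[1]])
            if len(pts) == 0:
                return 0.0
            pts = pts[np.argsort(pts[:, 0])]      # sort by the first objective
            hv, prev_f2 = 0.0, reference[1]
            for f1, f2 in pts:
                if f2 < prev_f2:                  # dominated points add nothing
                    hv += (reference[0] - f1) * (prev_f2 - f2)
                    prev_f2 = f2
            return hv

        def runtime_to_target(hv_history, target):
            """First evaluation count (1-based) whose hypervolume exceeds
            `target`, or None if the target is never reached."""
            for evals, hv in enumerate(hv_history, start=1):
                if hv > target:
                    return evals
            return None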

    COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting

    We introduce COCO, an open source platform for Comparing Continuous Optimizers in a black-box setting. COCO aims at automating, to the greatest possible extent, the tedious and repetitive task of benchmarking numerical optimization algorithms. The platform and the underlying methodology allow deterministic and stochastic solvers for both single- and multiobjective optimization to be benchmarked within the same framework. We present the rationales behind the (decade-long) development of the platform as a general proposition for guidelines towards better benchmarking. We detail underlying fundamental concepts of COCO such as the definition of a problem as a function instance, the underlying idea of instances, the use of target values, and runtime, defined by the number of function calls, as the central performance measure. Finally, we give a quick overview of the basic code structure and the currently available test suites.
    Comment: Optimization Methods and Software, Taylor & Francis, in press, pp. 1-3
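
    A minimal sketch of what an experiment on the platform can look like is given below. It uses the platform's cocoex Python module together with a SciPy solver; the names cocoex.Suite, cocoex.Observer, problem.observe_with, problem.initial_solution, and problem.evaluations reflect the Python interface as recalled here and should be verified against the official COCO example experiment.

        import cocoex            # COCO experimentation module
        import scipy.optimize    # any black-box solver can be plugged in here

        suite = cocoex.Suite("bbob", "", "")                       # suite of function instances
        observer = cocoex.Observer("bbob", "result_folder: demo")  # writes benchmarking data

        for problem in suite:                  # each problem is one function instance
            problem.observe_with(observer)     # record every function call
            scipy.optimize.fmin(problem, problem.initial_solution, disp=False)
            # runtime = problem.evaluations, the number of function calls used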

    COCO: Performance Assessment

    We present an any-time performance assessment for benchmarking numerical optimization algorithms in a black-box scenario, applied within the COCO benchmarking platform. The performance assessment is based on runtimes measured in the number of objective function evaluations to reach one or several quality indicator target values. We argue that runtime is the only available measure with a generic, meaningful, and quantitative interpretation. We discuss the choice of the target values, runlength-based targets, and the aggregation of results by using simulated restarts, averages, and empirical distribution functions.
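
    The simulated-restart idea can be sketched as follows (a simplification, not the cocopp implementation): runs of an algorithm are drawn uniformly at random with replacement; evaluations spent by drawn unsuccessful runs are accumulated, and the chain ends with the runtime of the first successful run drawn.

        import random

        def simulated_restart_runtime(successful, unsuccessful, rng=random):
            """successful: runtimes (in evaluations) of runs that reached the target;
            unsuccessful: evaluations spent by runs that did not reach it."""
            if not successful:
                return float("inf")            # the target was never reached
            total = 0
            while True:
                run = rng.randrange(len(successful) + len(unsuccessful))
                if run < len(successful):      # a successful run ends the chain
                    return total + successful[run]
                total += unsuccessful[run - len(successful)]

        # many simulated runtimes can then be aggregated, e.g. into an
        # empirical distribution function over targets and problems
        samples = [simulated_restart_runtime([120, 300], [1000, 1000])
                   for _ in range(1000)]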

    Benchmarking MATLAB's gamultiobj (NSGA-II) on the Bi-objective BBOB-2016 Test Suite

    In this paper, we benchmark a variant of the well-known NSGA-II algorithm of Deb et al. on the biobjective bbob-biobj test suite of the Comparing Continuous Optimizers platform COCO. To this end, we employ the implementation from MATLAB's gamultiobj toolbox with its default settings and a population size of 100.

    Mixed-Integer Benchmark Problems for Single- and Bi-Objective Optimization

    We introduce two suites of mixed-integer benchmark problems to be used for analyzing and comparing black-box optimization algorithms. They contain problems of diverse difficulties that are scalable in the number of decision variables. The bbob-mixint suite is designed by partially discretizing the established BBOB (Black-Box Optimization Benchmarking) problems. The bi-objective problems from the bbob-biobj-mixint suite are, on the other hand, constructed by using the bbob-mixint functions as their separate objectives. We explain the rationale behind our design decisions and show how to use the suites within the COCO (Comparing Continuous Optimizers) platform. Analyzing two chosen functions in more detail, we also provide some unexpected findings about their properties.
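
    The partial-discretization construction can be illustrated with a hedged sketch: a continuous function is turned into a mixed-integer one by snapping a chosen subset of variables to integer values before every evaluation. The sphere function and the choice of integer variables below are placeholders and do not reproduce the actual bbob-mixint design.

        import numpy as np

        def sphere(x):
            return float(np.sum(np.asarray(x) ** 2))

        def mixed_integer(f, integer_indices):
            """Wrap a continuous function f so that the listed variables are
            rounded to the nearest integer before f is evaluated."""
            def wrapped(x):
                x = np.array(x, dtype=float)
                x[integer_indices] = np.round(x[integer_indices])
                return f(x)
            return wrapped

        f_mixint = mixed_integer(sphere, integer_indices=[0, 2])  # x_0, x_2 integer
        print(f_mixint([0.4, 0.4, 1.6]))  # evaluates sphere([0.0, 0.4, 2.0]) = 4.16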

    Benchmarking the Pure Random Search on the Bi-objective BBOB-2016 Testbed

    The Comparing Continuous Optimizers platform COCO has become a standard for benchmarking numerical (single-objective) optimization algorithms effortlessly. In 2016, COCO has been extended towards multi-objective optimization by providing a first bi-objective test suite. To provide a baseline, we benchmark a pure random search on this bi-objective bbob-biobj test suite of the COCO platform. For each combination of function, dimension n, and instance of the test suite, 10^6 · n candidate solutions are sampled uniformly within the sampling box [−5, 5]^n.
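
    A minimal sketch of such a pure random search baseline is given below; the two objectives are placeholders, and the budget is far smaller than the 10^6 · n evaluations used in the paper.

        import numpy as np

        def f1(x):
            return float(np.sum(x ** 2))              # placeholder objective 1

        def f2(x):
            return float(np.sum((x - 1.0) ** 2))      # placeholder objective 2

        def pure_random_search(n, budget, seed=0):
            rng = np.random.default_rng(seed)
            archive = []                               # non-dominated (f1, f2) pairs
            for _ in range(budget):
                x = rng.uniform(-5.0, 5.0, size=n)     # uniform sample in [-5, 5]^n
                y = (f1(x), f2(x))
                if any(a[0] <= y[0] and a[1] <= y[1] for a in archive):
                    continue                           # y is dominated, discard it
                archive = [a for a in archive
                           if not (y[0] <= a[0] and y[1] <= a[1])] + [y]
            return archive

        front = pure_random_search(n=5, budget=10_000)  # the paper uses 10**6 * n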

    Multi-objective analysis of machine learning algorithms using model-based optimization techniques

    My dissertation deals with the research areas of optimization and machine learning. However, both of them are too extensive to be covered by a single person in a single work, and that is not the goal of my work either. Therefore, my dissertation focuses on interactions between these fields.

    On the one hand, most machine learning algorithms rely on optimization techniques. First, the training of a learner often implies an optimization. This is demonstrated by the SVM, where the weighted sum of the margin size and the sum of margin violations has to be optimized. Many other learners internally optimize either a least-squares or a maximum likelihood problem. Second, the performance of most machine learning algorithms depends on a set of hyper-parameters, and an optimization has to be conducted in order to find the best performing model. Unfortunately, there is no globally accepted optimization algorithm for hyper-parameter tuning problems, and in practice naive algorithms like random or grid search are frequently used.

    On the other hand, some optimization algorithms rely on machine learning models. They are called model-based optimization algorithms and are mostly used to solve expensive optimization problems. During the optimization, the model is iteratively refined and exploited. One of the most challenging tasks here is the choice of the model class: it has to be applicable to the particular parameter space of the optimization problem and to be well suited for modeling the function's landscape. In this work, I gave special attention to the multi-objective case. In contrast to the single-objective case, where a single best solution is likely to exist, all possible trade-offs between the objectives have to be considered. Hence, not a single best solution but a set of best solutions exists, one for each trade-off. Although approaches for solving multi-objective problems differ from the corresponding approaches for single-objective problems in some parts, other parts can remain unchanged. This is shown for model-based multi-objective optimization algorithms.

    The last third of this work addresses the field of offline algorithm selection. In online algorithm selection, the best algorithm for a problem is selected while solving it. By contrast, offline algorithm selection guesses the best algorithm a priori. Again, the work focuses on the multi-objective case: an algorithm has to be selected with respect to multiple, conflicting objectives. As with all offline techniques, this selection rule has to be trained on a set of available training data sets and can only be applied to new data sets that are similar enough to those in the training set.
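
    The model-based optimization loop described above (fit a model to the evaluated points, refine it after every expensive evaluation, and exploit it to propose the next point) can be illustrated with a small sketch, assuming scikit-learn is available; the test function, the lower-confidence-bound selection rule, and all settings are illustrative choices, not the dissertation's method.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def expensive(x):                      # stand-in for an expensive objective
            return float((x - 0.3) ** 2 + 0.1 * np.sin(20 * x))

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, size=(5, 1))                 # initial design
        y = np.array([expensive(x[0]) for x in X])

        for _ in range(20):                                # budget of expensive evaluations
            model = GaussianProcessRegressor().fit(X, y)   # refine the surrogate model
            cand = rng.uniform(0, 1, size=(200, 1))        # cheap candidate points
            mean, std = model.predict(cand, return_std=True)
            x_next = cand[np.argmin(mean - std)]           # exploit a lower confidence bound
            X = np.vstack([X, [x_next]])
            y = np.append(y, expensive(x_next[0]))

        print("best point:", X[np.argmin(y)], "value:", y.min())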