COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting
We introduce COCO, an open-source platform for Comparing Continuous Optimizers in a black-box setting. COCO aims at automating, to the greatest possible extent, the tedious and repetitive task of benchmarking numerical optimization algorithms. The platform and the underlying methodology allow deterministic and stochastic solvers, for both single- and multiobjective optimization, to be benchmarked within the same framework. We present the rationale behind the (decade-long) development of the platform as a general proposition for guidelines towards better benchmarking. We detail fundamental concepts of COCO such as the definition of a problem as a function instance, the underlying idea of instances, the use of target values, and runtime, defined as the number of function calls, as the central performance measure. Finally, we give a quick overview of the basic code structure and the currently available test suites.
Comment: Optimization Methods and Software, Taylor & Francis, in press, pp. 1-3
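To make these concepts concrete, here is a minimal sketch of how such an experiment looks with COCO's Python experimentation module cocoex, modeled on COCO's example experiment; the choice of scipy's fmin as solver and the "demo" result folder are arbitrary placeholders, not prescribed by the platform.

```python
# Minimal sketch of a COCO experiment with the cocoex module; the solver
# (scipy's fmin) and the result folder name are arbitrary placeholders.
import cocoex                    # COCO experimentation module
import scipy.optimize            # any black-box solver can be plugged in here

suite = cocoex.Suite("bbob", "", "")                       # single-objective test suite
observer = cocoex.Observer("bbob", "result_folder: demo")  # logs runtimes for postprocessing

for problem in suite:            # each problem is one function instance
    problem.observe_with(observer)
    # runtime is counted in function calls made through `problem`
    scipy.optimize.fmin(problem, problem.initial_solution, disp=False)
```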
Optimizing non-pharmaceutical intervention strategies against COVID-19 using artificial intelligence
One key task in the early fight against the COVID-19 pandemic was to plan non-pharmaceutical interventions to reduce the spread of the infection while limiting the burden on society and the economy. With more data on the pandemic being generated, it became possible to model both the infection trends and the intervention costs, turning the creation of an intervention plan into a computational optimization problem. This paper proposes a framework developed to help policy-makers plan the best combination of non-pharmaceutical interventions and to change them over time. We developed a hybrid machine-learning epidemiological model to forecast the infection trends, aggregated the socio-economic costs from the literature and expert knowledge, and used a multi-objective optimization algorithm to find and evaluate various intervention plans. The framework is modular and easily adjustable to a real-world situation; it is trained and tested on data collected from almost all countries in the world, and its proposed intervention plans generally outperform those used in real life in terms of both the number of infections and the intervention costs.
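As a purely illustrative sketch of the bi-objective formulation described above (forecast infections versus aggregated intervention cost), the snippet below uses hypothetical stand-ins (predict_infections, COST_PER_LEVEL, n_interventions) for the paper's hybrid forecasting model and cost data.

```python
# Illustrative sketch only: all names below are hypothetical stand-ins for the
# paper's machine-learning forecaster and socio-economic cost aggregation.
import numpy as np

n_interventions = 8                        # e.g. school closing, travel bans, ...
COST_PER_LEVEL = np.ones(n_interventions)  # placeholder cost weight per intervention

def predict_infections(plan):
    """Stand-in for the hybrid ML-epidemiological forecaster."""
    return float(np.exp(-plan.sum()))      # toy model: stricter plans -> fewer infections

def objectives(plan):
    """Bi-objective formulation: (forecast infections, intervention cost)."""
    return predict_infections(plan), float(COST_PER_LEVEL @ plan)
```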
Benchmarking MO-CMA-ES and COMO-CMA-ES on the Bi-objective bbob-biobj Testbed
In this paper, we propose a comparative benchmark of MO-CMA-ES, COMO-CMA-ES (recently introduced in [12]), and NSGA-II, using the COCO framework for performance assessment and the bi-objective test suite bbob-biobj. For a fixed number of points p, COMO-CMA-ES approximates an optimal p-distribution of the hypervolume indicator. While not designed to perform well under archive-based assessment, i.e. with respect to all points evaluated so far by the algorithm, COMO-CMA-ES behaves well on the COCO platform. The experiments are done in a true black-box spirit by using a minimal setting with respect to the information shared by the 55 problems of the bbob-biobj testbed.
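The hypervolume indicator that COMO-CMA-ES aims to maximize has a simple form in the bi-objective case: the area dominated by a point set relative to a reference point. The following is a minimal sketch under the minimization convention with illustrative data, not the implementation used in the paper.

```python
# Minimal sketch of the 2-D hypervolume indicator (minimization convention);
# the reference point and the data points are illustrative only.
def hypervolume_2d(points, ref):
    """Area dominated by a non-dominated 2-D point set, bounded above by `ref`."""
    hv, prev_f1 = 0.0, ref[0]
    for f1, f2 in sorted(points, reverse=True):  # sweep by decreasing first objective
        hv += (prev_f1 - f1) * (ref[1] - f2)     # add the new vertical slab
        prev_f1 = f1
    return hv

print(hypervolume_2d([(1.0, 3.0), (2.0, 1.0)], ref=(4.0, 4.0)))  # -> 7.0
```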
Benchmarking Algorithms from the platypus Framework on the Biobjective bbob-biobj Testbed
One of the main goals of the COCO platform is to produce, collect, and make available benchmarking performance data sets of optimization algorithms and, more concretely, of algorithm implementations. For the recently proposed biobjective bbob-biobj test suite, fewer than 20 algorithms have been benchmarked so far, but many more are available to the public. In this paper we therefore aim to benchmark several available multiobjective optimization algorithms on the bbob-biobj test suite and discuss their performance. We focus on algorithms implemented in the platypus framework (in Python), whose main advantage is its ease of use without the need to set up many algorithm parameters.
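As an illustration of this ease of use, the sketch below mirrors Platypus's own getting-started example: an algorithm runs with its default parameterization in a few lines. DTLZ2 is a built-in Platypus test problem used purely for illustration here, not a bbob-biobj function.

```python
# Sketch of running an algorithm from the platypus framework with defaults;
# DTLZ2 is a built-in test problem, not part of the bbob-biobj suite.
from platypus import NSGAII, DTLZ2

problem = DTLZ2()              # built-in bi-objective test problem
algorithm = NSGAII(problem)    # default parameterization, no tuning required
algorithm.run(10000)           # budget in function evaluations

for solution in algorithm.result:
    print(solution.objectives)
```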
Mixed-Integer Benchmark Problems for Single- and Bi-Objective Optimization
Submitted to GECCO 2019. We introduce two suites of mixed-integer benchmark problems to be used for analyzing and comparing black-box optimization algorithms. They contain problems of diverse difficulties that are scalable in the number of decision variables. The bbob-mixint suite is designed by partially discretizing the established BBOB (Black-Box Optimization Benchmarking) problems. The bi-objective problems from the bbob-biobj-mixint suite are, on the other hand, constructed by using the bbob-mixint functions as their separate objectives. We explain the rationale behind our design decisions and show how to use the suites within the COCO (Comparing Continuous Optimizers) platform. Analyzing two chosen functions in more detail, we also provide some unexpected findings about their properties.
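To illustrate the idea of partial discretization, a simple wrapper can snap a chosen subset of variables to integer values before evaluating a continuous function. This is only a conceptual sketch; the actual bbob-mixint construction in COCO differs in its details.

```python
# Conceptual sketch of "partial discretization": round a chosen subset of
# variables before evaluating a continuous function. Not the exact bbob-mixint
# construction, which fixes per-variable discretization and scaling details.
import numpy as np

def partially_discretize(f, integer_indices):
    """Return a mixed-integer version of `f` by rounding selected variables."""
    def f_mixint(x):
        x = np.asarray(x, dtype=float).copy()
        x[integer_indices] = np.round(x[integer_indices])  # snap to integers
        return f(x)
    return f_mixint

sphere = lambda x: float(np.sum(x ** 2))          # stand-in for a BBOB function
mixed_sphere = partially_discretize(sphere, integer_indices=[0, 2])
print(mixed_sphere([0.4, 0.3, 1.6]))              # first and third variable rounded
```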