8 research outputs found
Mixed-Integer Benchmark Problems for Single- and Bi-Objective Optimization
Submitted to GECCO 2019. We introduce two suites of mixed-integer benchmark problems to be used for analyzing and comparing black-box optimization algorithms. They contain problems of diverse difficulties that are scalable in the number of decision variables. The bbob-mixint suite is designed by partially discretizing the established BBOB (Black-Box Optimization Benchmarking) problems. The bi-objective problems from the bbob-biobj-mixint suite are, on the other hand, constructed by using the bbob-mixint functions as their separate objectives. We explain the rationale behind our design decisions and show how to use the suites within the COCO (Comparing Continuous Optimizers) platform. Analyzing two chosen functions in more detail, we also provide some unexpected findings about their properties.
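The core construction, partially discretizing a continuous problem, can be sketched in a few lines: snap a subset of the decision variables to integers before handing the point to the underlying continuous function. This is a simplified illustration only; the actual bbob-mixint suite assigns specific arities per variable and is accessed through the COCO platform, and the function names below are ours, not COCO's.

```python
def partially_discretize(f, num_int):
    """Wrap a continuous objective f so that its first num_int variables
    are snapped to the nearest integer before evaluation (illustration only)."""
    def wrapped(x):
        z = [round(xi) if i < num_int else xi for i, xi in enumerate(x)]
        return f(z)
    return wrapped

# Toy continuous objective standing in for a BBOB function (assumption).
def sphere(x):
    return sum(xi * xi for xi in x)

# First two coordinates become integer-valued, the third stays continuous.
mixed_sphere = partially_discretize(sphere, num_int=2)
print(mixed_sphere([0.4, 1.6, 0.5]))  # evaluates sphere([0, 2, 0.5]) -> 4.25
```

A bi-objective mixed-integer problem in the spirit of bbob-biobj-mixint would then simply pair two such wrapped functions as separate objectives.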
Multi-surrogate Assisted Efficient Global Optimization for Discrete Problems
Decades of progress in simulation-based surrogate-assisted optimization and
unprecedented growth in computational power have enabled researchers and
practitioners to optimize previously intractable complex engineering problems.
This paper investigates the possible benefit of a concurrent utilization of
multiple simulation-based surrogate models to solve complex discrete
optimization problems. To this end, the so-called Self-Adaptive
Multi-surrogate Assisted Efficient Global Optimization algorithm (SAMA-DiEGO),
which features a two-stage online model management strategy, is proposed and
further benchmarked on fifteen binary-encoded combinatorial and fifteen ordinal
problems against several state-of-the-art non-surrogate or single surrogate
assisted optimization algorithms. Our findings indicate that SAMA-DiEGO can
rapidly converge to better solutions on a majority of the test problems, which
demonstrates the feasibility and advantage of using multiple surrogate models
for discrete optimization.
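The idea of concurrently maintaining multiple surrogates can be illustrated at a toy level: fit several cheap models to the evaluated points, keep whichever has the lowest leave-one-out error, and let it rank candidate points. All names below are illustrative, and SAMA-DiEGO's actual two-stage online model management (and its acquisition criteria) is considerably more involved; for brevity this sketch greedily takes the surrogate minimum instead of an expected-improvement step.

```python
def fit_nn(xs, ys):
    """1-nearest-neighbor surrogate over scalar inputs."""
    def predict(x):
        i = min(range(len(xs)), key=lambda j: abs(xs[j] - x))
        return ys[i]
    return predict

def fit_linear(xs, ys):
    """Ordinary least-squares line as a second, competing surrogate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def loo_error(fit, xs, ys):
    """Leave-one-out mean absolute error of a surrogate factory."""
    err = 0.0
    for i in range(len(xs)):
        xr, yr = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        err += abs(fit(xr, yr)(xs[i]) - ys[i])
    return err / len(xs)

xs = [0, 1, 2, 3, 4]
ys = [x * x for x in xs]          # stands in for expensive black-box evaluations
best = min([fit_nn, fit_linear], key=lambda f: loo_error(f, xs, ys))
model = best(xs, ys)
cand = min([1.5, 2.5, 3.5], key=model)  # candidate the chosen surrogate prefers
```

The point of the sketch is only the selection step: the surrogate that generalizes best on the data seen so far is the one allowed to propose the next expensive evaluation.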
IOHanalyzer: Performance Analysis for Iterative Optimization Heuristics
Benchmarking and performance analysis play an important role in understanding
the behaviour of iterative optimization heuristics (IOHs) such as local search
algorithms, genetic and evolutionary algorithms, Bayesian optimization
algorithms, etc. This task, however, involves manual setup, execution, and
analysis of the experiment on an individual basis, which is laborious and can
be mitigated by a generic and well-designed platform. For this purpose, we
propose IOHanalyzer, a new user-friendly tool for the analysis, comparison, and
visualization of performance data of IOHs.
Implemented in R and C++, IOHanalyzer is fully open source. It is available
on CRAN and GitHub. IOHanalyzer provides detailed statistics about fixed-target
running times and about fixed-budget performance of the benchmarked algorithms
on real-valued, single-objective optimization tasks. Performance aggregation
over several benchmark problems is possible, for example in the form of
empirical cumulative distribution functions. Key advantages of IOHanalyzer over
other performance analysis packages are its highly interactive design, which
allows users to specify the performance measures, ranges, and granularity that
are most useful for their experiments, and the possibility to analyze not only
performance traces, but also the evolution of dynamic state parameters.
IOHanalyzer can directly process performance data from the main benchmarking
platforms, including the COCO platform, Nevergrad, and our own IOHexperimenter.
An R programming interface is provided for users who prefer finer control
over the implemented functionalities.
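The fixed-target statistics and ECDF aggregation mentioned above rest on a simple computation: for each run, find the first evaluation at which the best-so-far value reaches a target, then report, for each budget, the fraction of runs that got there in time. A minimal sketch follows; it is not IOHanalyzer's implementation, and the run data is made up.

```python
def first_hit(trace, target):
    """1-based evaluation count at which the best-so-far value first
    reaches `target` (minimization); None if the run never reaches it."""
    best = float("inf")
    for t, v in enumerate(trace, start=1):
        best = min(best, v)
        if best <= target:
            return t
    return None

def ecdf(hits, budget):
    """Fraction of runs hitting the target within each budget b = 1..budget."""
    return [sum(1 for h in hits if h is not None and h <= b) / len(hits)
            for b in range(1, budget + 1)]

# Three hypothetical runs of a minimizer (objective value per evaluation).
runs = [[9, 4, 2, 1], [7, 6, 5, 3], [8, 2, 2, 2]]
hits = [first_hit(r, target=3) for r in runs]   # -> [3, 4, 2]
print(ecdf(hits, budget=4))                     # -> [0.0, 1/3, 2/3, 1.0]
```

Aggregating such curves over many problems and targets yields the empirical cumulative distribution functions the abstract refers to.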
Marginal Probability-Based Integer Handling for CMA-ES Tackling Single- and Multi-Objective Mixed-Integer Black-Box Optimization
This study targets the mixed-integer black-box optimization (MI-BBO) problem
where continuous and integer variables should be optimized simultaneously. The
CMA-ES, our focus in this study, is a population-based stochastic search method
that samples solution candidates from a multivariate Gaussian distribution
(MGD), which shows excellent performance in continuous BBO. The parameters of
MGD, mean and (co)variance, are updated based on the evaluation value of
candidate solutions in the CMA-ES. If the CMA-ES is applied to the MI-BBO with
straightforward discretization, however, the variance corresponding to the
integer variables becomes much smaller than the granularity of the
discretization before reaching the optimal solution, which leads to the
stagnation of the optimization. In particular, when binary variables are
included in the problem, this stagnation is more likely to occur because the
granularity of the discretization is coarser, and the existing integer
handling for the CMA-ES does not address this stagnation. To overcome these
limitations, we propose a simple integer handling for the CMA-ES based on
lower-bounding the marginal probabilities associated with the generation of
integer variables in the MGD. The numerical experiments on the MI-BBO benchmark
problems demonstrate the efficiency and robustness of the proposed method.
Furthermore, in order to demonstrate the generality of the idea of the proposed
method, in addition to the single-objective optimization case, we incorporate
it into multi-objective CMA-ES and verify its performance on bi-objective
mixed-integer benchmark problems.
Comment: Camera-ready version for ACM Transactions on Evolutionary Learning
and Optimization (TELO). This paper is an extended version of the work
presented in arXiv:2205.1348
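The stagnation mechanism and the lower-bounding idea can be made concrete for a single integer-discretized coordinate with unit granularity: once the marginal standard deviation is far smaller than the rounding granularity, the probability of sampling any integer other than the one nearest the mean collapses to essentially zero. A hedged one-coordinate illustration follows; the paper's actual correction operates inside CMA-ES on the mean and covariance, and the simple multiplicative inflation loop below is our simplification, not the proposed update rule.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mutation_prob(m, s):
    """P[round(X) != round(m)] for X ~ N(m, s^2), unit granularity."""
    lo, hi = round(m) - 0.5, round(m) + 0.5
    return phi((lo - m) / s) + (1.0 - phi((hi - m) / s))

def enforce_lower_bound(m, s, alpha=0.2):
    """Inflate the marginal std until the probability of generating a
    different integer is at least alpha (illustrative loop, not the
    paper's update; terminates since the probability tends to 1)."""
    while mutation_prob(m, s) < alpha:
        s *= 1.1
    return s

# With s = 0.01 the integer coordinate is effectively frozen at 3 ...
print(mutation_prob(3.1, 0.01))
# ... so the lower bound forces s up until integer mutations stay possible.
s = enforce_lower_bound(m=3.1, s=0.01, alpha=0.2)
```

This captures why plain discretization stalls (the frozen case above) and what lower-bounding the marginal probability buys: the search distribution always retains a non-negligible chance of proposing neighboring integer values.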