Non-elitist Evolutionary Multi-objective Optimizers Revisited
Since around 2000, it has been widely believed that elitist evolutionary
multi-objective optimization algorithms (EMOAs) always outperform non-elitist
EMOAs. This paper revisits the performance of non-elitist EMOAs for
bi-objective continuous optimization when an unbounded external archive is used.
It examines the performance of EMOAs under two elitist and one non-elitist
environmental selection. Performance is evaluated on the bi-objective BBOB
problem suite provided by the COCO platform. In contrast to conventional
wisdom, the results show that non-elitist EMOAs with particular crossover
methods perform remarkably well on the bi-objective BBOB problems with many
decision variables when the unbounded external archive is used.
This paper also analyzes the properties of the non-elitist selection.
Comment: This is an accepted version of a paper published in the proceedings of GECCO 201
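The combination described above can be sketched in a few lines: the population undergoes purely non-elitist (generational) replacement, while an unbounded external archive retains every non-dominated solution found so far. This is an illustrative toy sketch on a hypothetical bi-objective problem, not the paper's actual algorithm or benchmark:

```python
import numpy as np

def dominates(a, b):
    # a dominates b (minimization) iff a <= b componentwise and a != b
    return np.all(a <= b) and np.any(a < b)

def update_archive(archive, f_new):
    # unbounded external archive: keep every non-dominated objective vector
    if any(dominates(f, f_new) for f in archive):
        return archive
    return [f for f in archive if not dominates(f_new, f)] + [f_new]

# hypothetical bi-objective toy problem: f1 = x^2, f2 = (x - 2)^2
rng = np.random.default_rng(1)
pop = rng.uniform(-1, 3, size=10)
archive = []
for _ in range(30):                                    # generations
    for x in pop:
        archive = update_archive(archive, np.array([x**2, (x - 2)**2]))
    # non-elitist replacement: offspring fully replace parents, no survivors
    pop = np.clip(pop + rng.normal(0.0, 0.3, size=pop.size), -1, 3)
```

The point of the sketch is that non-elitist replacement loses good parents by design, but the archive (not the population) is what is finally reported, so no non-dominated solution is ever lost.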
Solving the G-problems in less than 500 iterations: Improved efficient constrained optimization by surrogate modeling and adaptive parameter control
Constrained optimization of high-dimensional numerical problems plays an
important role in many scientific and industrial applications. Function
evaluations in many industrial applications are severely limited and no
analytical information about objective function and constraint functions is
available. For such expensive black-box optimization tasks, the constrained
optimization algorithm COBRA was proposed, which uses RBF surrogate models for
both the objective and the constraint functions. COBRA has shown remarkable
success in reliably solving complex benchmark problems in fewer than 500
function evaluations. Unfortunately, it requires careful parameter adjustment
in order to do so.
In this work we present SACOBRA, a new self-adjusting algorithm based on COBRA
that achieves high-quality results with very few function evaluations and no
parameter tuning. Performance profiles on a set of benchmark problems
(G-problems, MOPTA08) show that SACOBRA consistently outperforms any COBRA
variant with a fixed parameter setting. We analyze the importance of the new
elements in SACOBRA and find that each of them contributes to the overall
optimization performance. We discuss the reasons behind this and thereby gain a
better understanding of high-quality RBF surrogate modeling.
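The core idea of surrogate-assisted constrained optimization can be sketched as follows. This is a minimal toy loop, not COBRA or SACOBRA: it fits cubic RBF interpolants to the objective and the constraint, searches the cheap surrogates for a promising feasible candidate, and spends one true function evaluation per iteration. The objective `f` and constraint `g` here are hypothetical stand-ins for an expensive black box:

```python
import numpy as np

def fit_rbf(X, y):
    # cubic RBF interpolant s(x) = sum_i w_i * ||x - x_i||^3
    # (simplified sketch: no polynomial tail, small ridge for stability)
    R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1) ** 3
    w = np.linalg.solve(R + 1e-8 * np.eye(len(X)), y)
    return lambda x: np.linalg.norm(x - X, axis=-1) ** 3 @ w

# hypothetical expensive black box: minimize f subject to g(x) <= 0
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
g = lambda x: x[0] + x[1] - 1.0

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(8, 2))            # initial design
F = np.array([f(x) for x in X])
G = np.array([g(x) for x in X])

for _ in range(20):                            # surrogate-assisted iterations
    sf, sg = fit_rbf(X, F), fit_rbf(X, G)
    cand = rng.uniform(-2, 2, size=(2000, 2))  # cheap search on the surrogates
    vals = np.array([sf(c) for c in cand])
    vals[np.array([sg(c) for c in cand]) > 0] = np.inf  # surrogate-infeasible
    x_new = cand[np.argmin(vals)]
    X = np.vstack([X, x_new])                  # one true evaluation per loop
    F = np.append(F, f(x_new))
    G = np.append(G, g(x_new))

feasible = G <= 1e-9
best = F[feasible].min() if feasible.any() else None
```

The budget discipline is the point: all exploration happens on the surrogates, and the expensive functions are queried only once per iteration, which is how methods of this family stay under a few hundred true evaluations.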
A Black-Box Discrete Optimization Benchmarking (BB-DOB) Pipeline Survey: Taxonomy, Evaluation, and Ranking
This paper provides a taxonomical survey of the classes of discrete optimization challenges found in the literature, together with a proposed benchmarking pipeline inspired by previous computational optimization competitions. A Black-Box Discrete Optimization Benchmarking (BB-DOB) perspective is thereby presented for the BB-DOB@GECCO Workshop. The paper motivates why certain classes, together with their properties (such as deception, separability, or a toy-problem label), should be included in the perspective. Moreover, it discusses guidelines on how to select significant instances within these classes, the design-of-experiments setup, performance measures, and presentation methods and formats.
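The ranking step of such a benchmarking pipeline can be sketched in a few lines: rank each algorithm per benchmark instance, then aggregate by mean rank across instances. The result matrix below is entirely hypothetical and is not data from the workshop:

```python
import numpy as np

# hypothetical results: rows = benchmark instances, columns = algorithms,
# entries = best objective value reached (lower is better)
results = np.array([
    [0.10, 0.30, 0.20],
    [1.50, 1.10, 1.20],
    [0.05, 0.07, 0.06],
    [2.00, 2.50, 2.40],
])
algos = ["A", "B", "C"]

# per-instance ranks (1 = best), via the double-argsort idiom
ranks = results.argsort(axis=1).argsort(axis=1) + 1
mean_rank = ranks.mean(axis=0)                 # aggregate across instances
ranking = [algos[i] for i in np.argsort(mean_rank)]
# ranking is ["A", "C", "B"]: A wins 3 of 4 instances, B only 1
```

Mean rank is only one possible aggregation; a full pipeline would also consider ties, statistical significance, and measures such as performance profiles.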