A Review of Surrogate Assisted Multiobjective Evolutionary Algorithms
Multiobjective evolutionary algorithms have incorporated surrogate models to reduce the number of evaluations required to approximate the Pareto front of computationally expensive multiobjective optimization problems. To date, few works have reviewed the state of the art on this topic, and the existing reviews have focused on classifying evolutionary multiobjective optimization algorithms with respect to the type of underlying surrogate model. In this paper, we instead classify multiobjective evolutionary algorithms with respect to how they are integrated with surrogate models. This perspective has allowed us to group similar approaches and to identify the advantages and disadvantages of each class.
Scalarizing Functions in Bayesian Multiobjective Optimization
Scalarizing functions have been widely used to convert a multiobjective optimization problem into a single-objective optimization problem. However, their use in solving (computationally) expensive multi- and many-objective optimization problems via Bayesian multiobjective optimization is scarce. Scalarizing functions can play a crucial role in the quality of the optimization and the number of evaluations it requires. In this article, we study and review 15 different scalarizing functions in the framework of Bayesian multiobjective optimization and build Gaussian process models (as surrogates, metamodels, or emulators) on them. We use expected improvement as the infill criterion (or acquisition function) to update the models. In particular, we compare the different scalarizing functions and analyze their performance on several benchmark problems with different numbers of objectives. The review and experiments provide useful insights for selecting a scalarizing function when using a Bayesian multiobjective optimization method.
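As a rough illustration of the pipeline described in this abstract, the following sketch builds a Gaussian process model on scalarized objective values and computes expected improvement as the infill criterion. The augmented Tchebycheff scalarization (one of the commonly compared scalarizing functions), the RBF kernel, and all hyperparameter values are illustrative assumptions, not details taken from the paper:

```python
import numpy as np
from math import erf, sqrt, pi

def tchebycheff(F, weights, ideal, rho=0.05):
    # Augmented Tchebycheff scalarization (illustrative choice of function).
    d = F - ideal
    return np.max(weights * d, axis=1) + rho * np.sum(weights * d, axis=1)

def rbf_kernel(A, B, ls=0.3):
    # Squared-exponential kernel with unit signal variance (assumed values).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_fit_predict(X, y, Xs, noise=1e-6):
    # Zero-mean GP regression: posterior mean and std at test points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf_kernel(Xs, X)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - (v ** 2).sum(axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, y_best):
    # EI for minimization, used as the infill (acquisition) criterion.
    z = (y_best - mu) / sigma
    Phi = 0.5 * (1.0 + np.array([erf(t / sqrt(2)) for t in z]))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (y_best - mu) * Phi + sigma * phi
```

In a Bayesian loop, one would scalarize the evaluated objective vectors, fit the GP on the scalar values, maximize EI over candidates to pick the next expensive evaluation, and repeat; the choice of scalarizing function changes the landscape the GP must model, which is precisely what the paper studies.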
Evolutionary Multiobjective Optimization Driven by Generative Adversarial Networks (GANs)
Recently, an increasing number of works have proposed driving evolutionary algorithms with machine learning models. Usually, the performance of such model-based evolutionary algorithms depends heavily on how well the adopted models are trained. Since model training usually requires a certain amount of data (i.e., the candidate solutions generated by the algorithm), performance deteriorates rapidly as the problem scale increases, due to the curse of dimensionality. To address this issue, we propose a multiobjective evolutionary algorithm driven by generative adversarial networks (GANs). At each generation of the proposed algorithm, the parent solutions are first classified into real and fake samples to train the GAN; the offspring solutions are then sampled from the trained GAN. Thanks to the powerful generative ability of GANs, the proposed algorithm is capable of generating promising offspring solutions in a high-dimensional decision space with limited training data. The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables, and the experimental results demonstrate its effectiveness.
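The per-generation labeling step described in this abstract, classifying parent solutions into real and fake samples, can be sketched as follows. The abstract does not spell out the criterion, so labeling nondominated parents as "real" and dominated ones as "fake" is an assumption made here for illustration; the GAN training and offspring sampling themselves are omitted:

```python
import numpy as np

def dominates(a, b):
    # True if objective vector a Pareto-dominates b (minimization).
    return bool(np.all(a <= b) and np.any(a < b))

def label_real_fake(F):
    # Label each parent's objective vector as 'real' (True) if it is
    # nondominated within the population, 'fake' (False) otherwise.
    # NOTE: this nondominated/dominated split is an assumed criterion,
    # not one stated in the abstract.
    n = len(F)
    return np.array([not any(dominates(F[j], F[i]) for j in range(n) if j != i)
                     for i in range(n)])
```

The resulting boolean labels would serve as the real/fake targets when training the discriminator, after which the generator is sampled to produce offspring in the decision space.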
Fast calculation of multiobjective probability of improvement and expected improvement criteria for Pareto optimization
The use of surrogate-based optimization (SBO) is widespread in engineering design as a way to reduce the number of computationally expensive simulations. However, "real-world" problems often involve multiple, conflicting objectives, leading to a set of competing solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, but a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which the designer can use to make more informed design decisions (instead of weighting and aggregating the costs upfront). Most work in multiobjective optimization has focused on multiobjective evolutionary algorithms (MOEAs). While MOEAs are well suited to handling large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted multiobjective surrogate-based optimization, may prove even more worthwhile than single-objective SBO methods for expediting the optimization of computationally expensive systems. In this paper, the authors propose the efficient multiobjective optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the probability of improvement and expected improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II, SPEA2, and SMS-EMOA multiobjective optimization methods.
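The multiobjective probability of improvement used by the EMO algorithm can be illustrated with a naive Monte Carlo estimate: draw samples from the Kriging model's Gaussian predictive distribution at a candidate point and count how often a sample is not dominated by the current Pareto front. This sketch only defines the quantity being computed; the paper's contribution is a fast exact calculation, which is not reproduced here:

```python
import numpy as np

def mc_multiobjective_poi(mu, sigma, pareto_front, n_samples=20000, rng=None):
    # Monte Carlo estimate of the multiobjective probability of improvement:
    # the probability that a candidate's predicted objective vector (modeled
    # as independent Gaussians with mean mu and std sigma per objective) is
    # not dominated by any current Pareto-front point (minimization).
    rng = np.random.default_rng(rng)
    samples = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    dominated = np.zeros(n_samples, dtype=bool)
    for p in pareto_front:
        # p dominates a sample if p is no worse everywhere and better somewhere.
        dominated |= np.all(p <= samples, axis=1) & np.any(p < samples, axis=1)
    return 1.0 - dominated.mean()
```

In an infill loop, the candidate maximizing this probability (or a related expected-improvement variant) would be the next point sent to the expensive simulator; the Monte Carlo form is far too slow in practice, which motivates the fast analytical formulas the paper derives.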
Which Surrogate Works for Empirical Performance Modelling? A Case Study with Differential Evolution
It is not uncommon for meta-heuristic algorithms to contain intrinsic parameters whose optimal configuration is crucial for achieving peak performance. However, evaluating the effectiveness of a configuration is expensive, as it involves many costly runs of the target algorithm. Perhaps surprisingly, it is possible to build a cheap-to-evaluate surrogate that models the algorithm's empirical performance as a function of its parameters. Such surrogates constitute an important building block for understanding algorithm performance, for algorithm portfolio/selection, and for automatic algorithm configuration. In principle, many off-the-shelf machine learning techniques can be used to build surrogates. In this paper, we take differential evolution (DE) as the baseline algorithm for a proof-of-concept study. Regression models are trained to predict DE's empirical performance given a parameter configuration. In particular, we evaluate and compare four popular regression algorithms, both in terms of how well they predict the empirical performance for a particular parameter configuration and how well they approximate the parameter-versus-performance landscape.
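A minimal sketch of the surrogate idea above: fit a regression model mapping a DE parameter configuration (here, the scale factor F and crossover rate CR) to a performance value. The quadratic polynomial basis and the synthetic performance data are assumptions for illustration only; the paper compares four regression algorithms trained on costly real DE runs:

```python
import numpy as np

def poly_features(P):
    # Quadratic basis over the DE parameters (F, CR) -- an assumed,
    # deliberately simple surrogate family for illustration.
    f, cr = P[:, 0], P[:, 1]
    return np.column_stack([np.ones(len(P)), f, cr, f * cr, f ** 2, cr ** 2])

def fit_surrogate(P, y):
    # Least-squares fit of performance y against the polynomial features.
    X = poly_features(P)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict(coef, P):
    # Cheap-to-evaluate prediction of empirical performance.
    return poly_features(P) @ coef
```

Once fitted, the surrogate replaces costly runs of the target algorithm when exploring the parameter landscape, which is what makes surrogate-assisted configuration and landscape analysis tractable.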