Approximating the least hypervolume contributor: NP-hard in general, but fast in practice
The hypervolume indicator is an increasingly popular set measure to compare
the quality of two Pareto sets. The basic ingredient of most hypervolume-indicator-based optimization algorithms is the calculation of the hypervolume contribution of single solutions relative to a Pareto set. We show that the exact
calculation of the hypervolume contribution is #P-hard while its approximation
is NP-hard. The same holds for the calculation of the minimal contribution. We
also prove that it is NP-hard to decide whether a solution has the least
hypervolume contribution. Even deciding whether the contribution of a solution
is at most (1+ε) times the minimal contribution is NP-hard. This implies that it is neither possible to efficiently find the least contributing solution (unless P = NP) nor to approximate it (unless NP = BPP).
Nevertheless, in the second part of the paper we present a fast approximation
algorithm for this problem. We prove that for any given ε, δ > 0 it calculates a solution with contribution at most (1+ε) times the minimal contribution with probability at least 1 − δ. Though it cannot run in
polynomial time for all instances, it performs extremely fast on various
benchmark datasets. The algorithm solves very large problem instances which are
intractable for exact algorithms (e.g., 10000 solutions in 100 dimensions)
within a few seconds.
Comment: 22 pages, to appear in Theoretical Computer Science.
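To make the quantity concrete, here is a minimal Monte Carlo sketch of the hypervolume contribution of a single solution. Minimization is assumed, and the function name hv_contribution_mc, the reference point and the example front are illustrative, not taken from the paper: it estimates the volume dominated exclusively by one point, which is the quantity the approximation algorithm above bounds to within a factor of (1+ε).

import numpy as np

def hv_contribution_mc(point, others, ref, n_samples=100_000, seed=None):
    """Monte Carlo estimate of the hypervolume contribution of `point`
    (minimization): the volume dominated by `point` but by no other
    solution, inside the region bounded by the reference point `ref`."""
    rng = np.random.default_rng(seed)
    point, others, ref = np.asarray(point), np.asarray(others), np.asarray(ref)
    # Sample uniformly inside the box [point, ref] that `point` dominates.
    samples = rng.uniform(point, ref, size=(n_samples, point.size))
    # A sample belongs to the exclusive contribution of `point` if no other
    # solution dominates it (componentwise <=).
    dominated_elsewhere = (others[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
    return np.prod(ref - point) * np.mean(~dominated_elsewhere)

# Illustrative 2-D example: contribution of the middle point of a small front.
front = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]])
print(hv_contribution_mc(front[1], np.delete(front, 1, axis=0), ref=[1.0, 1.0]))

The paper's algorithm is more refined than this plain estimator, but the quantity it approximates is the same.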
Fast calculation of multiobjective probability of improvement and expected improvement criteria for Pareto optimization
The use of surrogate-based optimization (SBO) is widespread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives, leading to a set of competitive solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of weighting and aggregating the costs upfront). Most of the work in multiobjective optimization is focused on multiobjective evolutionary algorithms (MOEAs). While MOEAs are well suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as multiobjective surrogate-based optimization, may prove to be even more worthwhile than SBO methods for expediting the optimization of computationally expensive systems. In this paper, the authors propose the efficient multiobjective optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the probability of improvement and expected improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II, SPEA2 and SMS-EMOA multiobjective optimization methods.
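As an illustration of what such a multiobjective criterion looks like, the sketch below computes a bi-objective probability of improvement in closed form, assuming the Kriging models predict each objective independently as a Gaussian and that minimization is intended. The function name poi_biobjective and the example numbers are placeholders; the exact criteria used in the paper may be defined differently.

import numpy as np
from scipy.stats import norm

def poi_biobjective(mu, sigma, pareto_front):
    """Probability that a candidate with independent Gaussian objective
    predictions N(mu[i], sigma[i]^2) falls in the region not dominated by
    the current Pareto front (bi-objective minimization). Assumes the front
    contains only mutually non-dominated points; the non-dominated region is
    cut into vertical strips at the front's first-objective values."""
    front = np.asarray(pareto_front, dtype=float)
    front = front[np.argsort(front[:, 0])]                    # f1 ascending, hence f2 descending
    a = np.concatenate(([-np.inf], front[:, 0], [np.inf]))    # strip boundaries along f1
    b = np.concatenate(([np.inf], front[:, 1]))               # f2 ceiling of each strip
    cdf1 = norm.cdf(a, loc=mu[0], scale=sigma[0])
    cdf2 = norm.cdf(b, loc=mu[1], scale=sigma[1])
    # Sum over strips: P(F1 in strip) * P(F2 below that strip's ceiling).
    return float(np.sum((cdf1[1:] - cdf1[:-1]) * cdf2))

# Candidate predicted slightly below the middle of a three-point front.
front = [[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]]
print(poi_biobjective(mu=[0.4, 0.45], sigma=[0.05, 0.05], pareto_front=front))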
Towards efficient multiobjective optimization: multiobjective statistical criterions
The use of Surrogate Based Optimization (SBO) is widespread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives, leading to a set of equivalent solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of making those decisions upfront). Most of the work in multiobjective optimization is focused on MultiObjective Evolutionary Algorithms (MOEAs). While MOEAs are well suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as MultiObjective Surrogate-Based Optimization (MOSBO), may prove to be even more worthwhile than SBO methods for expediting the optimization process. In this paper, the authors propose the Efficient Multiobjective Optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the expected improvement and probability of improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II and SPEA2 multiobjective optimization methods, with promising results.
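This abstract and the preceding one describe the same surrogate-assisted workflow, so a single generic sketch of a multiobjective surrogate-based optimization loop is given here, using scikit-learn Gaussian process regressors as the Kriging models. The helper names (pareto_mask, mosbo_loop), the candidate-pool design and the choice of infill criterion are assumptions for illustration, not the authors' EMO implementation.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def pareto_mask(Y):
    """Mask of non-dominated rows of the objective matrix Y (minimization)."""
    return np.array([not np.any(np.all(Y <= y, axis=1) & np.any(Y < y, axis=1)) for y in Y])

def mosbo_loop(evaluate, X_init, Y_init, candidates, infill, n_iter=20):
    """Generic MOSBO loop: fit one Kriging (GP) model per objective, pick the
    candidate that maximizes the infill criterion, evaluate it expensively, repeat."""
    X, Y = np.asarray(X_init, float), np.asarray(Y_init, float)
    for _ in range(n_iter):
        models = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, Y[:, j])
                  for j in range(Y.shape[1])]
        front = Y[pareto_mask(Y)]
        scores = []
        for x in candidates:
            mu, sigma = [], []
            for m in models:
                mean, std = m.predict(x.reshape(1, -1), return_std=True)
                mu.append(mean[0])
                sigma.append(max(std[0], 1e-12))   # guard against zero predictive variance
            scores.append(infill(mu, sigma, front))
        x_next = candidates[int(np.argmax(scores))]
        X = np.vstack([X, x_next])
        Y = np.vstack([Y, evaluate(x_next)])
    return X, Y, Y[pareto_mask(Y)]

With two objectives, poi_biobjective from the previous sketch can serve as the infill argument; the EMO algorithm instead plugs in its multiobjective expected improvement and probability of improvement criteria.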
Efficient Computation of Expected Hypervolume Improvement Using Box Decomposition Algorithms
In the field of multi-objective optimization algorithms, multi-objective
Bayesian Global Optimization (MOBGO) is an important branch, in addition to
evolutionary multi-objective optimization algorithms (EMOAs). MOBGO utilizes
Gaussian Process models learned from previous objective function evaluations to
decide the next evaluation site by maximizing or minimizing an infill
criterion. A common criterion in MOBGO is the Expected Hypervolume Improvement
(EHVI), which shows good performance on a wide range of problems with respect to exploration and exploitation. However, so far it has been a
challenge to calculate exact EHVI values efficiently. In this paper, an
efficient algorithm for the computation of the exact EHVI for a generic case is
proposed. This efficient algorithm is based on partitioning the integration
volume into a set of axis-parallel slices. Theoretically, the upper-bound time complexities for two- and three-objective problems are improved from the best previously known bounds to Θ(n log n), which is asymptotically optimal. This article generalizes the scheme to the higher-dimensional case by utilizing a new hyperbox decomposition technique, which was proposed by Dächert et al., EJOR, 2017. It also utilizes a generalization of
the multilayered integration scheme that scales linearly in the number of
hyperboxes of the decomposition. The speed comparison shows that the proposed
algorithm in this paper significantly reduces computation time. Finally, this
decomposition technique is applied to the calculation of the Probability of Improvement (PoI).
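To contrast the exact approach with the naive alternative, the sketch below estimates the bi-objective EHVI by brute force: sample objective vectors from (assumed independent) Gaussian predictions and average the resulting hypervolume improvement over the current front. This Monte Carlo baseline, with hypothetical helpers hypervolume_2d and ehvi_mc, illustrates the quantity being computed; it is not the slice-based exact algorithm of the paper.

import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of 2-D points w.r.t. `ref` (minimization).
    Dominated or out-of-range points are skipped, so the input need not be a
    clean Pareto front."""
    pts = np.asarray(points, float)
    pts = pts[(pts[:, 0] < ref[0]) & (pts[:, 1] < ref[1])]
    pts = pts[np.argsort(pts[:, 0])]                 # first objective ascending
    hv, ceiling = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < ceiling:                             # this point adds a new strip
            hv += (ref[0] - f1) * (ceiling - f2)
            ceiling = f2
    return hv

def ehvi_mc(mu, sigma, front, ref, n_samples=20_000, seed=None):
    """Monte Carlo estimate of the Expected Hypervolume Improvement of a
    candidate with independent Gaussian objective predictions (minimization)."""
    rng = np.random.default_rng(seed)
    base = hypervolume_2d(front, ref)
    samples = rng.normal(mu, sigma, size=(n_samples, 2))
    gains = [hypervolume_2d(np.vstack([front, s]), ref) - base for s in samples]
    return float(np.mean(gains))

front = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]])
print(ehvi_mc(mu=[0.35, 0.35], sigma=[0.05, 0.05], front=front, ref=[1.0, 1.0]))

In higher dimensions the non-dominated region no longer reduces to a simple staircase, which is where the hyperbox decomposition of Dächert et al. enters.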