Multiplicative Approximations, Optimal Hypervolume Distributions, and the Choice of the Reference Point
Many optimization problems arising in applications have to consider several
objective functions at the same time. Evolutionary algorithms seem to be a very
natural choice for dealing with multi-objective problems as the population of
such an algorithm can be used to represent the trade-offs with respect to the
given objective functions. In this paper, we contribute to the theoretical
understanding of evolutionary algorithms for multi-objective problems. We
consider indicator-based algorithms whose goal is to maximize the hypervolume
for a given problem by distributing μ points on the Pareto front. To gain
new theoretical insights into the behavior of hypervolume-based algorithms we
compare their optimization goal to the goal of achieving an optimal
multiplicative approximation ratio. Our studies are carried out for different
Pareto front shapes of bi-objective problems. For the class of linear fronts
and a class of convex fronts, we prove that maximizing the hypervolume gives
the best possible approximation ratio when assuming that the extreme points
have to be included in both distributions of the points on the Pareto front.
Furthermore, we investigate the effect of the choice of the reference point on
the approximation behavior of hypervolume-based approaches and examine Pareto
fronts of different shapes by numerical calculations.
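The hypervolume quantity that these distributions maximize can be computed directly in the bi-objective case. Below is a minimal sketch for minimization problems, assuming a reference point dominated by all points of interest; function and variable names are illustrative, not from the paper:

```python
# Minimal sketch: hypervolume of a bi-objective point set (both objectives
# minimized) with respect to a reference point. Illustrative only.

def hypervolume_2d(points, ref):
    """Area jointly dominated by `points` and bounded by the reference point."""
    # Keep only points that strictly dominate the reference point.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:                 # f1 is increasing after sorting
        if f2 < prev_f2:               # skip points dominated within the set
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hypervolume_2d(front, (5.0, 5.0)))  # → 11.0
```

Moving the reference point changes how much area each extreme point contributes, which is exactly the sensitivity the paper examines.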
Biobjective Performance Assessment with the COCO Platform
This document details the rationales behind assessing the performance of
numerical black-box optimizers on multi-objective problems within the COCO
platform and in particular on the biobjective test suite bbob-biobj. The
evaluation is based on the hypervolume of all non-dominated solutions in the
archive of candidate solutions and measures the runtime until the hypervolume
value exceeds prescribed target values.
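The runtime-to-target idea can be sketched as follows. This is a toy illustration of the measurement principle, not the actual COCO API:

```python
# Toy sketch of COCO's anytime performance measure: for each prescribed
# target, record the first evaluation at which the hypervolume of the archive
# of non-dominated solutions reaches it. Illustrative, not COCO code.

def first_hit_times(hv_history, targets):
    """hv_history[i]: archive hypervolume after evaluation i+1 (non-decreasing)."""
    return {
        t: next((i + 1 for i, hv in enumerate(hv_history) if hv >= t), None)
        for t in targets
    }

history = [0.0, 2.5, 2.5, 7.0, 9.5]
print(first_hit_times(history, [1.0, 5.0, 20.0]))
# → {1.0: 2, 5.0: 4, 20.0: None}
```

Unreached targets map to `None`, which corresponds to the unsuccessful runs that budget-based performance assessment must also account for.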
Hypervolume-based Multi-objective Bayesian Optimization with Student-t Processes
Student-t processes have recently been proposed as an appealing alternative
non-parametric function prior. They feature enhanced flexibility and
predictive variance. In this work the use of Student-t processes is explored
for multi-objective Bayesian optimization. In particular, an analytical
expression for the hypervolume-based probability of improvement is developed
for independent Student-t process priors of the objectives. Its effectiveness
is shown on a multi-objective optimization problem which is known to be
difficult with traditional Gaussian processes.
Comment: 5 pages, 3 figures
A Study of Archiving Strategies in Multi-Objective PSO for Molecular Docking
Molecular docking is a complex optimization problem aimed at predicting the position of a ligand molecule in the active site of a receptor with the lowest binding energy. This problem can be formulated as a bi-objective optimization problem by minimizing the binding energy and the Root Mean Square Deviation (RMSD) difference in the coordinates of ligands. In this context, the SMPSO multi-objective swarm-intelligence algorithm has shown a remarkable performance. SMPSO is characterized by having an external archive used to store the non-dominated solutions and also as the basis of the leader selection strategy. In this paper, we analyze several SMPSO variants based on different archiving strategies in the scope of a benchmark of molecular docking instances. Our study reveals that SMPSOhv, which uses a hypervolume-contribution-based archive, shows the overall best performance.
(Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.)
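The hypervolume-contribution archiving idea behind SMPSOhv can be sketched for the bi-objective case as follows. This is a simplified illustration, not the SMPSO/jMetal implementation, and all names are made up:

```python
# Simplified sketch of a bounded non-dominated archive pruned by hypervolume
# contribution, in the spirit of SMPSOhv (bi-objective minimization).
# Illustrative only; not the actual SMPSO/jMetal code.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def hv_contributions_2d(front, ref):
    """Exclusive hypervolume contribution of each point of a 2-D front."""
    pts = sorted(front)
    contrib = {}
    for i, (f1, f2) in enumerate(pts):
        right_f1 = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        upper_f2 = pts[i - 1][1] if i > 0 else ref[1]
        contrib[(f1, f2)] = (right_f1 - f1) * (upper_f2 - f2)
    return contrib

def archive_add(archive, p, capacity, ref):
    """Insert p, drop dominated points, and prune the smallest contributor."""
    if any(q == p or dominates(q, p) for q in archive):
        return archive                          # p is dominated or duplicated
    archive = [q for q in archive if not dominates(p, q)] + [p]
    if len(archive) > capacity:
        contrib = hv_contributions_2d(archive, ref)
        archive.remove(min(archive, key=lambda q: contrib[q]))
    return archive

arch = []
for p in [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.5, 1.2)]:
    arch = archive_add(arch, p, capacity=3, ref=(5.0, 5.0))
print(sorted(arch))  # → [(1.0, 4.0), (2.0, 2.0), (3.5, 1.2)]
```

When the archive overflows, the point whose removal loses the least hypervolume is discarded, which keeps the stored front well spread; the same archive then supplies the leaders for the swarm.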
Efficient Computation of Expected Hypervolume Improvement Using Box Decomposition Algorithms
In the field of multi-objective optimization algorithms, multi-objective
Bayesian Global Optimization (MOBGO) is an important branch, in addition to
evolutionary multi-objective optimization algorithms (EMOAs). MOBGO utilizes
Gaussian Process models learned from previous objective function evaluations to
decide the next evaluation site by maximizing or minimizing an infill
criterion. A common criterion in MOBGO is the Expected Hypervolume Improvement
(EHVI), which shows a good performance on a wide range of problems, with
respect to exploration and exploitation. However, so far it has been a
challenge to calculate exact EHVI values efficiently. In this paper, an
efficient algorithm for the computation of the exact EHVI for a generic case is
proposed. This efficient algorithm is based on partitioning the integration
volume into a set of axis-parallel slices. Theoretically, the upper bound time
complexities are improved from previously O(n^2) and O(n^3), for two- and
three-objective problems respectively, to Θ(n log n), which is
asymptotically optimal. This article generalizes the scheme to the
higher-dimensional case by utilizing a new hyperbox decomposition technique,
which was proposed by Dächert et al., EJOR, 2017. It also utilizes a generalization of
the multilayered integration scheme that scales linearly in the number of
hyperboxes of the decomposition. The speed comparison shows that the proposed
algorithm in this paper significantly reduces computation time. Finally, this
decomposition technique is applied in the calculation of the Probability of
Improvement (PoI).
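For intuition, the quantity whose exact computation the paper accelerates can be estimated by naive Monte Carlo. The sketch below assumes a bi-objective minimization problem with independent Gaussian predictive marginals per objective; it is a baseline for illustration only, far slower than the exact box-decomposition algorithm:

```python
# Naive Monte Carlo estimate of the Expected Hypervolume Improvement (EHVI)
# for a bi-objective minimization problem with independent Gaussian
# predictive marginals. Illustrative baseline only; the paper computes EHVI
# exactly (and far faster) via box decomposition.
import random

def hypervolume_2d(points, ref):
    """Area jointly dominated by `points` and bounded by `ref` (minimization)."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def ehvi_mc(front, ref, mean, std, n=20000, seed=1):
    """Estimate E[HV(front + {y}) - HV(front)], y drawn per-objective Gaussian."""
    rng = random.Random(seed)
    base = hypervolume_2d(front, ref)
    total = 0.0
    for _ in range(n):
        y = (rng.gauss(mean[0], std[0]), rng.gauss(mean[1], std[1]))
        total += hypervolume_2d(front + [y], ref) - base
    return total / n

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(ehvi_mc(front, ref=(5.0, 5.0), mean=(1.5, 1.5), std=(0.5, 0.5)))
```

Each sample costs a full hypervolume evaluation and the estimate carries Monte Carlo noise, which is why exact, asymptotically optimal algorithms such as the one proposed here matter when EHVI is maximized inside an inner optimization loop.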
